I was having dinner with a well-known SQL Server MVP a few weeks ago, and we began to opine about the future of performance tuning in the SQL Server space. Specifically, we were talking about what might happen when it’s possible to build a database server with 1TB of solid state disk (SSD) space for less than $10K. One question we discussed boiled down to, “Will we even need performance tuning experts at that point?”

I remember sitting in a conference more than 10 years ago—I’m thinking that it was for the launch of SQL Server 7.0, but I might be wrong—when Jim Gray posed the question, “What will happen when we can build commodity database servers with 1TB of disk storage for less than $10,000 USD?” Flash forward a decade, and it’s not inconceivable to find 1TB of traditional disk storage for $100 if you find a decent sale. Heck, I was chatting with someone last week who mentioned that he was having a really hard time moving a big database between two servers because of a firewall and some connectivity issues. I’m grossly oversimplifying the conversation here, but the solution boiled down to someone finding a 1TB USB drive lying around and then moving the backup using Sneakernet rather than Ethernet. We’re clearly in the age of $10,000 1TB database servers. But I’m wondering what’s changed, other than the fact that we can solve business needs that require vastly more data. That’s pretty cool, but it feels more evolutionary than disruptive.

What will happen when $10K buys us a server with 1TB of SSD storage? And to sweeten the pot, let’s say that same $10K also gets us 1TB of memory. What will that look like? Will we still need performance tuning experts? Part of me thinks that surely the world won’t need performance tuning experts if hardware is so massively fast. Then again, I wonder whether it’s the nature of technology that our data and software will also bloat until it hurts. Last week, I saw a table scan of a 400,000,000-row fact table. Ten years ago, it might have been a 40,000,000-row table. Will data and feature bloat lead to 40-billion-row fact tables once servers become cheap enough that we can fit everything in main memory, or at least have I/O served up by SSDs?

Here’s what I think. Servers with 1TB of memory and 1TB of SSD will solve a heck of a lot of the common performance problems we see today. We’ll still have certain classes of performance problems, but I’m not sure I’d want to make my living doing nothing but performance tuning. Will tuning experts go the way of the milkman? (By the way, I know someone who literally gets their milk daily from a milkman. It’s organic. My guess is the cows probably went to Harvard, and I’m sure they speak Moo plus at least three other languages, but still: it’s door-to-door milk delivery from a genuine milkman.) So I’m not saying performance tuning experts will go away entirely. I just think the role will look very different than it does today once $10K buys you a server with no moving parts other than perhaps a fan. (After all, we still have niche markets for milk delivery, and collectively we probably drink a heck of a lot more milk with mass-market dairy farming than we did when everyone had fresh milk sitting in an icebox on their porch in the morning.)

Perhaps the most interesting question isn’t whether we’ll need performance tuning experts when $10,000 buys you a server with 1TB of memory and 1TB of storage. Perhaps the most interesting question is this: What sort of remarkable things will we be able to do on such a server that simply aren’t possible today? Will it be cool and extraordinary? Or will it be more of the same old stuff, just with a heck of a lot more bloat?