Will we look back on 2008 as the year that virtualization technology started to become the norm for commodity-based SQL Server installations? My sense is that virtual SQL Server is still seen as a niche solution in many areas of the SQL Server community, and it tends to be avoided when performance is the absolute bottom line. But will recent changes to Microsoft’s licensing, along with the availability of quad-core servers, shift price/performance dynamics to the point where virtualization becomes cheaper and faster than raw iron (i.e., a physical server with no virtualization involved)?

As you might know, licensing for the Enterprise Editions of Microsoft’s OSs now allows an unlimited number of guest virtual machines (VMs) to run on a single Microsoft Virtual Server host, as long as the host machine is properly licensed. Couple this change with per-processor licensing on quad-core servers, which charges by the socket, not the core, and you can theoretically license an unlimited number of virtual quad-core SQL Servers for the price of a single per-processor license.

My experience is that SQL Servers tend to be memory and I/O constrained before they’re CPU constrained. I’m not saying that all SQL Server workloads stress memory or disk over CPU; it’s simply an observation that most SQL Server sites I’ve worked on have ample processor capacity, much of it sitting idle at any given time, and more often than not bump up against memory and disk limits first. Quad-core servers will only make this imbalance worse, but perhaps that’s the tipping point for virtualization.

You have to admit that it’s intriguing to think about what you could achieve with a single processor license for SQL Server running on a high-end quad-core box with 32GB of RAM or more, attached to a decent SAN. Let’s say you need 10 SQL Server instances and might otherwise have been looking at raw iron. Let’s further assume that none of the installations would regularly come close to stressing the CPUs of the raw iron servers. Would raw iron theoretically be faster? Sure, but going with the Enterprise Editions of Windows and SQL Server in this case might save you a substantial amount in licensing costs, and it’s very possible that those software savings could be invested in memory, CPU, and I/O upgrades, so that the net price/performance of 10 virtual instances of SQL Server on a single box is vastly better than that of 10 raw iron systems.
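To make the licensing arithmetic concrete, here’s a back-of-the-envelope sketch in Python. Every dollar figure in it is a hypothetical placeholder, not an actual Microsoft list price, and it assumes the per-socket licensing model described above: one per-processor SQL Server license plus one host OS license covering all of the guest VMs on a single-socket quad-core box.

```python
# Hypothetical comparison: 10 SQL Server instances on raw iron vs. 10 VMs
# consolidated on one virtualized quad-core host.
# All prices are illustrative placeholders, NOT actual Microsoft list prices.
# Assumes per-processor licensing charges by the socket, not the core.

SQL_PER_SOCKET = 25_000   # hypothetical per-socket SQL Server license price
WIN_ENTERPRISE = 4_000    # hypothetical Windows Server Enterprise price
INSTANCES = 10

# Raw iron: each instance needs its own single-socket server,
# its own OS license, and its own per-socket SQL Server license.
raw_iron_cost = INSTANCES * (SQL_PER_SOCKET + WIN_ENTERPRISE)

# Virtualized: one single-socket quad-core host; the host's licenses
# cover all guest VMs, so one SQL license and one OS license suffice.
virtualized_cost = SQL_PER_SOCKET + WIN_ENTERPRISE

savings = raw_iron_cost - virtualized_cost
print(f"raw iron: ${raw_iron_cost:,}")
print(f"virtualized: ${virtualized_cost:,}")
print(f"software savings available for hardware: ${savings:,}")
```

Under these made-up numbers, the savings are roughly nine full server licenses’ worth of budget that could instead go toward memory, CPU, and I/O upgrades on the single consolidated box.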

I don’t think Microsoft, or any other vendor, has published comprehensive price/performance data comparing raw iron against virtual servers since quad-core servers came out and Microsoft changed its licensing. Let me know if I’m wrong and tests like that are out there; I’d love to read them and will be sure to pass along the links.

The theoretical benefits of virtualization are so great that I think it’s simply a matter of time before virtual SQL Servers become the norm rather than the exception. We’re certainly not there yet, but it’s starting to feel closer and closer. What do you think?