Earlier this year I blogged about how SQL Server Standard artificially imposes a 64GB cap on maximum memory per instance, and I outlined some ways to potentially work around that limit in specialized consolidation scenarios. Unfortunately, pre-release documentation suggests that SQL Server 2014 may keep that artificial 64GB-per-instance limit (along with a 16-core maximum) for SQL Server Standard. If so, those artificial limits will feel more and more oppressive over SQL Server 2014's full lifecycle, as servers keep getting more and more powerful out of the box. Likewise, as I argued earlier in the year (and still believe now), artificially restricting SQL Server Standard to 64GB of memory per instance reinforces the perception that Microsoft is greedy and grasping.
If my "greedy and grasping" assessment seems a bit harsh, then consider two things.
First, consider the state of modern hardware today versus where it was, say, 5 or 10 years ago. Today all but the most meager entry-level servers can accommodate up to 192GB or 256GB of RAM out of the box (basically where higher-end boxes topped out 5 or 10 years ago). RAM prices, while they continue to fluctuate, are also at a sweet spot where dropping significant amounts of RAM into a server remains one of the most cost-effective ways to address workload requirements. Likewise, processing power has gone through the roof in the past 5 to 10 years, with increasing numbers of cores crammed into single sockets—and corresponding NUMA architectures designed to give those cores highly optimized access to dedicated memory.
Second, consider that Microsoft pointed out these drastic changes in computing power in the past 5 to 10 years when the company made the argument for switching SQL Server licensing from a per-socket model to a per-core model. And while I doubt anyone was really excited about those pricing increases, the reality is that Microsoft has always tried to license SQL Server against the amount of computing power being leveraged. So, in the company's defense, a switch from per-socket licensing to per-core licensing was only fair.
What's not fair is to assume that a 16-core server running SQL Server Standard only needs (or can only use) up to 4GB of memory per core. Given server hardware configurations today, that idea is patently ludicrous. It also cuts against the spirit of SQL Server licensing as it has existed in the past, where customers paid to license the amount of computing power they were actually harnessing—because paying per core without being able to fully use each core is no bargain for customers. That, in turn, makes it feel like Microsoft is being entirely too one-sided in its licensing requirements and restrictions, and is trying a bit too hard to force customers to jump to Enterprise Edition licenses just to take advantage of modern hardware architectures and capabilities.
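To make the arithmetic concrete, here's a minimal sketch of the mismatch. The 64GB cap and 16-core maximum come from Standard edition's limits discussed above; the 256GB figure is the entry-level server capacity mentioned earlier (these are illustrative numbers, not a licensing calculator):

```python
# How Standard edition's 64GB cap divides across its 16 licensable cores,
# versus what a modern entry-level server actually offers per core.
STANDARD_MAX_MEMORY_GB = 64   # Standard edition per-instance memory cap
STANDARD_MAX_CORES = 16       # Standard edition core limit
SERVER_RAM_GB = 256           # typical entry-level server capacity today

capped_per_core = STANDARD_MAX_MEMORY_GB / STANDARD_MAX_CORES
hardware_per_core = SERVER_RAM_GB / STANDARD_MAX_CORES
stranded_ram = SERVER_RAM_GB - STANDARD_MAX_MEMORY_GB

print(f"Memory per core under the cap:  {capped_per_core:.0f} GB")   # 4 GB
print(f"Memory per core in the box:     {hardware_per_core:.0f} GB")  # 16 GB
print(f"RAM a maxed-out instance can't touch: {stranded_ram} GB")     # 192 GB
```

In other words, on a fully licensed 16-core Standard instance, three quarters of an ordinary 256GB server's RAM simply sits idle.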
Hopefully, though, I'm just overreacting, and before SQL Server 2014 goes to RTM we'll see a jump to at least 128GB of supported RAM for SQL Server 2014 Standard. Anything less would just be disappointing.
Related: New Features in SQL Server 2014