IBM's recent stunning score of 440,879 transactions per minute (tpmC) on the Transaction Processing Performance Council's (TPC's) TPC-C benchmark adds drama and intrigue to the never-ending game of TPC-C one-upmanship. But what does IBM's score mean for SQL Server's scalability story? Let's explore what the IBM numbers mean to the SQL Server community.

First, IBM posted this record-breaking benchmark on a Windows 2000 platform. Hmmm. I thought Windows didn't scale. It's interesting that IBM picked a nonscalable platform to post the world's fastest TPC-C numbers. Past questions about SQL Server's scalability often have been tied to Windows' scalability. As other relational database management system (RDBMS) vendors begin to post high-end TPC-C scores on Windows platforms, the foundation of the "Windows doesn't scale" arguments will begin to wash away.

Second, IBM achieved its high-end TPC-C score by using a clustering architecture very similar to SQL Server 2000's distributed partitioned views feature. IBM partitioned its data across 32 SMP nodes, each running four 700MHz Pentium III processors, whereas Microsoft partitioned its database across 12 8-way SMP machines. The shared-nothing approach to database clustering, which IBM, Microsoft, and Tandem use, is noticeably different from Oracle's Parallel Server approach, which uses a Distributed Lock Manager (DLM) in a shared-disk cluster architecture. Although shared-disk environments might be easier to manage, shared-nothing clusters are inherently more scalable. Each clustering technique has its pros and cons; only time will tell which solution the market finds more appropriate. For now, IBM's scale-out strategy validates the clustering path SQL Server is following.
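To see what a shared-nothing partitioning scheme looks like in SQL Server 2000 terms, here's a minimal sketch of a distributed partitioned view. The server names (NODE1, NODE2), database (SalesDB), table names, and the CustomerID ranges are hypothetical; the pattern itself — disjoint CHECK constraints on member tables plus a UNION ALL view over linked servers — is the documented DPV approach:

```sql
-- On NODE1 (hypothetical linked server): this node owns CustomerID 1-99999.
-- The CHECK constraint is what lets the optimizer route queries to the
-- correct node instead of touching every member table.
CREATE TABLE Customers_1 (
    CustomerID   INT NOT NULL
        CHECK (CustomerID BETWEEN 1 AND 99999),
    CustomerName NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customers_1 PRIMARY KEY (CustomerID)
);
-- NODE2 holds an identically structured Customers_2 with a disjoint
-- range, e.g. CHECK (CustomerID BETWEEN 100000 AND 199999), and so on.
GO

-- The distributed partitioned view, defined on every node, presents the
-- horizontally partitioned data as a single logical table:
CREATE VIEW Customers AS
    SELECT * FROM NODE1.SalesDB.dbo.Customers_1
    UNION ALL
    SELECT * FROM NODE2.SalesDB.dbo.Customers_2;
```

Applications query (and, in SQL Server 2000, update) the Customers view as if it were one table; the engine uses the CHECK ranges to prune partitions and forward work only to the nodes that own the relevant rows. This is the shared-nothing idea in miniature: each node owns its slice of the data outright, with no distributed lock manager mediating access to shared disks.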

So far, IBM's TPC-C performance is good news for SQL Server fans. The top score proves that Windows is scalable and reinforces Microsoft's scale-out strategy. But IBM posted TPC-C numbers that almost doubled the recently withdrawn Microsoft cluster-based score of 227,079 tpmC. (For more information about Microsoft's previous TPC-C score, see "SQL Server 2000 Sets TPC-C Record," http://www.sqlmag.com/articles/index.cfm?articleid=8220. For more information about why Microsoft withdrew the scores, see "Thanks to Oracle, Microsoft Enhances Distributed Partitioned Views," http://www.sqlmag.com/articles/index.cfm?articleid=9130.) To explore the specifics of IBM's and Microsoft's configurations, you need to read the full TPC-C disclosure reports at http://www.tpc.org/. But note that you can't easily compare the scores. IBM produced its score using 32 4-way boxes running 700MHz chips for a total of 128 processors. Microsoft produced its score using 12 8-way boxes running 533MHz chips for a total of 96 processors. In addition, IBM achieved its TPC-C score at a cost of $32.28 per tpmC compared with Microsoft's $19.12 per tpmC. In other words, IBM used more processing power, at a per-transaction cost nearly 69 percent higher than the withdrawn Microsoft number.
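A quick back-of-the-envelope query makes the comparison concrete. The figures are the processor counts and price/performance numbers quoted above; nothing else is assumed:

```sql
-- Sanity-check the hardware and cost comparison from the disclosure figures
SELECT
    32 * 4 AS IBMProcessors,        -- 32 nodes x 4 CPUs = 128 processors
    12 * 8 AS MicrosoftProcessors,  -- 12 nodes x 8 CPUs = 96 processors
    -- IBM's $32.28/tpmC versus Microsoft's $19.12/tpmC:
    CAST(100.0 * (32.28 - 19.12) / 19.12 AS DECIMAL(5, 1))
        AS CostPremiumPercent;      -- about 68.8 percent higher per tpmC
```

In other words, IBM fielded a third more processors and paid a substantially higher price per transaction, which is why the raw tpmC scores can't be read as a head-to-head verdict.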

The real test will come when IBM and Microsoft publish TPC-C scores on identical platforms, but we might never see such scores. Both companies are probably running those tests right now to figure out who would win that battle, and we're likely to see published figures only if one vendor holds a strong lead in raw performance. If neither company publishes scores on identical hardware, we can infer that neither had a strong competitive advantage.

The numbers are starting to speak for themselves. Today, Win2K is a highly scalable database platform. And when Microsoft ships 32-way Datacenter servers later this year, you'll see Win2K's scalability ceiling rise dramatically.