At PASS Summit 2009, Tom Casey, the Microsoft General Manager for SQL Server Business Intelligence chatted with Michael Otey and me about the new editions for SQL Server 2008 R2 and what Microsoft is doing to make it easier for developers and DBAs to try SQL Azure.

Michael Otey: Can you fill us in on the new SQL Server 2008 R2 Datacenter edition?

Tom Casey: People were telling us they wanted it. People wanted more capabilities and differentiation at the high end. Virtualization and consolidation are a big theme in IT right now. People are trying to get the utilization of CPUs up to about 50 percent in the datacenter. This is an important task for many, including Microsoft’s own IT department. They’re looking at deploying much more massive infrastructure and having fewer physical units to manage and deploy. The feedback around Windows Server’s work with Hyper-V and virtualization was so positive that people were saying “Help me do the same class of things with the database as well.”

What is the cloud, really? A cloud is really just a collection of compute power that’s available to you in an environment you can go to and use. But we build from the datacenter, because a well-run and virtualized datacenter is in essence a model of this capability.

Michael Otey: What differentiates the Datacenter Edition from the Enterprise Edition?

Tom Casey: We previously had a physical limit of about 64 logical processors. The big news is support for up to 256 logical processors. The Datacenter edition has the ability to scale as an application and multi-server management hub. You can get a lot of big apps and a lot of little apps together in an environment and make those more manageable. There will be scale limitations in the Enterprise edition relative to that kind of function. The Datacenter edition applies to the very highest workloads that an enterprise might have around big projects like consolidation.

Virtualization is definitely a feature differentiator for Datacenter Edition. Virtualization is on a lot of CIOs’ minds. Whether they’re running an occasionally used application in a virtual machine so that they can keep it offline and bring it online for the couple of months or weeks that they need it, or whether they’re trying to reclaim space for operations, the surface area within their environment has shrunk.

CIOs and IT directors view BI as an area of spend so that they can understand their data better and get more value out of what they have. We’re seeing spend on virtualization continuing to be a hot area as well; people are trying to do the same thing with their physical infrastructure. Typically virtualization is one of the top three areas of focus.

Michael Otey: I remember in the Enterprise edition that if you licensed all the processors in a physical system you could run an unlimited number of virtual instances of SQL Server. Has that changed at all with the Datacenter Edition?

Tom Casey: Yeah, it has. With the new capabilities in R2 and with going to a larger set of processors, we’ve put a couple of caps in place in the Enterprise Edition around things like the number of processors, physical memory, and virtual machines. There aren’t a lot of boxes out there that have 256 processors, so we’ve accommodated and set those limits in such a way that the majority of customers aren’t at a place where they’re going to notice a transition point unless they’re opting into much larger workloads.

Sheila Molnar: Can you tell us more about the new SQL Server 2008 R2 Parallel Data Warehouse edition?

Tom Casey: The Madison edition is now the Parallel Data Warehouse Edition of SQL Server. This is an appliance-like offering that builds on the success we’ve had with the Fast Track and reference architectures from Dell, HP, Bull, Unisys, and now IBM. These partners are signed up to deliver the Parallel Data Warehouse appliances that will have the Parallel Data Warehouse edition of SQL Server installed on them. They will be optimized for that. They’ll have the right match of hardware, software, reference guidance, configuration, etc. to go scale. People can choose what they want by the capacity that they need, and so there’s everything from a terabyte all the way up to hundreds of terabytes with the advent of Parallel Data Warehouse Edition.

Michael Otey: Customers would buy these from the vendors, correct?

Tom Casey: Yes.

Michael Otey: How do vendors differentiate these offerings from one another?

Tom Casey: Many IT shops have a preferred hardware vendor. Rather than go down the path others have, where they stick a brand on something and sell you one thing, we’re responsive to the fact that people want choice. The vendors’ normal differentiators in their service, their architecture, and the other things they provide will still apply. Within the Parallel Data Warehouse Edition these appliance vendors will offer different sets of scale-up and reference configurations that they’ll support. And then they’ll augment that with their own configuration references and services and other things that will help with implementations and optimizations for certain domains or vertical markets. We think it creates another opportunity for customers to be successful with SQL Server and these very, very high-end mission-critical workloads that the large data warehouses have become. It creates a great opportunity for partners to accelerate their existing practices or build new practices around this domain.

Sheila Molnar: Who’s the customer? Organizations who want to do hosted services in the cloud?

Tom Casey: The more typical customer would be your larger enterprises that are seeking to deploy a significant-sized data warehouse. Last time I checked, something like 85 percent of the world’s data warehouses were under 6 terabytes. Not everyone is going to need 100 terabytes of information. But very large mission-critical data warehouses do exist in places like telecommunications, for example. We see a lot of call data, individual discrete transactions that grow rapidly. In the government sector you see a lot of this where you’re tracking people and you have compliance requirements. Another area is sensor networks in energy: where you’re taking information off of oil rigs you could have thousands of feeds every second. Those things coming together and needing to be archived become really important. Many of those industries will lead the way to the largest-sized deployments. But if your business has accumulated a bunch of data in your data warehouse, you deserve great response out of your 20 or 50 terabytes as well. And that’s the reason that there are these scalable offerings, so it will scale up and down across the spectrum of what people want in the enterprise.

Sheila Molnar: Can you elaborate on the SQL Azure announcements at PASS? What do customers have to look forward to in January?

Tom Casey: The SQL Azure database that we talked about before is now feature complete and has been made available live. Customers can go take advantage of SQL Azure and Windows Azure as they’ve been doing during our beta programs, but it’s now more broadly available. In January customers who choose to do so can actually register for the service and start to pay subscriptions for the service. It moves from something that was in effect a pilot. People can get a sense of what their subscription level needs to be. Even when a subscription starts in January, we’re going to delay the billing cycle so that you can get a look at your January utilization and really understand what level of access you truly need. What size data do you need? What are your volumes and transactions, and so forth? We want customers to have insight into what their real utility will be in the cloud. I’m really excited about it. We’ve come a long way toward offering something that works in a very similar manner on premises and off premises.

Michael Otey: I’ve recently done a development article, and I’ve gotten to use SQL Azure. I got the service accounts and connected to it and connected apps that were using it as a back end. I was really impressed. I thought it would be way different and the similarity to SQL Server really struck me. It was really comfortable to use.

Tom Casey: IT professionals and developers sometimes think “there’s this new thing called the cloud and I have to completely retool myself. I feel threatened by it.” You don’t need to feel threatened and you don’t have to completely retool yourself. We talk TDS to the thing. It’s just SQL Server in the cloud to your application, for the most part. There are minor changes to revector yourself to the connection string through the URL and you’re good to go. It’s very accessible, and it’s another example of trying to help IT professionals not just do more of the same thing with less but do some different things. Deliver new experiences to people. Deliver experiences that are coherent both on and off premises.
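Casey’s point about “revectoring the connection string” can be sketched in a few lines. The server, database, and credentials below are hypothetical placeholders, but the shape of the change is what he describes: the same ODBC-style connection string, with the server swapped for the service URL (and, in SQL Azure at the time, the login qualified as user@servername):

```python
# Sketch: moving an app from on-premises SQL Server to SQL Azure typically
# means changing only the connection string. All names here are hypothetical.

def connection_string(server: str, database: str, user: str, password: str) -> str:
    """Build an ODBC-style SQL Server connection string."""
    return (
        f"Server=tcp:{server},1433;"
        f"Database={database};"
        f"User ID={user};"
        f"Password={password};"
        "Encrypt=yes;"  # SQL Azure requires encrypted connections
    )

# On premises: point at a local server.
on_prem = connection_string("sqlbox01", "Orders", "appuser", "secret")

# SQL Azure: point at the service URL; the login includes @servername.
azure = connection_string(
    "myserver.database.windows.net", "Orders", "appuser@myserver", "secret"
)

print(azure)
```

Because the wire protocol is still TDS, the application code issuing queries over that connection stays the same; only the endpoint it is “revectored” to changes.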

Michael Otey: With SQL Azure you now have what’s essentially a globally accessible database backend. That’s a new capability for most developers.

Tom Casey: It is. Remember too that it’s a complementary capability to what people have on premises. The fact that we have a common architecture, programming model, and administrative experience wherever possible between what people do on premises and what they do in the cloud is really important. It’s different than the cloud approach taken by many others, and we’ve evolved to it from when we first started on this project. We feel like we have the right thing and people like you and others have started responding. So that’s good.

Michael Otey: I was concerned when I saw the SQL Server Data Services as it was first announced. It had a different interface and connected with SOAP. I said “I don’t know about this.” So when I went into Azure I said “Well what’s this going to be like?”

Tom Casey: So you were a skeptic coming in.

Michael Otey: I was happy to see that it was so familiar.

Tom Casey: Hey! We take feedback. Never let it be said that we don’t take feedback.