Without a doubt, the most important new feature in SQL Server 2014 is the new In-Memory OLTP engine. With CPU speeds hitting a plateau, Microsoft has been working to increase performance through memory optimization. This trend began with the VertiPaq technology that Microsoft first released with Power Pivot in the SQL Server 2008 R2 timeframe.
Power Pivot used memory compression techniques that let Microsoft Excel perform business intelligence (BI) analyses over multi-million-row data sets. These in-memory technologies later evolved into the columnar index technology that Microsoft released with SQL Server 2012.
Unlike the original VertiPaq technology that was hosted in Excel, the SQL Server 2012 columnar index is a separate processing engine that's hosted in the SQL Server instance itself. Designed for data warehousing implementations, the columnar index provides performance improvements by compressing column data, moving it into memory, and running columnar queries over that data.
With SQL Server 2014, Microsoft has continued to evolve its in-memory technologies with the all-new In-Memory OLTP engine, which lets you move selected tables entirely into memory for high-performance, low-latency data access. I'll walk you through the In-Memory OLTP engine's basic architecture so that you can see how it differs from SQL Server 6.5's old pinned tables, as well as how it provides better performance than simply putting databases on solid state disks (SSDs). In addition, I'll tell you about the requirements and limitations for using the In-Memory OLTP engine, as well as how you can get started using it to rev up application performance.
SQL Server 2014's In-Memory OLTP engine was previously code-named Hekaton—the Greek word for "100"—because it represented Microsoft's goal of improving performance by 100x. Microsoft built this query processing engine from the ground up, using an all-new lock-free design that maintains transactional data integrity without any locks or internal latches.
The lock-free design is one of the key points that differentiates the In-Memory OLTP engine from pinning tables with DBCC PINTABLE or from putting databases on SSDs. DBCC PINTABLE holds a table in SQL Server's buffer pool, but it uses the same relational engine with its locks and latches, so it doesn't offer the scalability found in the In-Memory OLTP engine. The same is true for SSDs, which can provide higher I/O but use the same relational engine.
In contrast, the In-Memory OLTP engine uses a new optimistic multi-version concurrency control mechanism in conjunction with algorithms that have been optimized for memory-resident data. When a row in a shared buffer is modified, the In-Memory OLTP engine makes an entirely new version of that row and timestamps it. This process is very fast because it's done entirely in memory. The engine then analyzes and validates any updated rows before committing them. This design is faster and more scalable than the traditional locking mechanism used by SQL Server's relational database engine because there are no locks or other wait states that prevent the processor from running at full speed.
Because this optimistic processing creates a lot of different row versions, it leaves a number of discarded rows in memory. To handle the discarded row versions, Microsoft implemented a new lock-free garbage collection process as part of the In-Memory OLTP engine. The garbage collection process periodically cleans up all the unneeded rows.
In conjunction with the new lock-free design, Microsoft introduced a stored procedure compilation process that takes interpreted T-SQL code and compiles it into native Win64 code. The goal is to reduce the number of instructions that the CPU must execute to process the query. A shorter code path equates to faster code execution. The combination of the new query processing engine and the compiled stored procedures are the primary factors driving the high-performance In-Memory OLTP engine.
Figure 1 shows the basic architecture of the In-Memory OLTP engine.
As you can see, the engine doesn't work in isolation. Instead, it interoperates with the SQL Server relational engine, allowing you to combine memory-optimized objects and standard database objects in the same queries.
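For example, normal interpreted T-SQL can reference memory-optimized and disk-based tables in the same statement. The following sketch assumes a hypothetical memory-optimized dbo.HotOrders table and a standard disk-based dbo.Customers table:

```sql
-- Cross-container query: joins a memory-optimized table to a
-- standard disk-based table (table names are illustrative only).
SELECT c.CustomerName, o.OrderTotal
FROM dbo.HotOrders AS o      -- memory-optimized
JOIN dbo.Customers AS c      -- disk-based
  ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2014-01-01';
```

Natively compiled stored procedures, by contrast, can reference only memory-optimized tables; this kind of mixed access runs through the interpreted query path.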
Tables, Stored Procedures, and Indexes
To make the In-Memory OLTP engine work, Microsoft modified the way in which tables, stored procedures, and indexes are handled. Tables are copied wholly into memory and are made durable by transaction log writes to disk. The lock-free engine processes transactions for the memory-resident tables. Stored procedure performance is improved by compiling the stored procedures into native code. Typically, T-SQL stored procedures are interpreted, which adds overhead to the execution process. That overhead isn't such a big issue when the stored procedure is processing data from disk, but when the data is memory-resident, interpretation can hold back performance. Compiling the stored procedures to native Win64 code makes them directly executable, thereby maximizing performance and minimizing execution time.
As you might expect, indexes for memory-resident tables are different. They don't use the B-tree structures used by on-disk indexes. Instead, SQL Server 2014 uses either the new hash indexes or memory-optimized nonclustered indexes for memory-optimized tables. Hash indexes are efficient for point lookups of individual items but aren't well suited to retrieving ranges of values. Memory-optimized nonclustered indexes are best at retrieving ranges of values, and they also support retrieving the table rows in the order that was specified when the index was created.
Each memory-optimized table must have at least one index. Memory-optimized indexes must be created as a part of the CREATE TABLE statement. You can't use the CREATE INDEX statement to create an index for a memory-optimized table after the table has already been created. Memory-optimized indexes exist only in memory. The index structures aren't persisted to disk, and index operations aren't logged in the transaction log. Essentially, all memory-optimized indexes are covering indexes and all columns are included in the index. It's also important to know that memory-optimized tables can only support a maximum of eight indexes. You can find more information on memory-optimized indexes in Microsoft's Guidelines for Using Indexes on Memory-Optimized Tables.
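Because all indexes must be declared up front in the CREATE TABLE statement, a table that needs both index types declares them inline. The following sketch (table and column names are hypothetical) pairs a hash index for point lookups with a memory-optimized nonclustered index for range scans:

```sql
-- Both indexes must be declared in the CREATE TABLE statement;
-- they can't be added later. Names here are illustrative only.
CREATE TABLE dbo.OrderEvents
( EventID   int NOT NULL PRIMARY KEY
    NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072), -- point lookups
  EventDate datetime2 NOT NULL
    INDEX ix_EventDate NONCLUSTERED,                -- range scans
  Payload   varchar(200) NULL )
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```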
Hardware-wise, the In-Memory OLTP engine requires an x64 server that supports the cmpxchg16b instruction. All modern x64 processors support this instruction. The only time this might be a problem is if you implement the In-Memory OLTP engine in a 64-bit virtual machine (VM) that's using an older virtual processor. If that's the case, you'll need to update the VM's virtual processor. In addition, SQL Server needs enough memory to store all the memory-optimized tables and indexes. To make sure there's enough room for row versions, Microsoft recommends that you provide an amount of memory that's two times the on-disk size of the memory-optimized tables and indexes. The recommended maximum size for memory-optimized tables is 256GB. In addition, Microsoft recommends that you have an amount of free disk space that's two times the size of your memory-optimized tables.
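Once tables are in memory, you can check how much memory they actually consume. As a rough sketch (it assumes the current database already contains memory-optimized tables), the sys.dm_db_xtp_table_memory_stats DMV reports per-table memory use:

```sql
-- Reports memory allocated and used by each memory-optimized
-- table and its indexes in the current database.
SELECT OBJECT_NAME(object_id) AS TableName,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats;
```

Comparing these numbers against your total server memory is a quick way to validate the two-times sizing recommendation above.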
The In-Memory OLTP engine is supported on Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 SP2. It requires a 64-bit OS. In addition, you need to be using the Enterprise, Developer, or Evaluation edition of SQL Server 2014. The engine isn't supported in the SQL Server 2014 Standard edition.
When you install one of the supported SQL Server 2014 editions, you need to be certain to select Database Engine Services to install support for the In-Memory OLTP engine. Notably, the In-Memory OLTP engine supports several high availability technologies, including failover clustering, AlwaysOn Availability Groups, and log shipping.
As you might expect given its memory-intensive nature, there are a number of restrictions for using the In-Memory OLTP engine. First, not all of the SQL Server 2014 data types are supported. The unsupported data types for memory-optimized tables include:
- User data types (UDTs)
In addition, there are a number of database features that aren't supported. Here are some of the most important database and table limitations:
- Database mirroring isn't supported.
- The AUTO_CLOSE database option isn't supported.
- Database snapshots aren't supported.
- DBCC CHECKDB and DBCC CHECKTABLE don't work.
- Computed columns aren't supported.
- Triggers aren't supported.
- FOREIGN KEY, CHECK, and UNIQUE constraints aren't supported.
- IDENTITY columns aren't supported.
- FILESTREAM storage isn't supported.
- ROWGUIDCOL isn't supported.
- Clustered indexes aren't supported.
- Memory-optimized tables support a maximum of eight indexes.
- COLUMNSTORE indexes aren't supported.
- ALTER TABLE isn't supported. In-Memory OLTP tables must be dropped and re-created.
- Data compression isn't supported.
- Multiple Active Result Sets (MARS) aren't supported.
- Change Data Capture (CDC) isn't supported.
You shouldn't manually delete any checkpoint files for In-Memory OLTP database objects. SQL Server deletes all unused checkpoint files when you back up a database or log. If you don't perform backups, unused checkpoint files will accumulate and steadily consume your disk space.
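You can observe these files directly rather than guessing at disk consumption. As a rough sketch, the sys.dm_db_xtp_checkpoint_files DMV lists the checkpoint data and delta files for the current database along with their states:

```sql
-- Lists checkpoint data/delta files; files in unused states are
-- the ones SQL Server reclaims after log and database backups.
SELECT file_type_desc, state_desc, file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files;
```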
If you use In-Memory OLTP, Microsoft recommends that you enable Instant File Initialization, which is a Windows Server policy. To enable it, you need to grant the SQL Server Service startup account the SE_MANAGE_VOLUME_NAME user right. Note that the only way to remove a memory-optimized file group from a database is to drop the database. For a complete list of In-Memory OLTP limitations, see the Transact-SQL Constructs Not Supported by In-Memory OLTP and Supported SQL Server Features web pages.
Identification of Candidates for In-Memory OLTP
To help you evaluate whether the In-Memory OLTP engine will improve your database performance, Microsoft includes the new Analysis, Migrate, and Report (AMR) tool. As its name suggests, the AMR tool can help you identify the tables and stored procedures that would benefit from being moved into memory. In addition, it can help you perform the actual migration of those database objects. The AMR tool is installed when you select the Management Tools, Complete option with the SQL Server 2014 setup. You access the AMR tool through SQL Server Management Studio (SSMS).
The AMR tool bases its recommendation on your actual system workload, so using it is a multi-step process. Essentially, the process requires seven steps:
- Establish your baseline performance.
- Configure Management Data Warehouse (MDW).
- Run your production workload.
- Run AMR reports.
- Migrate your database objects.
- Run your production workload.
- Establish a new performance baseline.
Microsoft recommends that you run the AMR tool for at least one hour to capture your performance baselines. However, the longer you run the tool, the more representative your captured statistics are likely to be. When the AMR tool is monitoring performance, it collects data every 15 minutes. It stores the data in two collections named Table Usage Analysis and Stored Procedure Analysis.
Before you use the AMR tool, you must first configure MDW. To configure it, open Object Explorer in SSMS, expand Management, right-click Data Collection, select Tasks, and choose Configure Management Data Warehouse.
To start collecting data with the AMR tool, open Object Explorer, expand Management, right-click Data Collection, select Tasks, and choose Configure Data Collection. This will start the Configure Data Collection Wizard. The wizard displays a dialog box that lets you select the current SQL Server instance and an MDW database on that instance, as Figure 2 shows. After those selections are made, select the Transaction Performance Collection Sets check box and click Next. After verifying the selections, click Finish to start the data collection process.
You can generate an AMR Transaction Performance Analysis Overview report by right-clicking the MDW database, selecting Reports, choosing Management Data Warehouse, and selecting Transaction Performance Analysis Overview. This AMR report provides information about all user databases on the workload server.
Creation of In-Memory OLTP Databases
You're not restricted to running the AMR tool to create In-Memory OLTP databases or to enable the new in-memory capabilities for existing databases. To create a new database that utilizes the In-Memory OLTP engine, you can use the T-SQL CREATE DATABASE statement with the new CONTAINS MEMORY_OPTIMIZED_DATA keywords, as shown in Listing 1.
CREATE DATABASE MyIMOLTP
ON PRIMARY
(NAME = [MyIMOLTP_data],
FILENAME = 'C:\temp\MyIMOLTP_data.mdf', SIZE = 500MB),
FILEGROUP [MyIMOLTP_fg] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = [MyIMOLTP_dir],
FILENAME = 'C:\temp\MyIMOLTP_dir')
LOG ON
(NAME = [MyIMOLTP_log],
FILENAME = 'C:\temp\MyIMOLTP_log.ldf', SIZE = 500MB);
All databases that use memory-optimized data must have at least one file group that's created using the CONTAINS MEMORY_OPTIMIZED_DATA option. SQL Server 2014 uses this file group for the checkpoint and delta files needed to recover any memory-optimized tables. Note that the code in Listing 1 uses the C:\temp path. To run this code on your own system, you'd need to change this path to reflect the appropriate data and log directories in your system.
You can also add an In-Memory OLTP file group to an existing database. For example, Listing 2 shows how to add one to an existing AdventureWorks2012 database.
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP INMOLTP_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE AdventureWorks2012
ADD FILE (NAME='INMOLTP_fg', FILENAME='c:\temp\INMOLTP_fg')
TO FILEGROUP INMOLTP_fg;
Alternatively, you can use SSMS to add a file group.
In Figure 3, you can see how SSMS in SQL Server 2014 displays the file groups for a memory-optimized database.
Creation of In-Memory OLTP Tables
After you've enabled the database to support In-Memory OLTP, the next step is to create In-Memory OLTP tables. In SQL Server 2014, the CREATE TABLE statement supports the MEMORY_OPTIMIZED = ON option, which specifies that the table be memory-optimized. Specifying this option results in the creation of a table DLL, which is loaded into memory.
Listing 3 shows how to create a simple memory-optimized table.
CREATE TABLE MyINMOLTPTable
( [Col1] int not null PRIMARY KEY
NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
[Col2] varchar(20) null )
WITH (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA);
As you can see, creating a memory-optimized table is very much like creating a regular table, with a few small changes. Here, the table MyINMOLTPTable is created with two columns. The first column, Col1, is the primary key, and it uses the new hash index. The BUCKET_COUNT keyword specifies the size of the hash table. Microsoft recommends that the BUCKET_COUNT value be set to between one and two times the maximum expected number of distinct values in the index. For more information about setting the BUCKET_COUNT value, check out the Determining the Correct Bucket Count for Hash Indexes web page.
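After the table is loaded with data, you can sanity-check your BUCKET_COUNT choice. As a hedged sketch, the sys.dm_db_xtp_hash_index_stats DMV shows bucket usage and chain lengths for each hash index in the current database:

```sql
-- A high avg_chain_length or very few empty buckets suggests
-- the BUCKET_COUNT was set too low for the data volume.
SELECT OBJECT_NAME(hs.object_id) AS TableName,
       i.name AS IndexName,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON i.object_id = hs.object_id AND i.index_id = hs.index_id;
```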
The MEMORY_OPTIMIZED = ON clause in Listing 3 indicates that the table will be memory-optimized. The DURABILITY keyword specifies whether the data will be persisted in the event the server either fails or is turned off. There are two possible values:
- SCHEMA_AND_DATA. If you specify the value of SCHEMA_AND_DATA, both the schema and the data will be persisted. When you select this option, there are two possible levels of durability: full durability and delayed durability. The full durability option writes the transaction to disk, whereas the delayed durability option will first store the transaction in memory and later write it to disk. Delayed durability can improve performance but can also result in data loss in the event of a server crash or power outage. The durability level is controlled through the DELAYED_DURABILITY setting at the database level.
- SCHEMA_ONLY. If you specify the value of SCHEMA_ONLY, the data won't be persisted. Selecting this option provides the best performance. However, because the data isn't durable, this option is best for transient data or in situations where some data can be lost.
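As an illustrative sketch (the table name is hypothetical), a SCHEMA_ONLY table is a natural fit for transient data such as session state, and delayed durability is enabled with a database-level setting:

```sql
-- Transient table: the schema survives a restart, the data doesn't.
CREATE TABLE dbo.SessionState
( SessionID int NOT NULL PRIMARY KEY
    NONCLUSTERED HASH WITH (BUCKET_COUNT = 4096),
  SessionData varbinary(4000) NULL )
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Permit delayed durability for transactions in this database;
-- trades a small window of possible data loss for lower log waits.
ALTER DATABASE MyIMOLTP SET DELAYED_DURABILITY = ALLOWED;
```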
Creation of In-Memory OLTP Stored Procedures
Creating memory-optimized stored procedures is similar to creating regular stored procedures. Listing 4 shows how to create and execute a memory-optimized stored procedure.
IF EXISTS (SELECT * FROM sys.procedures
WHERE name = 'usp_DemoINMOLTPsp')
DROP PROCEDURE dbo.usp_DemoINMOLTPsp
GO
CREATE PROCEDURE dbo.usp_DemoINMOLTPsp
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL
= SNAPSHOT, LANGUAGE = N'us_english')
DECLARE @Col1ID int = 1
WHILE @Col1ID < 1000
BEGIN
INSERT INTO dbo.MyINMOLTPTable VALUES
(@Col1ID, 'Data ' + CAST(@Col1ID AS VARCHAR))
SET @Col1ID += 1
END
END
GO
EXEC dbo.usp_DemoINMOLTPsp
GO
SELECT COUNT(*) FROM dbo.MyINMOLTPTable
If you look carefully at the code, you'll notice some of the differences between creating a memory-optimized stored procedure and a regular stored procedure. The first thing you'll probably notice is that the CREATE PROCEDURE statement uses the NATIVE_COMPILATION keyword, which is new in SQL Server 2014. When you include this keyword, the stored procedure will be compiled into a native Win64 DLL that contains native processor instructions that are executed directly by the CPU, without the need for interpretation.
When you use the NATIVE_COMPILATION keyword, you must also include:
- SCHEMABINDING. The SCHEMABINDING keyword indicates that tables referenced by this stored procedure can't be dropped. This keyword is only valid for natively compiled stored procedures in SQL Server 2014.
- EXECUTE AS. This option controls the user context under which the stored procedure should be executed.
- BEGIN ATOMIC. One requirement for memory-optimized stored procedures is that the natively compiled stored procedure body must consist of exactly one atomic block, which you specify with the BEGIN ATOMIC keywords. The atomic block requires that both the TRANSACTION ISOLATION LEVEL and LANGUAGE options be specified. In addition, any tables that are referenced must use a two-part naming scheme.
There are a number of T-SQL features that aren't supported by memory-optimized stored procedures. For a list of those features, see the Transact-SQL Constructs Not Supported by In-Memory OLTP web page. For more information about how to create memory-optimized stored procedures, check out the Supported Constructs in Natively Compiled Stored Procedures web page.
Warp Speed, Scotty!
SQL Server 2014's In-Memory OLTP engine will take SQL Server OLTP performance to places where it has never gone before. Put simply, it's a new revolution in relational database OLTP processing that can deliver orders of magnitude better performance for business-critical databases. For a great detailed look inside the architecture of SQL Server 2014's In-Memory OLTP engine, check out Kalen Delaney's white paper "SQL Server In-Memory OLTP Internals Overview for CTP1." You also might want to dive into the "Hekaton: SQL Server's Memory-Optimized OLTP Engine" research paper, which was written by the Microsoft Research Hekaton team. In addition, you should check out the SQL Server 2014 CTP2 In-Memory OLTP Sample on CodePlex.