Introducing SQL Server In-Memory OLTP

1. Introduction

In-Memory OLTP, also known as ‘Hekaton’ and ‘In-Memory Optimization’, is Microsoft’s latest in-memory processing technology. In-Memory OLTP is optimized for Online Transaction Processing (OLTP). It is integrated into SQL Server’s Database Engine and can be used in the exact same manner as any other Database Engine component.

In-Memory OLTP originally shipped with SQL Server 2014 and introduces two new data structures: memory-optimized tables and natively-compiled stored procedures.

Memory-optimized tables

Memory-optimized tables store their data in memory, using multiple versions of each row’s data. This technique is characterized as ‘non-blocking multi-version optimistic concurrency control’ and eliminates both locks and latches, thereby achieving significant performance advantages.

The main features of memory-optimized tables are:

  • Rows in the table are read from, and written to, memory
  • The entire table resides in memory
  • Non-blocking multi-version optimistic concurrency control
  • The option of durable & non-durable data
  • A second copy is maintained on disk for durability (if enabled)
  • Data in memory-optimized tables is only read from disk during database recovery
  • It is interoperable with disk-based tables

Natively-compiled stored procedures

A natively-compiled stored procedure is a SQL Server object that can access only memory-optimized data structures, such as memory-optimized tables and memory-optimized table variables. The main features of a natively-compiled stored procedure are:

  • It is compiled to native code (a DLL) upon its creation (interpreted stored procedures are compiled at first execution)
  • Aggressive optimizations are applied at compile time, which makes compilation slower but execution faster
  • It can only interact with memory-optimized tables
  • The call to a natively-compiled stored procedure is actually a call to its DLL entry point

2. Best-Suited Workloads

I’ve already mentioned that In-Memory OLTP in SQL Server is optimized for OLTP processing, which means that it performs best for certain specific types of workload. That does not mean it won’t perform well against other types of workload; however, Microsoft has identified the main workload areas that are most likely to see the greatest benefits from using In-Memory OLTP. This can help you identify whether you have a workload that could potentially be optimized using SQL Server’s in-memory optimization.

The table below summarizes these workload types.

| Workload Type | Examples | Main Benefits of In-Memory OLTP |
| --- | --- | --- |
| High Data Insert Rate | Smart metering; system telemetry | Eliminate contention; minimize I/O logging |
| Read Performance and Scale | Social network browsing | Eliminate contention; efficient data retrieval; minimize code execution time; CPU efficiency for scale |
| Compute-Heavy Data Processing | Manufacturing supply chains or retailers | Eliminate contention; minimize code execution time; efficient data processing |
| Low Latency | Online gaming platforms; capital markets | Eliminate contention; minimize code execution time; efficient data retrieval |
| Session State Management | Managing sessions (i.e. user requests, etc.) for heavily-visited websites | Eliminate contention; efficient data retrieval; optional I/O reduction/removal |

Table 1: Best-Suited Workload Types for In-Memory OLTP.

3. Memory-Optimized Tables

Memory-optimized tables are durable by default, meaning that in the case of a server crash or failover the data will be recovered from the transaction log. However, you can also define a memory-optimized table as non-durable. That means that transactions on these tables do not require any disk I/O; however, in the case of a server crash or failover, the data in these tables will not be recovered, because the memory buffers will have been flushed.

The decision to use durable or non-durable memory-optimized tables depends on the business requirements. For example, if you want to use a memory-optimized table as part of a data warehouse staging process, where it is emptied every day and re-populated with millions of records in order to perform computations and pass the results on to other tables, then a non-durable memory-optimized table is a reasonable choice. If, however, you have a table with permanent data that gets updated daily and you decide to migrate it to a memory-optimized table, then you should consider using a durable memory-optimized table.
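Note that before any memory-optimized table can be created, the database needs a filegroup that contains memory-optimized data. A minimal sketch, with illustrative database and path names:

```sql
-- Add a MEMORY_OPTIMIZED_DATA filegroup and a container to the database
-- ("SalesDB" and the file path are illustrative assumptions).
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_MemOpt CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = N'SalesDB_MemOpt_Container',
              FILENAME = N'C:\Data\SalesDB_MemOpt')
    TO FILEGROUP SalesDB_MemOpt;
```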

Let’s see how a disk-based table can be defined and then migrated to a memory-optimized table in SQL Server. For example, consider a table named “Product” with the following definition.
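A minimal sketch of such a definition follows; the column names and types are illustrative assumptions:

```sql
-- Disk-based table (column names and types are illustrative assumptions).
CREATE TABLE dbo.Product
(
    ProductID   INT            NOT NULL PRIMARY KEY,
    ProductName VARCHAR(50)    NOT NULL,
    Price       DECIMAL(10, 2) NOT NULL
);
```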

Listing 1: The Disk-Based Table “Product”.

The corresponding non-durable memory-optimized table for “Product” would be defined as below:
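This sketch reuses the illustrative columns and sizes the hash index at 1,000 buckets, i.e. twice the 500 unique products assumed later in this section:

```sql
-- Non-durable memory-optimized table: the schema is persisted, the data is not.
CREATE TABLE dbo.Product
(
    ProductID   INT            NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
    ProductName VARCHAR(50)    NOT NULL,
    Price       DECIMAL(10, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```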

Listing 2: The Memory-Optimized Non-Durable Table “Product”.

The Durability setting for the above table is set to “SCHEMA_ONLY”, meaning that only the schema of the table will be recovered in the case of a server crash or failover.

The corresponding durable memory-optimized table for “Product” would be defined as below; the only difference from the previous definition is the value “SCHEMA_AND_DATA” for the Durability setting:
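Again as a sketch, with the durability setting as the only change:

```sql
-- Durable memory-optimized table: both schema and data are recovered.
CREATE TABLE dbo.Product
(
    ProductID   INT            NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
    ProductName VARCHAR(50)    NOT NULL,
    Price       DECIMAL(10, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```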

Listing 3: The Memory-Optimized Durable Table “Product”.

As well as the “Durability” setting in the memory-optimized table definitions, you also need to take non-clustered indexes into consideration. Memory-optimized tables do not support clustered indexes, but they do support non-clustered indexes (currently up to eight per table). Along with each hash index specification you also need to specify a BUCKET_COUNT value. The recommended value is between 1.5 and 2 times the estimated number of unique values for the indexed column (500 unique products in this example). If you estimate that you are going to have large tables (i.e. with 5 million unique values or more), then you can set BUCKET_COUNT to 1.5 times the number of unique values in order to save memory. Otherwise, you can set BUCKET_COUNT to 2 times the number of unique values.

In SQL Server 2014, there was a restriction on memory-optimized tables that required every string column participating in a non-clustered index to have a *_BIN2 collation. Either the database’s default collation must be a *_BIN2 collation, or you must explicitly specify a *_BIN2 collation for the column that will participate in the index. An example is shown below:
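In the sketch below, the illustrative “ProductName” column gets an explicit *_BIN2 collation and an inline non-clustered index:

```sql
-- SQL Server 2014: a string column in an index key needs a *_BIN2 collation.
CREATE TABLE dbo.Product
(
    ProductID   INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
    ProductName VARCHAR(50) COLLATE Latin1_General_100_BIN2 NOT NULL
        INDEX IX_Product_ProductName NONCLUSTERED,
    Price       DECIMAL(10, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```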

Listing 4: Specifying a *_BIN2 Collation for the Memory-Optimized Table “Product”.

In SQL Server 2016 In-Memory OLTP, these limitations no longer exist, as there is full support for all collations.

4. Natively-Compiled Stored Procedures

Natively-compiled stored procedures can only be used for processing memory-optimized tables, together with the T-SQL constructs that are supported, such as subqueries, scalar user-defined functions, built-in math functions, etc. They are compiled to native code and execute without the need for any further compilation or interpretation.

Building on the previous examples, the listing below shows the definition of a natively-compiled stored procedure that updates the “Price” column of the memory-optimized table “Product”.
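A sketch of such a procedure follows; the procedure and parameter names are illustrative assumptions:

```sql
CREATE PROCEDURE dbo.usp_UpdateProductPrice
    @ProductID INT,
    @NewPrice  DECIMAL(10, 2)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english'
)
    -- Update the price of a single product in the memory-optimized table.
    UPDATE dbo.Product
    SET Price = @NewPrice
    WHERE ProductID = @ProductID;
END;
```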

Listing 5: The Natively-Compiled Stored Procedure for Updating the Table “Product”.

Some points to note in the above definition of the natively-compiled stored procedure are:

  • The WITH NATIVE_COMPILATION clause is used
  • The SCHEMABINDING clause is required because it binds the stored procedure to the schema of the objects it references
  • The BEGIN ATOMIC clause is required because the body of the stored procedure must consist of exactly one atomic block, which must also specify the transaction isolation level
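
Calling a natively-compiled stored procedure looks the same as calling any other stored procedure; a hypothetical invocation of the sketch above:

```sql
EXEC dbo.usp_UpdateProductPrice @ProductID = 1, @NewPrice = 9.99;
```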

5. Limitations

In-Memory OLTP had certain limitations when it originally shipped with SQL Server 2014, such as not being able to create indexes on NULLable columns or to use OUTER JOIN, UNION, DISTINCT, etc. However, in SQL Server 2016 many of these limitations have been lifted.

The table below compares the main limitations and features of In-Memory OLTP in SQL Server 2014 with those in SQL Server 2016 (CTP2).

| Feature / Limitation | SQL Server 2014 | SQL Server 2016 CTP2 |
| --- | --- | --- |
| Maximum memory for memory-optimized tables | Recommendation (not a hard limit): 256 GB | Recommendation (not a hard limit): 2 TB |
| Collation support | Must use a *_BIN2 collation for: (i) character columns used as all or part of an index key; (ii) all comparisons/sorting between character values in natively-compiled modules. Must use Latin code pages for char and varchar columns. | All collations are fully supported |
| Altering memory-optimized tables (after creation) | Not supported | Supported |
| Altering natively-compiled stored procedures | Not supported | Supported |
| Parallel plans for operations accessing memory-optimized tables | Not supported | Supported |
| Transparent Data Encryption (TDE) | Not supported | Supported |
| Language constructs in natively-compiled stored procedures: LEFT and RIGHT OUTER JOIN; SELECT DISTINCT; OR and NOT operators; subqueries in all clauses of a SELECT statement; nested stored procedure calls; UNION and UNION ALL; all built-in math functions | Not supported | Supported |
| DML triggers on memory-optimized tables | Not supported | Supported (AFTER triggers, natively-compiled) |
| Multiple Active Result Sets (MARS) | Not supported | Supported |
| Large Objects (LOBs): varchar(max), nvarchar(max), varbinary(max) | Not supported | Supported |
| Offline checkpoint threads | 1 | Multiple threads |
| Natively-compiled, scalar user-defined functions | Not supported | Supported |
| Indexes on NULLable columns | Not supported | Supported |

Table 2: Comparison of In-Memory OLTP Limitations/Features in SQL Server 2014 vs SQL Server 2016.

6. Conclusions

In this article we discussed memory-optimized tables and natively-compiled stored procedures, which are provided in SQL Server 2014 and later. These new data structures are part of SQL Server’s In-Memory OLTP engine, and they can be used to achieve significant performance gains over processing that uses “traditional” disk-based data structures.

We also discussed the workload types that can benefit the most, in terms of performance, when In-Memory OLTP is used. Furthermore, we saw simple examples of how to define memory-optimized tables and natively-compiled stored procedures. Last, we compared the limitations and features of In-Memory OLTP in SQL Server 2014 with those in SQL Server 2016 (CTP2).

As we can see, SQL Server 2016 provides broader support for In-Memory OLTP and a more seamless integration with SQL Server’s Database Engine. In-Memory OLTP is here to stay; following the momentum of its predecessor technologies (the PowerPivot/VertiPaq engine and the in-memory columnstore index), it constantly evolves, aiming to provide a powerful processing engine for intensive database operations.

