An Introduction to In-Memory OLTP Database Objects

In-memory OLTP was introduced in SQL Server 2014, but greatly improved in SQL Server 2016. If you have a heavily-used transaction-processing database that handles a lot of data, it is definitely worth considering the 'In-Memory' features of SQL Server 2016. Grant Fritchey explains why.

If your number one wait type, by far, is PAGEIOLATCH, if you find yourself relying on table variables, or if you’re collecting massive amounts of data in SQL Server through inserts, the chances are that you’ve already been considering the use of In-Memory OLTP tables and indexes. These were introduced in SQL Server 2014, and were then radically improved in SQL Server 2016. In-Memory OLTP tables (shortened from here on out to just In-Memory) are a completely different way of storing and retrieving information within SQL Server. While the name ‘table’ might suggest that these are just old-fashioned pinned tables, they are absolutely not. The one commonality between the old pinned table and the new In-Memory table is that you need enough memory to hold the data that you are attempting to put into memory. After that point, all similarities end. Let’s talk just a little bit about everything that is on offer through In-Memory tables.

Memory is generally faster than disk, even many of the newer disk technologies. This is the first attraction of In-Memory tables, but there are more. In-Memory tables use optimistic concurrency control, whereas standard SQL Server tables use pessimistic locking. Basically, the old locks and latches that were used to ensure transactional consistency are gone. In their place, Microsoft uses a form of row versioning that allows data to be read and written without taking locks or latches at all, and the absence of locks and latches results in a radical improvement in performance. This is similar to the behavior of the database when using snapshot isolation; however, it goes further and doesn’t take out locks even during data modification.

Another great thing about In-Memory tables is that you are able to choose your durability. While the In-Memory tables are just that, held entirely in your server’s memory, the data is also persisted to disk so that after a reboot or other interruption it can still be retrieved. However, you can choose to have non-durable tables, where the possibility of real data loss exists. This increases the speed of the In-Memory table even more. Again, it does mean the possibility of data loss, but let’s say you’re collecting thousands of data points a minute and losing some of that data won’t negatively impact the business. Being able to collect the data faster overrides any concerns about data loss.
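As a sketch (the table and column names here are hypothetical), durability is chosen per table with the DURABILITY option: SCHEMA_AND_DATA, the default, persists the rows to disk, while SCHEMA_ONLY keeps only the table definition across a restart. This assumes the database has already been enabled for In-Memory OLTP:

```sql
-- Hypothetical staging table for high-volume sensor data.
-- SCHEMA_ONLY: the table definition survives a restart, the rows do not.
CREATE TABLE dbo.SensorReadings
(
    ReadingID INT IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED,
    SensorID INT NOT NULL,
    ReadingValue DECIMAL(10, 2) NOT NULL,
    ReadingTime DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Swap SCHEMA_ONLY for SCHEMA_AND_DATA and the table becomes fully durable, at the cost of some of that extra speed.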

There are also new index types, specific to In-Memory tables. The Hash Index uses a hash table as a means of data lookup, which, when properly configured, results in radical performance improvements over traditional B+ Tree indexes within SQL Server. Improved indexing combined with the mechanics of storage and retrieval from memory add to the overall performance enhancements of the In-Memory tables.
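To illustrate (again with hypothetical names), a hash index is declared with a BUCKET_COUNT, which you should size to roughly the number of unique key values you expect; a badly undersized bucket count degrades lookups:

```sql
CREATE TABLE dbo.OrderLookup
(
    -- Hash index: excellent for point lookups on equality predicates.
    OrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    -- A nonclustered range index suits ordered scans and inequality searches.
    CustomerID INT NOT NULL
        INDEX ix_OrderLookup_CustomerID NONCLUSTERED,
    OrderDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The hash index only helps equality predicates; for range queries, the memory-optimized nonclustered index is the better choice.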

Finally, on top of everything else, there is the ability to natively compile stored procedures: any procedure that accesses only In-Memory tables can be compiled into a DLL and run from within the memory of the SQL Server engine itself, a real improvement over traditional interpreted stored procedures. This additional functionality adds to all the benefits of In-Memory OLTP objects.
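A minimal sketch of a natively compiled procedure (the procedure and table names are hypothetical, and the table referenced is assumed to be memory-optimized): native compilation requires SCHEMABINDING and an ATOMIC block that states its isolation level and language:

```sql
CREATE PROCEDURE dbo.InsertSensorReading
    @SensorID INT,
    @ReadingValue DECIMAL(10, 2)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The target must itself be a memory-optimized table.
    INSERT INTO dbo.SensorReadings (SensorID, ReadingValue, ReadingTime)
    VALUES (@SensorID, @ReadingValue, SYSDATETIME());
END;
```

Because the whole procedure is compiled to machine code, the per-statement interpretation overhead of traditional T-SQL disappears.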

Before you start transferring all your tables to In-Memory tables, be aware of the limitations.

  • First, you must have enough memory; not only for the data you’re loading into the tables, but also for the multiple versions of that data needed by the optimistic locking scheme. This can require between 3 and 5 times the storage according to Microsoft’s recommendations.
  • There are also limits on the data types supported. Most importantly, CLR data types, such as the spatial types, are not supported; there are others.
  • Finally, not all of the T-SQL language itself is supported with In-Memory tables.

All these limits can keep you from implementing In-Memory within your system. If you’re not experiencing the types of workloads suggested by Microsoft, you may not need to. The one exception here is the improvement in performance that In-Memory tables give to table variables. That could be an independent driver for migrating your database to support In-Memory tables. Note that I said “migrate the database.” I used that phrase because enabling In-Memory storage on a database changes that database permanently: the memory-optimized filegroup, once added, cannot be removed. It will always be an In-Memory database from that point forward. Be sure you need it.
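For reference, enabling a database for In-Memory OLTP means adding a memory-optimized filegroup and a container for it (the database name and file path below are placeholders):

```sql
-- Add a filegroup that can hold memory-optimized data.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_InMemory
    CONTAINS MEMORY_OPTIMIZED_DATA;

-- Add a container (a folder, not a single file) to that filegroup.
ALTER DATABASE SalesDB
    ADD FILE
    (
        NAME = 'SalesDB_InMemory_Container',
        FILENAME = 'C:\Data\SalesDB_InMemory'
    )
    TO FILEGROUP SalesDB_InMemory;
```

This is the one-way step: the filegroup stays with the database for the rest of its life.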

However, to come back to those PAGEIOLATCH waits and those table variables: assuming you have the memory to spare, the performance enhancements that In-Memory OLTP tables offer are worth every single issue that you have to deal with.

If you’re thinking about implementing In-Memory tables, and you’re looking for a reliable way of migrating your databases, Redgate SQL Compare can help. For a walkthrough on exactly how SQL Compare handles comparison and deployments of In-Memory tables, take a look at my blog post Using SQL Compare with In-Memory OLTP tables.