When SQL Server 2014 was launched in April 2014, it brought with it a much-talked-about bonus. Alongside the standard relational engine and its traditional disk-based tables, SQL Server 2014 includes In-Memory OLTP, a high-performance, memory-optimized engine that lets users create memory-optimized tables whose data resides entirely in memory.
This is not just another “DBCC PINTABLE”: memory-optimized tables and indexes are completely new data structures, and very different from their disk-based counterparts. SQL Server takes no locks or latches on these structures during reading or writing, and they have the potential to improve online transaction processing performance significantly.
In development, Microsoft code-named the project ‘Hekaton’, the Greek word for ‘100’. Their goal was to make In-Memory OLTP 100 times faster than disk-based tables. So far, they’ve actually made it around 30 times faster. Not 100, but still quite some achievement.
It gets better. As well as tables and indexes on which the engine acquires no latches or locks during concurrent access, In-Memory OLTP also provides natively compiled stored procedures.
Transact-SQL code can be migrated to natively compiled stored procedures, which are compiled down to machine code that can be executed immediately by the CPU, without the need for any further compilation or interpretation. While only a limited subset of T-SQL is supported in them at the moment, these natively compiled procedures generate far fewer CPU instructions for a given task than the equivalent interpreted T-SQL stored procedure.
All of which gives organizations the opportunity to create high speed and scalable enterprise and web-scale applications and databases.
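To make that concrete, here is a minimal sketch of what the syntax looks like, assuming a database that has already been enabled for In-Memory OLTP (that is, it has a MEMORY_OPTIMIZED_DATA filegroup); the table, column, and procedure names are hypothetical:

```sql
-- A memory-optimized table. All of its indexes must be declared here, at creation time,
-- and a durable table needs a primary key.
CREATE TABLE dbo.ShoppingCart
(
    CartId     INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    Quantity   INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure: its body is compiled to machine code when it is
-- created, rather than being interpreted at execution time.
CREATE PROCEDURE dbo.AddToCart
    @CartId INT, @CustomerId INT, @Quantity INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, Quantity)
    VALUES (@CartId, @CustomerId, @Quantity);
END;
GO
```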
Why now?
Good question. If In-Memory OLTP is this good, why have products like SQL Server previously focused on disk-based tables? The answer lies in two trends that have been happening in the technology arena for the past 20 years and more.
The first is the increase in memory capacity, the rise of 64-bit processing, and the dramatic fall in memory prices. The cost per gigabyte of RAM, as an example, has fallen from around $100,000 in 1990 to a few dollars today.
The second reason is that, although CPU clock speeds aren’t getting any faster, the number of cores on processors is increasing, and Hekaton’s new in-memory data structures are designed to fully exploit very large numbers of cores.
With the majority of production OLTP databases taking up less than one terabyte, In-Memory OLTP has now become an attractive option. So much so that a recent survey of IT and data managers by Database Trends and Applications found that one third of enterprises already have in-memory databases in some capacity – and three quarters expect to expand their use of the technology in some way in the next three years.
So what’s the promise?
In-Memory OLTP is based on four pillars, each of which contributes an important element to the whole.
- With tables in memory rather than on disk, the time to access those tables is considerably reduced, leading to higher performance.
- Because T-SQL is compiled to machine code, natively compiled stored procedures use fewer instructions, so business logic processing is considerably faster.
- Optimistic multi-version concurrency control removes the need for the latches and locks that are necessary in disk-based tables to ensure transactional integrity but can cause contention and blocking in highly concurrent workloads. Instead, when rows are modified, new versions are created, with timestamps used at validation to ensure full ACID support (a brief sketch of this in action follows the list).
- In-Memory OLTP is fully integrated with SQL Server and is managed with a similar set of tools.
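As a brief, hedged illustration of the third pillar, here is one way existing interpreted T-SQL can opt in to the optimistic model; SalesDB and the dbo.ShoppingCart table are the hypothetical names used earlier:

```sql
-- With this database option on, explicit interpreted T-SQL transactions that touch
-- memory-optimized tables automatically run under snapshot-based isolation.
ALTER DATABASE SalesDB
    SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
GO

-- Readers and writers of dbo.ShoppingCart do not block one another; each transaction works
-- against row versions, and if two transactions try to update the same row, one fails
-- immediately instead of waiting on a lock.
BEGIN TRANSACTION;
    UPDATE dbo.ShoppingCart
    SET Quantity = Quantity + 1
    WHERE CartId = 42;
COMMIT TRANSACTION;
```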
The end result of these four pillars is that memory-optimized tables can be a lot faster than disk-based tables, and yet the two can happily co-exist in the same database, with client applications able to access data in both without needing to be aware of the difference.
To date, Microsoft reports that organizations testing In-Memory OLTP have seen transaction speeds improve by up to 30 times, with the best performance gains achieved when the business logic resides in the database rather than in the application.
So where are the pitfalls?
These potential increases in performance do, however, come at a cost. Quite simply, the challenge of making in-memory tables work has meant that some features of disk-based tables just aren’t available.
Features like replication, mirroring, and database snapshots are not supported, for example, and the data in durable memory-optimized tables is limited to 250 GB per database.
Similarly, In-Memory OLTP does not recognize statements such as ALTER TABLE, CREATE INDEX, DROP INDEX and ALTER INDEX. This means that we cannot simply alter existing disk-based tables to become memory-optimized tables, and once we’ve created a memory-optimized table, we can’t subsequently alter its structure or add more indexes.
FOREIGN KEY and CHECK constraints have also been shelved in order to improve performance, as have many familiar T-SQL operators and constructs inside natively compiled stored procedures, such as OR, LIKE, BETWEEN, and OUTER JOIN. This is probably the biggest challenge, because it restricts what developers can do in those stored procedures and seriously limits the business logic they can contain.
Finally, disk-based tables cannot be accessed from natively compiled stored procedures, which reduces the performance gains when disk-based and memory-optimized tables need to be used together. In that case a capability called interop has to be employed, where interpreted Transact-SQL is used to access the memory-optimized table. Here, the improvement in performance is typically a factor of three.
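As a sketch, interop access is just ordinary T-SQL, so the hypothetical memory-optimized dbo.ShoppingCart table from earlier can be queried alongside a regular disk-based table (dbo.Customers here is assumed to be disk-based):

```sql
-- Interpreted (interop) T-SQL can read memory-optimized and disk-based tables together,
-- something a natively compiled procedure cannot do.
SELECT c.CustomerName, sc.CartId, sc.Quantity
FROM dbo.ShoppingCart AS sc          -- memory-optimized
JOIN dbo.Customers    AS c           -- disk-based
    ON c.CustomerId = sc.CustomerId
WHERE sc.Quantity > 0;
```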
Now let’s talk deployment
One of the biggest issues with memory-optimized tables is deploying them in the first place, and then deploying subsequent changes. Both are longer, more complicated tasks than for disk-based tables, even with a deployment tool like SQL Compare.
Memory-optimized objects, for example, cannot be deployed inside a transaction. Instead, a two-step process is called for: the memory-optimized objects are deployed first, without transactions, before the remaining objects are deployed with transactions as usual.
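In script form, the two steps look roughly like this (a sketch only; the SalesDB names, objects, and file path are hypothetical):

```sql
-- Step 1: deploy the In-Memory OLTP objects outside of any transaction.
-- The database first needs a memory-optimized filegroup and container.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;
GO

CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    UserId    INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Step 2: deploy the remaining, disk-based objects inside a transaction as usual.
BEGIN TRANSACTION;
    CREATE TABLE dbo.Customers
    (
        CustomerId   INT NOT NULL PRIMARY KEY,
        CustomerName NVARCHAR(100) NOT NULL
    );
    -- ... other disk-based tables, views, and interpreted stored procedures ...
COMMIT TRANSACTION;
```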
Similarly, memory-optimized tables and natively compiled stored procedures cannot be deployed if DDL triggers and event notifications are being used to monitor certain schema operations, something which is fairly common in audited environments. Instead, the DDL triggers and event notifications need to be removed first and then recreated once the In-Memory OLTP objects have been deployed, with a subsequent impact on auditing procedures.
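For instance, a deployment against a database with an auditing DDL trigger might need to bracket the In-Memory OLTP objects like this (the trigger name and logging table are hypothetical):

```sql
-- Drop the auditing DDL trigger before deploying the memory-optimized objects...
DROP TRIGGER trg_AuditSchemaChanges ON DATABASE;
GO

-- ... deploy memory-optimized tables and natively compiled stored procedures here ...

-- ... then recreate the trigger afterwards, restoring the audit trail.
CREATE TRIGGER trg_AuditSchemaChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.SchemaChangeLog (EventData, EventDate)
    VALUES (EVENTDATA(), GETUTCDATE());
GO
```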
If a change to the schema or an index is required later, the same deployment problems arise. That's because no changes can be made to a memory-optimized table once it's created. Without support for statements like ALTER TABLE and ALTER INDEX, the table has to be rebuilt – again, using a two-step process.
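One way that rebuild can look is sketched below, reusing the hypothetical dbo.ShoppingCart table and dbo.AddToCart procedure from earlier and adding a column:

```sql
-- Natively compiled procedures are schema-bound, so any that reference the table
-- must be dropped first (and recreated at the end).
DROP PROCEDURE dbo.AddToCart;
GO

-- 1. Preserve the existing data in a disk-based staging table.
SELECT CartId, CustomerId, Quantity
INTO #ShoppingCart_staging
FROM dbo.ShoppingCart;

-- 2. Drop the memory-optimized table and recreate it with the new definition,
--    declaring all of its indexes up front, since they cannot be added later.
DROP TABLE dbo.ShoppingCart;
GO
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT NOT NULL,
    Quantity    INT NOT NULL,
    DiscountPct INT NOT NULL          -- the new column
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- 3. Copy the data back in and clean up.
INSERT INTO dbo.ShoppingCart (CartId, CustomerId, Quantity, DiscountPct)
SELECT CartId, CustomerId, Quantity, 0
FROM #ShoppingCart_staging;

DROP TABLE #ShoppingCart_staging;
-- 4. Recreate dbo.AddToCart and any other dependent natively compiled procedures.
```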
Fortunately, the latest version of SQL Compare, which was updated specifically to support SQL Server 2014, was developed to accommodate this new deployment issue. That two-step process is still required, but options like ‘Do not use transaction in deployment scripts’ mean that the correct deployment script can be created for each step.
The final verdict
In-Memory OLTP in SQL Server 2014 gives users the possibility of greatly enhanced performance. SQL Server 2014 even includes a new Analysis, Migrate and Report (AMR) tool to evaluate the current workload on your database and tell you which tables and stored procedures could benefit from being migrated.
Typical businesses that could benefit include those with a high data-insert rate, such as power companies recording a constant stream of readings from smart meters; websites with a large number of users accessing the same data, such as online magazines; or online gaming platforms where low latency is business-critical.
For those who want, perhaps need, those performance advantages, the journey to In-Memory OLTP is not a straightforward migration. It is a worthwhile one, however, provided the deployment issues are understood first.