{"id":69751,"date":"2017-01-26T15:16:02","date_gmt":"2017-01-26T15:16:02","guid":{"rendered":"https:\/\/www.simple-talk.com\/?p=69751"},"modified":"2021-08-24T13:39:26","modified_gmt":"2021-08-24T13:39:26","slug":"migrating-disk-based-table-memory-optimized-table-sql-server","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/database-administration-sql-server\/migrating-disk-based-table-memory-optimized-table-sql-server\/","title":{"rendered":"Migrating to SQL Server Memory-Optimized Tables: Indexes, Computed Columns, and Triggers"},"content":{"rendered":"<p>In-Memory OLTP, also known as Hekaton, can significantly improve the performance of OLTP (Online transaction processing) database applications. It improves throughput and reduces latency for transaction processing, and can help improve performance where data is transient, such as in temporary tables and during ETL (Extract Transfer and Load). In-Memory OLTP is a Memory-optimized database engine that is integrated into the SQL Server engine, and optimized for transaction processing.<\/p>\n<p>In order to use In-Memory OLTP, you define a heavily-accessed table as memory-optimized. Memory-optimized tables are fully transactional, durable, and are accessed using Transact-SQL in the same way as disk-based tables. A single query can reference both Hekaton memory-optimized tables and disk-based tables. A transaction can update data in both Hekaton tables and disk-based tables. Stored procedures that only reference memory-optimized tables can be natively compiled into machine code for further performance improvements. The In-Memory OLTP engine is designed for an extremely high session concurrency for the OLTP type of transactions driven from a highly scaled-out middle-tier. To achieve this, it uses latch-free data structures and optimistic, multi-version concurrency control. 
The result is predictable, sub-millisecond latency and high throughput with linear scaling for database transactions. The actual performance gain depends on many factors, but it is common to find 5 to 20 times performance improvements.<\/p>\n<p>A detailed overview of memory-optimized tables is available in this Microsoft white paper: <a href=\"http:\/\/download.microsoft.com\/download\/8\/3\/6\/8360731A-A27C-4684-BC88-FC7B5849A133\/SQL_Server_2016_In_Memory_OLTP_White_Paper.pdf\">SQL Server In-Memory OLTP Internals for SQL Server 2016 &#8211; by Kalen Delaney<\/a><\/p>\n<p>In this article, I will demonstrate how to migrate a disk-based table to a memory-optimized table.<\/p>\n<div class=\"note\">\n<p><code>Note<\/code>: In-Memory OLTP tables were introduced in SQL Server 2014. Unfortunately, they had many limitations which made them quite impractical to use. In SQL Server 2016 memory-optimized tables were dramatically improved and the restrictions were significantly reduced; only a few limitations remain. All samples and techniques provided in this article will work for SQL Server 2016 only.<\/p>\n<\/div>\n<p>Before you begin to work with memory-optimized tables, the database must be created with a <code>MEMORY_OPTIMIZED_DATA<\/code> filegroup. This filegroup is used for storing the data and delta file pairs needed by SQL Server to recover the memory-optimized tables. 
The syntax for creating it is almost the same as for creating a regular filestream filegroup, but it must also specify the <code>CONTAINS MEMORY_OPTIMIZED_DATA<\/code> option.<\/p>\n<p>To make the code below compile, you will need to either replace <em>\u2018C:\\SQL2016\u2019<\/em> with your own disk path, or create the folder.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE DATABASE [TestDB]\r\n ON  PRIMARY \r\n( NAME = N'TestDB_data', FILENAME = N'C:\\SQL2016\\TestDB_data.mdf'),\r\n FILEGROUP [TestDBSampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA  DEFAULT\r\n( NAME = N'TestDB_mod_dir', FILENAME = N'C:\\SQL2016\\TestDB_mod_dir' , MAXSIZE = UNLIMITED)\r\n LOG ON \r\n( NAME = N'TestDBSampleDB_log', FILENAME = N'C:\\SQL2016\\TestDB_log.ldf' )\r\n<\/pre>\n<p>If you wish to enable the <code>MEMORY_OPTIMIZED_DATA<\/code> option for an existing database, you need to create a filegroup with the <code>MEMORY_OPTIMIZED_DATA<\/code> option, and then files can be added to the filegroup.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">ALTER DATABASE [TestDB] \r\nADD FILEGROUP [TestDBSampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA; \r\n\r\nALTER DATABASE [TestDB] \r\nADD FILE (NAME='TestDB_mod_dir', FILENAME='C:\\SQL2016\\TestDB_mod_dir') \r\n\tTO FILEGROUP [TestDBSampleDB_mod_fg];\r\n<\/pre>\n<p>Execute the following SQL code to verify that the database <code>MEMORY_OPTIMIZED_DATA<\/code> option is enabled.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">USE TestDB\r\nSELECT g.name, g.type_desc, f.physical_name \r\n FROM sys.filegroups g JOIN sys.database_files f ON g.data_space_id = f.data_space_id \r\n WHERE g.type = 'FX' AND f.type = 2\r\n<\/pre>\n<p>From PowerShell, it is easy to determine whether a database is memory-optimised:<\/p>\n<pre class=\"theme:powershell-ise lang:ps decode:true\">try { Import-Module SQLServer -Global -ErrorAction Stop }\r\ncatch { Import-Module SQLPS } 
#loading the assemblies\r\n\r\n$server = New-Object 'Microsoft.SqlServer.Management.Smo.Server' 'MyServer'\r\nif ($server.databases['TestDB'].HasMemoryOptimizedObjects)\r\n{ 'has memory-optimised tables' }\r\nelse\r\n{ 'no memory-optimised tables' }\r\n<\/pre>\n<p>As an alternative way to check the <code>MEMORY_OPTIMIZED_DATA<\/code> option: open the database properties, select Filegroups, and the filegroup name is displayed under the <code>MEMORY_OPTIMIZED_DATA<\/code> section.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"706\" height=\"471\" class=\"wp-image-69752\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2017\/01\/c-users-agrinb1-appdata-local-temp-snaghtml1a68a.png\" alt=\"Database properties dialog showing the MEMORY_OPTIMIZED_DATA filegroup\" \/><\/p>\n<p>Here is the list of differences between disk-based tables and memory-optimized tables:<\/p>\n<ul>\n<li><code>MEMORY_OPTIMIZED<\/code> property \u2013 the pages do not need to be read into cache from the disk when the memory-optimized tables are accessed. All of the data is stored in memory, at all times.<\/li>\n<li><code>DURABILITY<\/code> property &#8211; memory-optimized tables can be either durable (SCHEMA_AND_DATA) or non-durable (SCHEMA_ONLY). The default is for these tables to be durable (SCHEMA_AND_DATA), and these durable tables also meet all the other transactional requirements; they are atomic, isolated, and consistent. A set of checkpoint files (data and delta file pairs), which are only used for recovery purposes, is created using operating system files residing in memory-optimized filegroups; these files keep track of the changes to the data in the durable tables, and are append-only. Non-durable tables are not logged and use the SCHEMA_ONLY option. As the option indicates, the table schema will be durable, even though the data is not. These tables do not require any IO operations during transaction processing, and nothing is written to the checkpoint files for these tables. 
The data is only available in memory while SQL Server is running.<\/li>\n<li><code>Indexes<\/code> \u2013 no <code>CLUSTERED<\/code> indexes are implemented on memory-optimized tables. Indexes are not stored as traditional B-trees. Memory-optimized tables support hash indexes, stored as hash tables with linked lists connecting all the rows that hash to the same value, and \u201crange\u201d indexes, which for memory-optimized tables are stored using special Bw-trees. A range index on a Bw-tree can be used to quickly find qualifying rows in a range predicate just like a traditional B-tree, but it is designed for optimistic concurrency control with no locking or latching. (See <a href=\"http:\/\/download.microsoft.com\/download\/8\/3\/6\/8360731A-A27C-4684-BC88-FC7B5849A133\/SQL_Server_2016_In_Memory_OLTP_White_Paper.pdf\">SQL Server In-Memory OLTP Internals for SQL Server 2016 &#8211; by Kalen Delaney<\/a>)<\/li>\n<\/ul>\n<h2>Migrating a Disk-Based Table to a Memory-Optimized Table<\/h2>\n<p>A table is a collection of columns and rows. It is necessary to know the data type limitations of memory-optimized table columns in order to migrate a disk-based table to a memory-optimized table.<\/p>\n<p>The following data types are not supported: <code>datetimeoffset<\/code>, <code>geography<\/code>, <code>geometry<\/code>, <code>hierarchyid<\/code>, <code>rowversion<\/code>, <code>xml<\/code>, <code>sql_variant<\/code>, all User-Defined Types and all legacy LOB data types (including <code>text<\/code>, <code>ntext<\/code>, and <code>image<\/code>)<\/p>\n<p>Supported datatypes include:<\/p>\n<ul>\n<li><code>bit<\/code>, <code>tinyint<\/code>, <code>smallint<\/code>, <code>int<\/code>, <code>bigint<\/code>, 
<code>numeric<\/code> and <code>decimal<\/code><\/li>\n<li><code>money<\/code> and <code>smallmoney<\/code><\/li>\n<li><code>float<\/code> and <code>real<\/code><\/li>\n<li>date\/time types: <code>datetime<\/code>, <code>smalldatetime<\/code>, <code>datetime2<\/code>, <code>date<\/code> and <code>time<\/code><\/li>\n<li><code>char<\/code>(n), <code>varchar<\/code>(n), <code>nchar<\/code>(n), <code>nvarchar<\/code>(n), <code>sysname<\/code>, <code>varchar<\/code>(MAX) and <code>nvarchar<\/code>(MAX)<\/li>\n<li><code>binary<\/code>(n), <code>varbinary<\/code>(n) and <code>varbinary<\/code>(MAX)<\/li>\n<li><code>uniqueidentifier<\/code><\/li>\n<\/ul>\n<p>The syntax for creating memory-optimized tables is almost identical to the syntax for creating disk-based tables, with a few restrictions, as well as a few required extensions. A few of the differences are:<\/p>\n<ul>\n<li>The <code>MEMORY_OPTIMIZED<\/code> property is set to <code>ON<\/code> (<code>MEMORY_OPTIMIZED<\/code> = <code>ON<\/code>)<\/li>\n<li>The <code>DURABILITY<\/code> property is set to <code>SCHEMA_AND_DATA<\/code> or <code>SCHEMA_ONLY<\/code> (<code>SCHEMA_AND_DATA<\/code> is the default)<\/li>\n<li>The memory-optimized table must have a <code>PRIMARY KEY<\/code> index. If a <code>HASH<\/code> index is selected for the primary key, then <code>BUCKET_COUNT<\/code> must be specified.<\/li>\n<li>A maximum of eight indexes, including the primary key, are allowed on a memory-optimized table.<\/li>\n<li>The <code>IDENTITY<\/code> property must be set with seed = 1 and increment = 1 only.<\/li>\n<li>Computed columns are not allowed in memory-optimized tables.<\/li>\n<\/ul>\n<p>Row lengths can exceed 8060 bytes using only regular variable-length columns, with the largest variable-length columns stored off-row, similar to row-overflow data for disk-based tables.<\/p>\n<p>It is a bad idea to create memory-optimized tables indiscriminately. 
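<\/p>\n<p>As a minimal sketch of the required extensions listed above (the table name and bucket count here are hypothetical), a non-durable table for transient staging data could be declared as follows. Because of <code>DURABILITY = SCHEMA_ONLY<\/code>, only the schema survives a restart; the data does not.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">-- Hypothetical example: a non-durable memory-optimized staging table\r\nCREATE TABLE dbo.TEST_Staging(\r\n\tID int IDENTITY(1, 1),\r\n\tPayload varchar(1000) NULL,\r\n CONSTRAINT PK_TEST_Staging_ID PRIMARY KEY NONCLUSTERED HASH\r\n\t(\r\n\t\tID\r\n\t) WITH (BUCKET_COUNT = 1048576)\r\n) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_ONLY )\r\n<\/pre>\n<p>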
Memory is also needed for the OS and other SQL Server processes, so it is not a good idea to migrate as many tables as possible into memory-optimized tables. Since the memory-optimized tables have been designed with optimistic concurrency control and no locking or latching, the best tables to choose for conversion are the tables that have a \u2018locking and latching profile\u2019 (the tables detected as session \u201cblockers\u201d), which would include the most heavily written tables (<code>INSERT<\/code>, <code>UPDATE<\/code>, and <code>DELETE<\/code>) and the most heavily read tables. Conversely, the tables that should not be migrated are the static metadata tables, the tables that violate the memory-optimized table limitations, and the tables with few rows.<\/p>\n<p>In SQL Server 2016 it is possible to <a href=\"https:\/\/msdn.microsoft.com\/en-us\/library\/dn957571.aspx\">generate a migration checklist using SQL Server PowerShell<\/a>. In Object Explorer, right-click on a database and then click Start PowerShell; verify that the following prompt appears:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">  PS SQLSERVER:\\SQL\\{Instance Name}\\DEFAULT\\Databases\\{DB Name}&gt;  \r\n<\/pre>\n<p>Enter the following command, replacing C:\Temp with your target folder path. (If you prefer a more generic approach, you can use $Env:Temp for the report output; simply run $Env:Temp in the PowerShell command window and your local path will be returned.) 
The checklist PowerShell command example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">  Save-SqlMigrationReport -FolderPath \"C:\\Temp\"\r\n<\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"792\" height=\"100\" class=\"wp-image-69753\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2017\/01\/word-image-99.png\" \/><\/p>\n<div class=\"note\">\n<p><code>Note:<\/code> If you need to run the migration report on a single table, then expand the Database node, expand the Tables node, right-click on the table, and then select Start PowerShell from the pop-up menu.<\/p>\n<\/div>\n<p>The folder path will be created if it does not exist. The migration checklist report will be generated for all tables and stored procedures in the database, and will appear in the location specified by <code>FolderPath<\/code>. The full path to the reports is the <code>FolderPath<\/code> given in the PowerShell command plus the name of the database the checklist was run for; in this example it\u2019s C:\Temp\Northwind.<\/p>\n<div class=\"note\">\n<p><code>Note<\/code>: The In-Memory OLTP Migration Checklist can also be generated from SQL Server 2016 SSMS; see https:\/\/msdn.microsoft.com\/en-gb\/library\/dn284308.aspx<\/p>\n<\/div>\n<p>The checklist reports can indicate that one or more of the data type restrictions for memory-optimized tables has been violated. However, that does not mean that the table cannot be migrated to a memory-optimized table. The report states whether each column met the criteria for success and, if not, gives a hint on how to correct the problem if the table is important for the migration. For example, a database has a table <code>TEST_Disk<\/code>. 
For demo purposes, the table has been created with a number of migration violations that we will see in the report later.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TABLE dbo.TEST_Disk(\r\n\tID  int IDENTITY(10000, 1),\r\n\tProductID int NOT NULL,\r\n\tOrderQty int NOT NULL,\r\n\tSumOrder as ProductID + OrderQty,\r\n\tXMLData XML NULL,\r\n\tDescription varchar(1000) SPARSE,\r\n\tStartDate datetime CONSTRAINT DF_TEST_DiskStart DEFAULT getdate() NOT NULL,\r\n\tModifiedDate datetime CONSTRAINT DF_TEST_DiskEnd DEFAULT getdate() NOT NULL,\r\n CONSTRAINT PK_TEST_Disk_ID PRIMARY KEY CLUSTERED\r\n\t(\r\n\t\tID \r\n\t) \r\n)\r\n<\/pre>\n<p>After the migration checklist completes, we get the following report:<\/p>\n<ul>\n<li>The XMLData column has the XML data type<\/li>\n<li>SumOrder is a computed column<\/li>\n<li>The Description column is SPARSE<\/li>\n<li>ID has an IDENTITY seed value of 10000<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"756\" height=\"477\" class=\"wp-image-69754\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2017\/01\/word-image-100.png\" \/><\/p>\n<p>According to the report, all the listed violations have to be corrected or the table cannot be migrated. Let\u2019s fix all of those violations:<\/p>\n<ul>\n<li>The <code>XMLData<\/code> column will be converted to the <code>NVARCHAR<\/code>(<code>MAX<\/code>) data type, which matches the Unicode nature of XML. However, when neither the application nor the database uses Unicode, the <code>VARCHAR<\/code>(<code>MAX<\/code>) data type can be considered as well.<\/li>\n<li><code>SumOrder<\/code> is a computed column whose value is calculated by adding the ProductID and OrderQty columns (the formula ProductID + OrderQty was created for demo purposes only). Both the ProductID and OrderQty columns have the <em>int<\/em> data type. 
Therefore, the SumOrder column inherits the <em>int<\/em> data type from the ProductID and OrderQty columns (how to correct the computed column issue is explained in the section \u201cFixing the Computed column issue\u201d).<\/li>\n<li>For the <code>Description<\/code> column, simply remove the <code>SPARSE<\/code> option.<\/li>\n<li>The <code>ID<\/code> column\u2019s <code>IDENTITY<\/code> seed value will be 1; the starting value of 10,000 will then be forced with an explicit <code>IDENTITY<\/code> <code>INSERT<\/code>.<\/li>\n<\/ul>\n<p>After all corrections have been implemented, the DDL script for the TEST_Memory memory-optimized table will be:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TABLE dbo.TEST_Memory(\r\n\tID  int IDENTITY(1, 1),\r\n\tProductID int NOT NULL,\r\n\tOrderQty int NOT NULL,\r\n\tSumOrder int NULL,\r\n\tXMLData nvarchar(MAX) NULL,\r\n\tDescription varchar(1000) NULL,\r\n\tStartDate datetime CONSTRAINT DF_TEST_MemoryStart DEFAULT getdate() NOT NULL,\r\n\tModifiedDate datetime CONSTRAINT DF_TEST_MemoryEnd DEFAULT getdate() NOT NULL,\r\n CONSTRAINT PK_TEST_Memory_ID PRIMARY KEY NONCLUSTERED HASH\r\n\t(\r\n\t\tID \r\n\t)WITH (BUCKET_COUNT = 1572864) \r\n) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )\r\n<\/pre>\n<p>Now, we need to set the IDENTITY seed to 10,000, but the memory-optimized table does not support the DBCC command to reseed the IDENTITY. SET IDENTITY_INSERT TEST_Memory ON will do it for us.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">-- 1. Insert dummy row\r\nSET IDENTITY_INSERT TEST_Memory ON\r\n\tINSERT TEST_Memory (ID,ProductID, OrderQty, SumOrder)\r\n\tSELECT 10000, 1,1,1\r\nSET IDENTITY_INSERT TEST_Memory OFF\r\n\r\n-- 2. Remove the record\r\nDELETE TEST_Memory WHERE ID = 10000\r\n\r\n-- 3. 
Verify Current Identity\r\nSELECT TABLE_NAME, IDENT_SEED(TABLE_NAME) AS Seed, IDENT_CURRENT(TABLE_NAME) AS Current_Identity\r\nFROM INFORMATION_SCHEMA.TABLES\r\nWHERE OBJECTPROPERTY(OBJECT_ID(TABLE_NAME), 'TableHasIdentity') = 1\r\nAND TABLE_NAME = 'TEST_Memory'\r\n<\/pre>\n<p>When all three steps have been applied, you will have the IDENTITY set to the required value of 10,000.<\/p>\n<table>\n<tbody>\n<tr>\n<td>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"274\" height=\"71\" class=\"wp-image-69755\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2017\/01\/word-image-101.png\" \/> before<\/p>\n<\/td>\n<td>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"268\" height=\"74\" class=\"wp-image-69756\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2017\/01\/word-image-102.png\" \/> after<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>We\u2019ll now load some test data. To prepare a good number of rows, we will create a TEST_DataLoad table by executing the SQL script below, which loads 1 million rows into the table. 
<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">;With ZeroToNine (Digit) As \r\n(Select 0 As Digit\r\n        Union All\r\n  Select Digit + 1 From ZeroToNine Where Digit &lt; 9),\r\n    OneMillionRows (Number) As (\r\n        Select \r\n          Number = SixthDigit.Digit  * 100000 \r\n                 + FifthDigit.Digit  *  10000 \r\n                 + FourthDigit.Digit *   1000 \r\n                 + ThirdDigit.Digit  *    100 \r\n                 + SecondDigit.Digit *     10 \r\n                 + FirstDigit.Digit  *      1 \r\n        From\r\n            ZeroToNine As FirstDigit  Cross Join\r\n            ZeroToNine As SecondDigit Cross Join\r\n            ZeroToNine As ThirdDigit  Cross Join\r\n            ZeroToNine As FourthDigit Cross Join\r\n            ZeroToNine As FifthDigit  Cross Join\r\n            ZeroToNine As SixthDigit\r\n)\r\nSelect   Number+1 ID,ABS(CHECKSUM(NEWID())) % 50 ProductID, ABS(CHECKSUM(NEWID())) % 55 OrderQty\r\n, (SELECT Number+1 as ProductID,ABS(CHECKSUM(NEWID())) % 50 as OrderQty FROM master.dbo.spt_values as data \r\n\t\tWHERE type = 'p' and data.number = v.number % 2047 FOR XML AUTO, ELEMENTS, TYPE  ) XMLData\r\nINTO TEST_DataLoad\r\nFrom OneMillionRows v\r\n<\/pre>\n<p>When <code>TEST_DataLoad<\/code> is ready, let\u2019s run a test load for the disk-based and the memory-optimized tables. The test was done on a server with 32 CPUs, 512 GB of memory, and a Fusion (solid state) drive. 
The memory-optimized table performed more than twice as fast as the disk-based table.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">---- Load disk-based table\r\nSET STATISTICS TIME ON; \r\nINSERT [dbo].[TEST_Disk] ( ProductID, OrderQty )\r\nselect ProductID, OrderQty from TEST_DataLoad\r\nSET STATISTICS TIME OFF; \r\n SQL Server Execution Times:\r\n   CPU time = 5968 ms,  elapsed time = 6322 ms.\r\n\r\n---- Load the memory-optimized table\r\nSET STATISTICS TIME ON; \r\nINSERT [dbo].[TEST_Memory](ProductID, OrderQty, SumOrder)\r\nselect ProductID, OrderQty,ProductID + OrderQty from TEST_DataLoad\r\nSET STATISTICS TIME OFF; \r\n SQL Server Execution Times:\r\n   CPU time = 2500 ms,  elapsed time = 2561 ms.\r\n<\/pre>\n<h2>Indexing memory-optimized tables<\/h2>\n<p>Having created the table, you will have to plan a new indexing strategy and choose effective indexes that are appropriate for the way the table will be used. As mentioned earlier, memory-optimized table indexes cannot be created as <code>CLUSTERED<\/code> and are not stored as traditional B-trees: hash indexes are stored as hash tables, and range indexes are stored using special Bw-trees. A maximum of 8 indexes can be created on a memory-optimized table. There are two types of memory-optimized table indexes:<\/p>\n<ul>\n<li>HASH<\/li>\n<li>RANGE<\/li>\n<\/ul>\n<p>A HASH index requires the user to specify the <code>BUCKET_COUNT<\/code> property value. <code>HASH<\/code> indexes are most efficient with the equality (=) operator. As a rule of thumb, the <code>BUCKET_COUNT<\/code> is calculated as the number of unique values multiplied by 2 for tables with fewer than 1 million rows, or by 1.5 for tables that have more than 1 million rows. 
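<\/p>\n<p>As a back-of-the-envelope illustration of that rule of thumb (the numbers here are hypothetical):<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">-- ~500,000 unique values in a table with fewer than 1 million rows:\r\nSELECT 500000 * 2 AS SuggestedBucketCount;   -- 1000000\r\n-- SQL Server rounds the BUCKET_COUNT you specify up to the next\r\n-- power of two, so 1,048,576 buckets would actually be allocated.\r\n<\/pre>\n<p>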
The query below returns the estimated <code>BUCKET_COUNT<\/code> value for the table <code>TEST_Memory<\/code> and the column <code>ProductID<\/code>.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">;WITH CTE AS\r\n(\r\nSELECT COUNT(DISTINCT ProductID) CntID FROM TEST_Memory\r\n)\r\nSELECT POWER(2,CEILING(LOG(CntID)\/LOG(2))) AS [BUCKET COUNT]\r\nFROM CTE\r\n<\/pre>\n<p>An <code>ALTER TABLE<\/code> DDL statement needs to be executed to create an additional index on the memory-optimized table.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">ALTER TABLE TEST_Memory ADD INDEX [IX_TEST_Memory_ProductID] NONCLUSTERED HASH  \r\n(\r\n\t[ProductID] \r\n) WITH ( BUCKET_COUNT = 1048576)\r\n<\/pre>\n<p>Beware that each <code>ALTER TABLE<\/code> command rebuilds the entire memory-optimized table and makes the table unavailable during the rebuild process.<\/p>\n<p>To review the bucket count statistics for <code>HASH<\/code> indexes, you can run the following SQL code.<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT object_name(hs.object_id) AS [Object Name],\r\n   i.name as [Index Name],\r\n   hs.total_bucket_count,\r\n   hs.empty_bucket_count,\r\n   (hs.total_bucket_count-hs.empty_bucket_count) * 1.3 as NeededBuckets,\r\n   FLOOR(hs.empty_bucket_count*1.0\/hs.total_bucket_count * 100) AS [Empty Bucket %],\r\n   hs.avg_chain_length,\r\n   hs.max_chain_length\r\nFROM sys.dm_db_xtp_hash_index_stats AS hs\r\n   JOIN sys.indexes AS i ON hs.object_id=i.object_id AND hs.index_id=i.index_id\r\n<\/pre>\n<p><code>RANGE<\/code> indexes are optimized for the <code>BETWEEN<\/code> predicate. If you are unsure of the number of buckets you\u2019ll need for a particular column, or if you know you\u2019ll be searching for your data based on a range of values, you should consider creating a range index instead of a hash index. Range indexes are implemented using a new data structure called a Bw-tree, originally envisioned and described by Microsoft Research in 2011. 
A Bw-tree is a lock- and latch-free variation of a B-tree.<\/p>\n<p>When <code>HASH<\/code> is not specified, the index is created as <code>RANGE<\/code>. For example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">ALTER TABLE TEST_Memory ADD INDEX [IX_TEST_Memory_OrderQty] NONCLUSTERED  \r\n(\r\n\tOrderQty \r\n)\r\n<\/pre>\n<p>Both HASH and RANGE indexes can be created either as standalone DDL or in-line in the table DDL, for example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TABLE dbo.TEST_Memory(\r\n\tID  int IDENTITY(1, 1),\r\n\tProductID int NOT NULL,\r\n\tOrderQty int NOT NULL,\r\n\tSumOrder int NOT NULL,\r\n\tXMLData nvarchar(MAX) NULL,\r\n\tDescription varchar(1000) NULL,\r\n\tStartDate datetime CONSTRAINT DF_TEST_MemoryStart DEFAULT getdate() NOT NULL,\r\n\tModifiedDate datetime CONSTRAINT DF_TEST_MemoryEnd DEFAULT getdate() NOT NULL,\r\nINDEX [IX_TEST_Memory_ProductID] NONCLUSTERED HASH ([ProductID]) WITH (BUCKET_COUNT = 1048576),\r\nINDEX [IX_TEST_Memory_OrderQty] NONCLUSTERED (OrderQty),\r\n CONSTRAINT PK_TEST_Memory_ID PRIMARY KEY NONCLUSTERED HASH\r\n\t(\r\n\t\tID \r\n\t)WITH (BUCKET_COUNT = 1572864) \r\n) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )\r\n<\/pre>\n<h2>Fixing the Computed column issue<\/h2>\n<p>One of the reported issues was that an In-Memory OLTP table does not allow computed columns. Depending on how the database or the applications use the table, the solutions can vary from very simple to complex. For example, you can omit the computed column when the In-Memory OLTP table is created, or keep the column as nullable with the appropriate data type (a type of dummy column, just to preserve the legacy table structure, as I did for the TEST_Memory table). A more complex solution is to create a User-Defined Table Type and implement the calculation in a Natively Compiled Stored Procedure.<\/p>\n<p>An easy option to avoid the computed column issue for an In-Memory OLTP table is to create a view with the computed formula. 
For example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE VIEW vw_TEST_Memory\r\nAS\r\nSELECT ID\r\n\t,ProductID\r\n\t,OrderQty\r\n\t,SumOrder = ProductID + OrderQty\r\n\t,XMLData\r\n\t,StartDate\r\n\t,ModifiedDate\r\nFROM dbo.TEST_Memory\r\n<\/pre>\n<p>The next step is to create an <code>INSTEAD OF<\/code> trigger for the <code>vw_TEST_Memory<\/code> view, and use this view to insert new rows. For example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TRIGGER tr_TEST_Memory ON dbo.vw_TEST_Memory\r\nINSTEAD OF INSERT \r\nAS\r\nINSERT dbo.TEST_Memory ( ProductID, OrderQty, SumOrder, XMLData, StartDate, ModifiedDate)\r\nSELECT ProductID, OrderQty, ProductID + OrderQty, XMLData, StartDate, ModifiedDate\r\nFROM inserted\r\n<\/pre>\n<p>With this option, the SumOrder column will preserve the formula. However, the insert process will be somewhat slower.<\/p>\n<p>Let\u2019s review a more complex scenario: application code implementing a User-Defined Table Type (UDTT) and a stored procedure. To achieve the maximum benefit of an In-Memory table, we need to implement it in a Natively Compiled Stored Procedure. For a disk-based table, the User-Defined Table Type has the following CREATE syntax:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TYPE [dbo].[tt_TEST_Disk] AS TABLE(\r\n\t[ProductID] [int] NOT NULL,\r\n\t[OrderQty] [int] NOT NULL,\r\n\t[XMLData] [xml] NULL,\r\n\t[Description] [varchar](1000) NULL,\r\n\t[StartDate] [datetime] NOT NULL,\r\n\t[ModifiedDate] [datetime] NOT NULL\r\n)\r\n<\/pre>\n<p>For an In-Memory table, the UDTT option <code><em>MEMORY_OPTIMIZED<\/em><\/code> has to be enabled, and the UDTT must have an index. Without those two options the UDTT can still be created, but SQL Server will raise an error when the UDTT is bound to a Natively Compiled Stored Procedure. 
Here is a sample error message:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">Msg 41323, Level 16, State 1, Procedure usp_NC_Insert_TEST_Memory, Line 2 [Batch Start Line 56]\r\nThe table type 'dbo.tt_TEST_Memory' is not a memory optimized table type and cannot be used in a natively compiled module.\r\n<\/pre>\n<p>The DDL code to create the UDTT:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TYPE tt_TEST_Memory as TABLE(\r\n\tProductID int NOT NULL,\r\n\tOrderQty int NOT NULL,\r\n\tXMLData nvarchar(max) NULL,\r\n\t[Description] varchar(1000)   NULL,\r\n\tStartDate datetime NOT NULL,\r\n\tModifiedDate datetime NOT NULL,\r\n\tINDEX IXNC NONCLUSTERED \r\n\t(\r\n\t\tStartDate ASC\r\n\t)\r\n)\r\nWITH ( MEMORY_OPTIMIZED = ON )\r\n<\/pre>\n<p>Once the UDTT is created, we can create a Natively Compiled Stored Procedure (see https:\/\/msdn.microsoft.com\/en-us\/library\/dn133184.aspx for details about Natively Compiled Stored Procedures). For example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE PROCEDURE [dbo].[usp_NC_Insert_TEST_Memory]\r\n\t(@VRT dbo.tt_TEST_Memory READONLY)\r\nWITH NATIVE_COMPILATION, SCHEMABINDING\r\nAS\r\nBEGIN ATOMIC\r\nWITH (TRANSACTION ISOLATION LEVEL=snapshot, LANGUAGE=N'us_english')\r\n\tINSERT [dbo].[TEST_Memory] (ProductID, OrderQty, SumOrder, XMLData, Description, StartDate, ModifiedDate)\r\n\tSELECT  ProductID, OrderQty, [ProductID]+[OrderQty], XMLData, Description, StartDate, ModifiedDate\r\n\tFROM @VRT\r\n\r\nEND\r\n<\/pre>\n<p>To migrate a regular stored procedure to a Natively Compiled Stored Procedure, the following options need to be included:<\/p>\n<ol>\n<li>After the parameter list (if one exists), add the <em>WITH NATIVE_COMPILATION, SCHEMABINDING<\/em> options<\/li>\n<li>Surround the T-SQL code body with <em>BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=snapshot, LANGUAGE=N&#8217;us_english&#8217;) &#8230; END<\/em>. 
The transaction isolation level can be selected from the supported levels (<a href=\"https:\/\/msdn.microsoft.com\/en-us\/library\/dn133175.aspx\">https:\/\/msdn.microsoft.com\/en-us\/library\/dn133175.aspx<\/a>):\n<ul>\n<li>SNAPSHOT<\/li>\n<li>REPEATABLE READ<\/li>\n<li>SERIALIZABLE<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<h2>Migrating existing DML triggers<\/h2>\n<p>It is worth noting that an In-Memory OLTP table does not support <code>INSTEAD OF<\/code> triggers. However, an <code>INSTEAD OF<\/code> trigger can be used on a view over an In-Memory OLTP table. The migration rules for triggers are the same as for stored procedures.<\/p>\n<ul>\n<li>The <em>WITH NATIVE_COMPILATION, SCHEMABINDING<\/em> options must be added after the <em>ON [tableName]<\/em> clause<\/li>\n<li>The T-SQL code body must be surrounded with <em>BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=snapshot, LANGUAGE=N&#8217;us_english&#8217;) &#8230; END<\/em><\/li>\n<\/ul>\n<p>For example, the code for a disk-based table trigger:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TRIGGER tr_TriggerName\r\n ON TableName\r\nAFTER INSERT, UPDATE, DELETE \r\nAS\r\nBEGIN \r\n\/*\r\nThe trigger code here\r\n*\/\r\nEND\r\n<\/pre>\n<p>To migrate the trigger to an In-Memory OLTP table, you can use the code below:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TRIGGER tr_TriggerName\r\n ON TableName\r\n WITH NATIVE_COMPILATION, SCHEMABINDING\r\nAFTER INSERT, UPDATE, DELETE  \r\nAS\r\nBEGIN ATOMIC\r\nWITH (TRANSACTION ISOLATION LEVEL=snapshot, LANGUAGE=N'us_english')\r\n\/*\r\n\tThe trigger code here\r\n*\/\r\nEND\r\n<\/pre>\n<p>DDL triggers are not supported with In-Memory OLTP tables. Books Online presents the following guidance to SQL Server DBAs and developers:<\/p>\n<ul>\n<li>You cannot create memory-optimized tables if the database or server has one or more DDL triggers defined on CREATE_TABLE or any event group that includes it. 
You cannot drop a memory-optimized table if the database or server has one or more DDL triggers defined on DROP_TABLE or any event group that includes it.<\/li>\n<li>You cannot create natively compiled stored procedures if there are one or more DDL triggers on CREATE_PROCEDURE, DROP_PROCEDURE, or any event group that includes those events.<\/li>\n<\/ul>\n<h2>Conclusion<\/h2>\n<p>The In-Memory OLTP table was introduced in SQL Server 2014. However, a massive number of limitations made In-Memory OLTP tables practically impossible to use. Thankfully, in SQL Server 2016 many of those limitations were eliminated, which makes it possible to start implementing In-Memory OLTP tables in your databases. As you can read in this article, the process required to migrate disk-based tables to In-Memory OLTP tables is not straightforward and requires analysis before you can make a final decision for migration. However, the benefits that an In-Memory OLTP table delivers are worth your effort. If you are looking to improve transactional speed and reduce blocking on your server, the In-Memory OLTP table is an excellent way to accomplish this task.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Migrate existing SQL Server disk-based tables to memory-optimized (In-Memory OLTP) tables &#8211; covering limitations to plan for, new indexing strategy (hash indexes vs nonclustered), handling computed columns that the memory-optimized engine doesn&#8217;t support directly, and migrating DML triggers (INSTEAD OF triggers need rework). 
Complete T-SQL migration walkthrough.&hellip;<\/p>\n","protected":false},"author":272577,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143527],"tags":[],"coauthors":[41240],"class_list":["post-69751","post","type-post","status-publish","format-standard","hentry","category-database-administration-sql-server"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/69751","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/272577"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=69751"}],"version-history":[{"count":9,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/69751\/revisions"}],"predecessor-version":[{"id":92196,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/69751\/revisions\/92196"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=69751"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=69751"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=69751"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=69751"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}