{"id":92038,"date":"2021-07-29T19:28:39","date_gmt":"2021-07-29T19:28:39","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=92038"},"modified":"2026-03-18T14:29:23","modified_gmt":"2026-03-18T14:29:23","slug":"sql-compilations-sec-is-not-what-you-think-it-is","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/sql-compilations-sec-is-not-what-you-think-it-is\/","title":{"rendered":"SQL Compilations\/sec: What This Metric Really Means"},"content":{"rendered":"<p>The standard guideline that SQL Compilations\/sec should be less than 10% of Batch Requests\/sec can be misleading for ad hoc workloads. When ad hoc queries are parameterized (through simple or forced parameterization), SQL Server still caches a \u201cshell query\u201d for each distinct statement text, and inserting that shell counts as a compilation and as a plan cache miss, even though no full optimization takes place.<\/p>\n<p>The result: Compilations\/sec looks inflated and the Plan Cache Hit Ratio looks deflated, while the truly CPU-intensive work\u2014query optimization\u2014may be much lower than those counters suggest. 
Understanding this behavior changes how you interpret these performance counters for ad hoc-heavy environments.<\/p>\n<h2>Introduction<\/h2>\n<p>I was recently working on a SQL Server health check assessment, and the scenario I was investigating made me re-evaluate what I knew about the performance counters SQL Compilations\/sec and Cache Hit Ratio, and their relation to Batch Requests\/sec.<\/p>\n<p>Consider the following scenario:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92039\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-65.png\" alt=\"Perfmon counters showing high sql compilations\/sec\" width=\"726\" height=\"431\" \/><\/p>\n<p>As you can see, the number of SQL Compilations\/Sec is very high. It\u2019s important to step back and remember the general description and guideline for this counter and understand what I mean by \u201chigh\u201d:<\/p>\n<p><a href=\"https:\/\/docs.microsoft.com\/en-us\/sql\/relational-databases\/performance-monitor\/sql-server-sql-statistics-object?view=sql-server-ver15\">Official Description<\/a>: \u201cNumber of SQL compilations per second. Indicates the number of times the compile code path is entered.\u201d<\/p>\n<p>A <a href=\"https:\/\/documentation.red-gate.com\/sm8\/analyzing-performance\/analysis-graph\/list-of-metrics\">typical<\/a> description: \u201cThe number of times that Transact-SQL compilations occur, per second (including recompiles). The lower the value the better. Each compilation uses CPU and other resources, so SQL Server will attempt to reuse cached execution plans wherever possible. 
Not having to recompile the stored procedure reduces the overhead on the server and speeds up overall performance.\u201d<\/p>\n<p>Guidelines:<\/p>\n<ul>\n<li>Guideline from <a href=\"https:\/\/documentation.red-gate.com\/sm8\/analyzing-performance\/analysis-graph\/list-of-metrics\">Red-Gate\u2019s SQL Monitor<\/a> tool: \u201cIn general, Compilations\/sec should be less than 10% of the Batch requests\/sec. High values often indicate excessive adhoc querying and should be as low as possible.\u201d<\/li>\n<li>Guideline from <a href=\"https:\/\/www.brentozar.com\/archive\/2019\/01\/sql-server-perfmon-counters-that-are-still-interesting-in-2019\/\">Brent Ozar<\/a>: \u201cIn a transactional system, I expect to see 90% (or higher) query plan reuse \u2013 so Compilations\/sec should be 10% (or less) of the Batch Request\/sec measure.\u201d<\/li>\n<\/ul>\n<p>Consider the \u201c10% of batch requests\/sec\u201d guideline. The counters show 21374 compilations\/sec and 39693 batch requests\/sec: (21374\/39693) * 100 = 54%. Against the 10% guideline, 54% looks like a very high number.<\/p>\n<p>Another thing that caught my attention is the low value (69%) of \u201cPlan Cache Hit ratio\u201d (the ratio between cache hits and lookups). The general description for this counter is: \u201chow frequently a compiled plan is found in the plan cache (therefore avoiding the need to compile or a recompile)\u201d. A good guideline is: \u201cthe closer this number is to 100%, the better.\u201d<\/p>\n<p>Note: There are other things I could highlight, like the plan cache entry count being close to the default <a href=\"https:\/\/support.microsoft.com\/en-us\/topic\/kb3026083-fix-sos-cachestore-spinlock-contention-on-ad-hoc-sql-server-plan-cache-causes-high-cpu-usage-in-sql-server-798ca4a5-3813-a3d2-f9c4-89eb1128fe68\">limit of 160,036<\/a> entries on a 64-bit system.<\/p>\n<p>As always, it is a good practice to ask <a href=\"https:\/\/en.wikipedia.org\/wiki\/Five_whys\">why<\/a>. 
Why those guidelines? Why is evaluating those counters important? Why is plan cache reuse important?<\/p>\n<h2>SQL Server plan cache \u2013 Compilation and optimization phases<\/h2>\n<p>Query plan compilation and optimization are known to be CPU-intensive operations; therefore, SQL Server will attempt to reuse cached execution plans wherever possible. Not having to compile (or recompile) and optimize a statement reduces the overhead on the server and speeds up overall performance. This is the main reason to review the counters I mentioned before.<\/p>\n<p>Notice that I mentioned \u201ccompilation\u201d and \u201coptimization.\u201d Those are two distinct steps of the execution plan creation process. Although the cost of a compilation is not low, the optimization phase usually uses more CPU.<\/p>\n<p>In environments with ad hoc-intensive workloads, the analysis of performance counter numbers and CPU overhead cost may get tricky (more on that in the \u201cPlan cache for ad hoc queries\u201d section of this article). Because of that, it is important to identify the query execution workload. In terms of plan cache reuse, the query execution workload may be one of the following:<\/p>\n<ul>\n<li>Ad hoc queries<\/li>\n<li>Auto parameterized queries<\/li>\n<li>Forced parameterized queries<\/li>\n<li>Ad hoc with forced parameterized queries<\/li>\n<li>Prepared queries using either <code>sp_executesql<\/code> or the prepare and execute method invoked through a data access API (ADO.NET, OLE DB, ODBC, JDBC, etc.)<\/li>\n<li>Stored procedures or other compiled objects (triggers, TVFs, etc.)<\/li>\n<\/ul>\n<p>One way to identify the workload is to look at the SQL compilations\/sec counter and its relation to the batch requests\/sec. 
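<\/p>\n<p>Those two counters are also exposed through the <code>sys.dm_os_performance_counters<\/code> DMV, so the ratio can be computed directly in T-SQL. The following is a minimal sketch (note the assumption: the \u201cper second\u201d counters store cumulative totals, so two samples taken a few seconds apart are needed to derive a rate):<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Sample the cumulative counters, wait 10 seconds, sample again,\n-- and compute per-second rates for the compilations-to-batch-requests ratio\nDECLARE @comp1 BIGINT, @batch1 BIGINT;\nSELECT @comp1  = MAX(CASE WHEN counter_name = 'SQL Compilations\/sec' THEN cntr_value END),\n       @batch1 = MAX(CASE WHEN counter_name = 'Batch Requests\/sec' THEN cntr_value END)\n  FROM sys.dm_os_performance_counters\n WHERE counter_name IN ('SQL Compilations\/sec', 'Batch Requests\/sec');\nWAITFOR DELAY '00:00:10';\nSELECT (MAX(CASE WHEN counter_name = 'SQL Compilations\/sec' THEN cntr_value END) - @comp1) \/ 10. AS compilations_per_sec,\n       (MAX(CASE WHEN counter_name = 'Batch Requests\/sec' THEN cntr_value END) - @batch1) \/ 10. AS batch_requests_per_sec\n  FROM sys.dm_os_performance_counters\n WHERE counter_name IN ('SQL Compilations\/sec', 'Batch Requests\/sec');<\/pre>\n<p>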
In general, the higher the number of compilations\/sec in relation to the batch requests\/sec, the more likely the environment is experiencing an ad hoc workload.<\/p>\n<p>Another quick option to check the workload is to look at the <code>sys.dm_exec_cached_plans<\/code> DMV. For instance, as shown in the following image (script based on <a href=\"https:\/\/sqlperformance.com\/2019\/05\/sql-plan\/perf-impact-adhoc-workload\">Erin\u2019s<\/a> script), the number of ad hoc plans with only one use is very high. Single-use plans account for 80% of all cached plans. This is a good indication that this is an ad hoc workload. Also, notice that SQL is using only 415MB to store 156883 ad hoc plans in cache; that is a good indication that the instance probably has the <a href=\"https:\/\/docs.microsoft.com\/en-us\/sql\/database-engine\/configure-windows\/optimize-for-ad-hoc-workloads-server-configuration-option?view=sql-server-ver15\">\u201coptimize for ad hoc workloads\u201d<\/a> server configuration option enabled.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92040\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-66.png\" alt=\"Results of sys.dm_exec_cached_plans\" width=\"1066\" height=\"518\" \/><\/p>\n<p>Based on those numbers, the server received many ad hoc queries (an ad hoc workload).<\/p>\n<p>Before I move on with the analysis, I\u2019ll recap how SQL Server manages the plan cache for ad hoc queries, as this is important for understanding the scenario.<\/p>\n<p><strong>Read also: <br \/><\/strong><a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/the-default-trace-in-sql-server-the-power-of-performance-and-security-auditing\/\"><span data-sheets-root=\"1\">The default trace for performance auditing<\/span><\/a><br \/><a 
href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/identifying-and-solving-index-scan-problems\/\"><span data-sheets-root=\"1\">Identifying and solving index scan problems<\/span><\/a><br \/><br \/><\/p>\n<h2>Plan cache for ad hoc queries<\/h2>\n<p>As <a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/fixing-cache-bloat-problems-with-guide-plans-and-forced-parameterization\/\">I wrote before on simple-talk<\/a>, the chance that a query plan for an ad hoc query will be reused is very low because of the way SQL Server caches ad hoc plans.<\/p>\n<p>In general, the cached plan for an ad hoc query will only be reused under the following conditions:<\/p>\n<ul>\n<li>The subsequent query statement matches exactly (that includes the query text and the filter values).<\/li>\n<li>The query qualifies for simple parameterization (also referred to as auto-parameterization).<\/li>\n<li>The query qualifies for forced parameterization, and it is enabled at the database level or via a plan guide using the <code>PARAMETERIZATION FORCED<\/code> query hint.<\/li>\n<\/ul>\n<p>Take a look at a sample of all the mentioned conditions. 
To run the tests, I\u2019m using the following script to populate the sample tables:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">USE Northwind\nGO\n-- Run tests without ad hoc workloads\nEXEC sys.sp_configure N'show advanced options','1';\nGO\nRECONFIGURE WITH OVERRIDE;\nGO\nEXEC sys.sp_configure N'optimize for ad hoc workloads', N'0'\nGO\nRECONFIGURE WITH OVERRIDE\nGO\nIF OBJECT_ID('OrdersBig') IS NOT NULL\n  DROP TABLE OrdersBig\nGO\nSELECT TOP 500000\n       IDENTITY(Int, 1,1) AS OrderID,\n       ABS(CheckSUM(NEWID()) \/ 10000000) AS CustomerID,\n       CONVERT(Date, GETDATE() - (CheckSUM(NEWID()) \/ 1000000)) AS OrderDate,\n       ISNULL(ABS(CONVERT(Numeric(18,2), (CheckSUM(NEWID()) \/ 1000000.5))),0) AS Value,\n       CONVERT(VarChar(250), NEWID()) AS Col1\n  INTO OrdersBig\n  FROM master.dbo.spt_values A\n CROSS JOIN master.dbo.spt_values B CROSS JOIN master.dbo.spt_values C CROSS JOIN master.dbo.spt_values D\nGO\nALTER TABLE OrdersBig ADD CONSTRAINT xpk_OrdersBig PRIMARY KEY(OrderID)\nGO\nIF OBJECT_ID('CustomersBig') IS NOT NULL\n  DROP TABLE CustomersBig\nGO\nSELECT TOP 500000\n       IDENTITY(Int, 1,1) AS CustomerID,\n       SubString(CONVERT(VarChar(250),NEWID()),1,20) AS CompanyName, \n       SubString(CONVERT(VarChar(250),NEWID()),1,20) AS ContactName, \n       CONVERT(VarChar(250), NEWID()) AS Col1, \n       CONVERT(VarChar(250), NEWID()) AS Col2\n  INTO CustomersBig\n  FROM master.dbo.spt_values A\n CROSS JOIN master.dbo.spt_values B CROSS JOIN master.dbo.spt_values C CROSS JOIN master.dbo.spt_values D\nGO\nALTER TABLE CustomersBig ADD CONSTRAINT xpk_CustomersBig PRIMARY KEY(CustomerID)\nGO\nIF OBJECT_ID('ProductsBig') IS NOT NULL\n  DROP TABLE ProductsBig\nGO\nSELECT TOP 500000 IDENTITY(Int, 1,1) AS ProductID, \n       SubString(CONVERT(VarChar(250),NEWID()),1,20) AS ProductName, \n       CONVERT(VarChar(250), NEWID()) AS Col1\n  INTO ProductsBig\n  FROM master.dbo.spt_values A\n CROSS JOIN master.dbo.spt_values B CROSS JOIN 
master.dbo.spt_values C CROSS JOIN master.dbo.spt_values D\nGO\nUPDATE ProductsBig SET ProductName = 'Produto qualquer'\nWHERE ProductID = 1\nGO\nALTER TABLE ProductsBig ADD CONSTRAINT xpk_ProductsBig PRIMARY KEY(ProductID)\nGO\nIF OBJECT_ID('Order_DetailsBig') IS NOT NULL\n  DROP TABLE Order_DetailsBig\nGO\nSELECT OrdersBig.OrderID,\n       ISNULL(CONVERT(Integer, CONVERT(Integer, ABS(CheckSUM(NEWID())) \/ 1000000)),0) AS ProductID,\n       GetDate() -  ABS(CheckSUM(NEWID())) \/ 1000000 AS Shipped_Date,\n       CONVERT(Integer, ABS(CheckSUM(NEWID())) \/ 1000000) AS Quantity\n  INTO Order_DetailsBig\n  FROM OrdersBig\nGO\nALTER TABLE Order_DetailsBig ADD CONSTRAINT [xpk_Order_DetailsBig] PRIMARY KEY([OrderID], [ProductID])\nGO\nCREATE INDEX ixContactName ON CustomersBig(ContactName)\nCREATE INDEX ixProductName ON ProductsBig(ProductName)\nCREATE INDEX ixCustomerID ON OrdersBig(CustomerID) INCLUDE(Value)\nCREATE INDEX ixProductID ON Order_DetailsBig(ProductID) INCLUDE(Quantity)\nCREATE INDEX ixCol1OrderDate ON OrdersBig(Col1, OrderDate) INCLUDE(CustomerID, Value)\nGO<\/pre>\n<h2>Statement match<\/h2>\n<p>For example, execute the following four queries in the database.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Statement match\n-- SET PARAMETERIZATION to SIMPLE \nALTER DATABASE Northwind SET PARAMETERIZATION SIMPLE;\nGO\nDBCC FREEPROCCACHE()\nGO\nSELECT * FROM CustomersBig WHERE ContactName = 'Fabiano' OR CompanyName = 'A company'\nGO\nSELECT * FROM CustomersBig WHERE ContactName = 'Fabiano' OR CompanyName = 'A company'\nGO\nSELECT * \/*important comment*\/ FROM CustomersBig WHERE ContactName = 'Fabiano' OR CompanyName = 'A company'\nGO\nSELECT * FROM CustomersBig WHERE ContactName = 'Amorim' OR CompanyName = 'A company'\nGO<\/pre>\n<p>The first and second queries will use the same plan, but the third and fourth will need to generate a new plan. 
Notice that the only difference between the second and third queries is the comment, which is enough to invalidate plan reuse.<\/p>\n<p>Following are the details from the cache DMVs:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT a.plan_handle, \n       a.usecounts,\n       a.cacheobjtype,\n       a.objtype,\n       a.size_in_bytes,\n       b.text,\n       c.query_plan\n  FROM sys.dm_exec_cached_plans a\n OUTER APPLY sys.dm_exec_sql_text (a.plan_handle) b\n OUTER APPLY sys.dm_exec_query_plan (a.plan_handle) c\n WHERE b.\"text\" NOT LIKE '%sys.%'\n   AND b.\"text\" LIKE '%CustomersBig%'\n ORDER BY a.usecounts DESC\nGO<\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92041\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-67.png\" alt=\"Results of plan cache query\" width=\"996\" height=\"111\" \/><\/p>\n<p>As you can see, to take advantage of the reuse of ad hoc query plans, you need to make sure that the queries are identical, character for character. If one query has a new line or an extra space that another one doesn&#8217;t have, they will not be treated as the same. If one contains a comment that the other doesn&#8217;t have, they will not be identical. If one uses a different case for either identifiers or keywords, even in a DB with a case-insensitive collation, the queries will not be considered the same.<\/p>\n<p>Since SQL considered those queries to be different, each call triggered a query plan compilation and optimization.<\/p>\n<h2>Simple parameterization<\/h2>\n<p>SQL Server may use \u2018simple parameterization\u2019 to turn the literal values into parameters and increase the ability to match SQL statements for different literal values. When this happens, subsequent queries that follow the same basic parameterized query can use the same plan. 
For example:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Auto-param\n-- SET PARAMETERIZATION to SIMPLE \nALTER DATABASE Northwind SET PARAMETERIZATION SIMPLE;\nGO\nDBCC FREEPROCCACHE()\nGO\nSELECT * FROM OrdersBig WHERE OrderID = 1\nGO\nSELECT * FROM OrdersBig WHERE OrderID = 2\nGO\nSELECT * FROM OrdersBig WHERE OrderID = 3\nGO<\/pre>\n<p>Internally, SQL Server parameterizes the queries as follows:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT * FROM [OrdersBig] WHERE [OrderID]=@1<\/pre>\n<p>You can also see the parameterized query in the actual execution plan; notice that the @1 variable is used instead of the literal value:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92042\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-68.png\" alt=\"Execution plan showing parameter\" width=\"430\" height=\"188\" \/><\/p>\n<p>Following, you can see the details from the cache DMVs:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT a.plan_handle, \n       a.usecounts,\n       a.cacheobjtype,\n       a.objtype,\n       b.text,\n       c.query_plan\n  FROM sys.dm_exec_cached_plans a\n OUTER APPLY sys.dm_exec_sql_text (a.plan_handle) b\n OUTER APPLY sys.dm_exec_query_plan (a.plan_handle) c\n WHERE b.\"text\" NOT LIKE '%sys.%'\n   AND b.\"text\" LIKE '%OrdersBig%'\n ORDER BY a.usecounts DESC\nGO<\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92043\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-69.png\" alt=\"Cached plans\" width=\"990\" height=\"128\" \/><\/p>\n<p>Notice that the three individual queries with their distinct constants do get cached as ad hoc queries. 
However, these are only considered shell queries and don\u2019t contain the complete execution plan, only a pointer to the full plan in the corresponding prepared plan.<\/p>\n<p>Let me add more comments about those shell queries because they\u2019re the reason I\u2019m writing this article:<\/p>\n<ul>\n<li>A shell query compilation doesn\u2019t result in an optimization since only the prepared\/parameterized query gets optimized. This is good, but the side effect of this is that even those shell queries that don\u2019t get reused will often be cached and use memory. To minimize the impact of those queries, SQL caches them as zero-cost queries. That way, under a memory pressure condition, these would be among the first entries that would be freed from cache. You could look at sys.dm_os_memory_cache_entries to check the cost of each memory object entry.<\/li>\n<li>The shell queries are only cached to make it easier to find the prepared version of the query if the exact same query with the same constant is executed again later. It will only be useful if you actually re-run the same ad hoc query again, which, for most scenarios, is very unlikely (at least, considering the majority of the queries). Microsoft probably have a reason why they\u2019re doing it, but I\u2019ve noticed that most environments are hitting the plan cache limit due to the high number of ad hoc queries, and forced parameterization can do nothing to help with it. In the article <a href=\"https:\/\/techcommunity.microsoft.com\/t5\/sql-server\/4-0-useful-queries-on-dmv-8217-s-to-understand-plan-cache\/ba-p\/383220\">\u201cUseful Queries on DMV\u2019s to understand Plan Cache Behavior\u201d<\/a> they mentioned that \u201cIn some workloads, there is reuse of adhoc queries with the exact same parameter values. In such cases caching of the shell query proves gives better throughput.\u201d<\/li>\n<li>In my opinion, storing those garbage ad hoc queries in a limited cache space is not a good idea. 
I wish I had an option (trace flag, anyone?) to change this behavior and not cache the ad hoc queries when a prepared version of the query exists.<\/li>\n<\/ul>\n<p>Take a look at the XML for the second row (the one with the ad hoc plan with OrderID = 3):<\/p>\n<pre class=\"lang:c# theme:vs2012\">&lt;ShowPlanXML xmlns=\"http:\/\/schemas.microsoft.com\/sqlserver\/2004\/07\/showplan\" Version=\"1.539\" Build=\"15.0.4138.2\"&gt;\n  &lt;BatchSequence&gt;\n    &lt;Batch&gt;\n      &lt;Statements&gt;\n        &lt;StmtSimple StatementText=\"SELECT * FROM OrdersBig WHERE OrderID = 3\" StatementId=\"1\" StatementCompId=\"1\" StatementType=\"SELECT\" RetrievedFromCache=\"true\" ParameterizedPlanHandle=\"0x0600050024F5793700E98CD06C02000001000000000000000000000000000000000000000000000000000000\" ParameterizedText=\"(@1 tinyint)SELECT * FROM [OrdersBig] WHERE [OrderID]=@1\" \/&gt;\n      &lt;\/Statements&gt;\n    &lt;\/Batch&gt;\n  &lt;\/BatchSequence&gt;\n&lt;\/ShowPlanXML&gt;<\/pre>\n<p>Notice that there is an attribute ParameterizedPlanHandle with value \u201c0x0600050024F5793700E98CD06C02000001000000000000000000000000000000000000000000000000000000\u201d which is the pointer to the prepared query that has the full execution plan.<\/p>\n<p>By default, SQL Server simple-parameterizes a relatively small class of queries. The engine is very conservative about deciding when to use simple parameterization for a query. 
It will only parameterize queries if the prepared query plan is considered to be safe.<\/p>\n<p><strong>Read also:\u00a0<br \/><\/strong><a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/fixing-cache-bloat-problems-with-guide-plans-and-forced-parameterization\/\"><span data-sheets-root=\"1\">Query Store for diagnosing parameter sniffing<br \/>Fixing cache bloat with guide plans and forced parameterization<br \/><\/span><\/a><\/p>\n<h2>Forced parameterization<\/h2>\n<p>You can enable forced parameterization at the DB level to enable parameterization for all queries in the database, subject to certain limitations (for a list of limitations and exceptions, check the <a href=\"https:\/\/docs.microsoft.com\/en-us\/sql\/relational-databases\/query-processing-architecture-guide?view=sql-server-ver15#ForcedParam\">BOL<\/a>).<\/p>\n<p>Once forced param is enabled, the result will be pretty much the same as simple-param, but for all queries.<\/p>\n<p>For example, consider the following queries that would not be simple-parameterized but are parameterized under forced param:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Forced-param\n-- SET PARAMETERIZATION to FORCED \nALTER DATABASE Northwind SET PARAMETERIZATION FORCED;\nGO\nDBCC FREEPROCCACHE()\nGO\nSELECT * FROM OrdersBig WHERE OrderID = 1 OR Value &lt; 0.2\nGO\nSELECT * FROM OrdersBig WHERE OrderID = 2 OR Value &lt; 0.4\nGO\nSELECT * FROM OrdersBig WHERE OrderID = 3 OR Value &lt; 0.7\nGO<\/pre>\n<p>Following, you can see the details from the cache DMVs:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"716\" height=\"104\" class=\"wp-image-92044\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-70.png\" \/><\/p>\n<h2>Stored procedures and sp_executesql<\/h2>\n<p>It is a best practice to use stored procedures or rely on data access methods parameterization using param markers to increase the reuse of execution 
plans. For example, consider the same query used with forced param. Once the developer identifies all the parameters in the query, they can use parameter markers (question marks) that replace constants in the SQL statement and are bound to program variables. Using parameters at the application level, you would see the following calls in SQL Server:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- sp_executesql\n-- SET PARAMETERIZATION to SIMPLE\nALTER DATABASE Northwind SET PARAMETERIZATION SIMPLE;\nGO\nDBCC FREEPROCCACHE()\nGO\nEXEC sp_executesql N'SELECT * FROM OrdersBig WHERE OrderID = @OrderID OR Value &lt; @Value', \n                   N'@OrderID Int, @Value NUMERIC(18,2)',\n                   @OrderID = 1,\n                   @Value = 0.2\nGO\nEXEC sp_executesql N'SELECT * FROM OrdersBig WHERE OrderID = @OrderID OR Value &lt; @Value', \n                   N'@OrderID Int, @Value NUMERIC(18,2)',\n                   @OrderID = 2,\n                   @Value = 0.4\nGO\nEXEC sp_executesql N'SELECT * FROM OrdersBig WHERE OrderID = @OrderID OR Value &lt; @Value', \n                   N'@OrderID Int, @Value NUMERIC(18,2)',\n                   @OrderID = 3,\n                   @Value = 0.7\nGO<\/pre>\n<p>Following, you can see the details from the cache DMVs:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"931\" height=\"82\" class=\"wp-image-92045\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-71.png\" \/><\/p>\n<p>When using <code>sp_executesql<\/code>, there are no entries for the ad hoc (unparameterized) shell queries. Not having the ad hoc shell queries in the plan cache is a key factor for the analysis I was doing, as 80% of the plan cache was filled with those shell queries.<\/p>\n<p>Note: One more point in favor of parameterization. 
Using parameters to hold values that end users type is more secure than concatenating the values into a string that is then executed by using either a data access API method, the EXECUTE statement, or the sp_executesql stored procedure. SQL injection, anyone?<\/p>\n<h2>Back to the scenario analysis<\/h2>\n<p>Now that you understand how SQL manages the plan cache for ad hoc queries, it\u2019s time to get back to the analysis.<\/p>\n<p>Once I looked at the batch requests\/sec and compilations\/sec numbers, I started to question myself: How can I have such a high number of compilations\/sec if most requests use forced parameterization? (Notice that the forced parameterizations\/sec counter is pretty much the same as compilations\/sec.) If an ad hoc query execution is parameterized (via forced param), how could I still have a compilation? Shouldn\u2019t it use the \u201cprepared\u201d plan and count as a cache hit (found the plan in cache)?<\/p>\n<p>Remember, under both simple-param and forced-param, SQL Server also adds the ad hoc shell query to the plan cache. When that happens, since the plan for the ad hoc query is not in cache, it counts as a cache miss (couldn\u2019t find the query plan in cache). This will decrease the plan cache hit ratio and increase compilations\/sec. However, there is an important thing to consider: those compilations do not trigger an optimization, so their CPU cost is not so bad.<\/p>\n<p>In other words, considering CPU usage, a high ratio of compilations\/sec to batch requests\/sec may not necessarily be a problem. It is still definitely something to be concerned about if you consider the internal\/local memory pressure it may cause. 
As a result of having many ad hoc plans in cache, you may have the following problems:<\/p>\n<ol>\n<li>If the number of ad hoc shell entries gets too big, you\u2019ll reach a point where there is no more room left in the plan cache and you hit the limit of entries in the plan cache memory table. This will make SQL run the eviction policy algorithm to determine which plans to remove from cache.<\/li>\n<li>When the plan cache reaches its entry limit, plans with low cost must be evicted in order to insert new plans. Suppose there are many concurrent users\/sessions trying to insert or remove rows in the same hash table (in the case of ad hoc queries, I\u2019m talking about the SQL Plans internal area). In that case, there may be spinlock (SOS_CACHESTORE and\/or SOS_CACHESTORE_CLOCK) contention, which can cause high CPU usage.<\/li>\n<li>Consider a scenario with the same ad hoc query being called hundreds of times per second. SQL will cache the ad hoc shell query for each unique call (individual calls using different constant values), even though there may be only one cached prepared plan available and being reused. In such scenarios, the CMEMTHREAD wait may start to pop up, indicating you have contention on the memory object from which cache descriptors are allocated. Even though Microsoft already <a href=\"https:\/\/support.microsoft.com\/en-us\/topic\/kb3074425-fix-cmemthread-waits-occur-when-you-execute-many-ad-hoc-queries-in-sql-server-2012-or-sql-server-2014-4148404e-a2b2-2196-8449-2c22c9439fe6\">tried to optimize the code<\/a> to avoid this, you may still see those waits.<\/li>\n<li>If optimize for ad hoc workloads is not enabled, you\u2019ll probably end up with a <a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/fixing-cache-bloat-problems-with-guide-plans-and-forced-parameterization\/\">cache bloat problem<\/a>.<\/li>\n<\/ol>\n<p>Considering that, I decided to check the internal memory pressure condition. 
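<\/p>\n<p>A quick way to look for that kind of contention is to query the wait statistics (a simple sketch; these numbers are cumulative since the last service restart or wait-statistics clear):<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Check the wait related to cache descriptor memory contention\nSELECT wait_type,\n       waiting_tasks_count,\n       wait_time_ms,\n       signal_wait_time_ms\n  FROM sys.dm_os_wait_stats\n WHERE wait_type = 'CMEMTHREAD'\n   AND waiting_tasks_count &gt; 0;<\/pre>\n<p>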
I started by looking at the wait CMEMTHREAD, and it is indeed there.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92046\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-72.png\" alt=\"Memory pressure\" width=\"734\" height=\"382\" \/><\/p>\n<p>I\u2019ve also looked at DMV <code>sys.dm_os_memory_cache_clock_hands<\/code>, This DMV has information about how many clock rounds have been made for each cache store. The query below returns information about cache stores with most entries removed in the last round. Notice that internal (internal and external clock hand distinguishes internal and external memory pressure respectively) clock hand for <code>CACHESTORE_SQLCP<\/code> indicates the number of entries (plans) removed in the last round was 161098 (pretty much everything).<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT mcch.name,\n       mcch.[type],\n       mcch.clock_hand,\n       mcch.clock_status,\n       SUM(mcch.rounds_count) AS rounds_count,\n       SUM(mcch.removed_all_rounds_count) AS cache_entries_removed_all_rounds,\n       SUM(mcch.removed_last_round_count) AS cache_entries_removed_last_round,\n       SUM(mcch.updated_last_round_count) AS cache_entries_updated_last_round,\n       SUM(mcc.pages_kb) AS cache_pages_kb,\n       SUM(mcc.pages_in_use_kb) AS cache_pages_in_use_kb,\n       SUM(mcc.entries_count) AS cache_entries_count,\n       SUM(mcc.entries_in_use_count) AS cache_entries_in_use_count,\n       CASE\n           WHEN mcch.last_tick_time\n                BETWEEN -2147483648 AND 2147483647\n                AND si.ms_ticks\n                BETWEEN -2147483648 AND 2147483647 THEN\n               DATEADD(ms, mcch.last_tick_time - si.ms_ticks, GETDATE())\n           WHEN mcch.last_tick_time \/ 1000\n                BETWEEN -2147483648 AND 2147483647\n                AND si.ms_ticks \/ 1000\n                BETWEEN -2147483648 AND 2147483647 THEN\n               
DATEADD(s, (mcch.last_tick_time \/ 1000) - (si.ms_ticks \/ 1000), GETDATE())\n           ELSE\n               NULL\n       END AS last_clock_hand_move\nFROM sys.dm_os_memory_cache_counters mcc (NOLOCK)\n    INNER JOIN sys.dm_os_memory_cache_clock_hands mcch (NOLOCK)\n        ON mcc.cache_address = mcch.cache_address\n    CROSS JOIN sys.dm_os_sys_info si (NOLOCK)\nWHERE mcch.rounds_count &gt; 0\nGROUP BY mcch.name,\n         mcch.[type],\n         mcch.clock_hand,\n         mcch.clock_status,\n         mcc.pages_kb,\n         mcc.pages_in_use_kb,\n         mcch.last_tick_time,\n         si.ms_ticks,\n         mcc.entries_count,\n         mcc.entries_in_use_count\nORDER BY SUM(mcch.removed_all_rounds_count) DESC,\n         mcch.[type];<\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1187\" height=\"277\" class=\"wp-image-92047\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-73.png\" \/><\/p>\n<p>I also noticed that this was removing an average of 161k entries every 5 seconds.<\/p>\n<p>Luckily, the number of CPUs (96) available in the server was good enough to keep up with the workload and avoid CPU pressure, but that only happened because it had forced parameterization enabled in all DBs in the instance. This was helping to avoid the full optimization of all the ad hoc queries, which I\u2019m sure would otherwise make the CPU go to 100% in just a few seconds. 
Forced parameterization does fix the issue of optimizing the query plan for each ad hoc query execution, but it <strong><em>does not<\/em><\/strong> fix the issue of compiling the plan and bloating the cache with garbage plans, which may cause internal memory pressure and, consequently, plan cache cleanup and spinlock contention.<\/p>\n<p>The lesson I learned was that, from now on, I\u2019ll look not only at batch requests\/sec, compilations\/sec and plan cache hit ratio, but also at another important counter: \u201cWorkload Group Stats: Query optimizations\/sec\u201d. Considering this counter makes it easier to identify those cases where ad hoc queries are causing a high number of compilations but not necessarily triggering optimizations. For instance, the following is the same image as before, but now with the info about the query optimizations\/sec:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92048\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-74.png\" alt=\"Perfmon counter report\" width=\"707\" height=\"495\" \/><\/p>\n<p>Notice that out of 20905 compilations per second, there were only 247 query optimizations per second\u2014a much better picture of what is going on with the server. 
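<\/p>\n<p>If you prefer T-SQL over Perfmon, the same comparison can be sketched from <code>sys.dm_os_performance_counters<\/code> (an assumption here: the workload group is the default one created by Resource Governor; like the other \u201cper second\u201d counters, these values are cumulative, so compare two samples taken over an interval):<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Cumulative compilations vs. query optimizations\nSELECT object_name, counter_name, instance_name, cntr_value\n  FROM sys.dm_os_performance_counters\n WHERE counter_name = 'SQL Compilations\/sec'\n    OR (counter_name = 'Query optimizations\/sec'\n        AND object_name LIKE '%Workload Group Stats%');<\/pre>\n<p>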
The general guideline of \u201cCompilations\/sec should be less than 10% of the Batch Requests\/sec\u201d is still valid as a good practice, but, as you know, a general guideline doesn\u2019t apply to all scenarios.<\/p>\n<h2>Time for some tests<\/h2>\n<p>Here are some tests that simulate the same scenario I presented before, to see whether it is possible to identify the problems and to analyze the alternatives and the benefits they may provide.<\/p>\n<p>To run the tests, I created the following PowerShell script to run an ad hoc query 50000 times.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">Clear-Host\n$ScriptBlock = {\n    $conn = New-Object System.Data.SqlClient.SqlConnection(\"Server=vm3;Database=Northwind;User ID=sa;Password=-------;\")\n    $conn.Open()\n    $i = 1\n    while ($i -le 50000){\n        try {\n            $sqlCmd = New-Object System.Data.SqlClient.SqlCommand\n            $sqlCmd.Connection = $conn\n            [string]$Guid = New-Guid\n            $query = \"SELECT *, (Select 1) AS Num FROM OrdersBig o \n                        WHERE o.Col1 = '$Guid'\"\n            $sqlCmd.CommandText = $query\n            $reader = $sqlCmd.ExecuteReader()\n            $reader.Close()\n            $i = $i + 1\n        } \n        catch {\n            $ErrorMessage = $_.Exception.Message\n            $vGetDate = Get-Date -Format G\n            Write-Output \"$vGetDate : Connection failed: $ErrorMessage ... retrying...\"\n        }\n    }\n    $conn.Dispose()\n}\n$current = Get-Date\nInvoke-Command -ScriptBlock $ScriptBlock\n$end = Get-Date\n$diff = New-TimeSpan -Start $current -End $end\nWrite-Output \"Time to run the script: $diff\"\n&lt;#\nexec xp_cmdshell 'powershell -NoLogo -NoProfile \"C:\\Temp\\Test.ps1\"'\n#&gt; <\/pre>\n<p>Then, I\u2019m using SQLQueryStress to call the script with 100 threads using <code>xp_cmdshell<\/code>. 
(I know there are easier ways to do it via PS, but that\u2019s good enough for me \ud83d\ude0a)<\/p>\n<p>Before running the tests, I\u2019m also enabling optimize for ad hoc workloads, to avoid memory pressure due to the size of the cached plans, using the following code:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Run tests with ad hoc workloads\nEXEC sys.sp_configure N'optimize for ad hoc workloads', N'1'\nGO\nRECONFIGURE WITH OVERRIDE\nGO<\/pre>\n<h2>Test 1: Ad hoc with simple param triggering compilation and optimization<\/h2>\n<p>For the first execution, I tried the script with simple parameterization using 100 concurrent threads. Since the query is a bit complex, it won\u2019t be auto-parameterized, which means it will trigger a compilation and an optimization for each call. The script took 12 minutes and 16 seconds to run, and while it was running, I saw the following counter numbers:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92049\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-75.png\" alt=\"Perfmon optimizations\/sec\" width=\"423\" height=\"288\" \/><\/p>\n<p>As per the image above, I was able to run an average of 7597 batch requests per second. After a few seconds, I hit the limit of entries in the plan cache. Also, as expected, each ad hoc query execution is compiled and optimized. Since those operations are CPU intensive, even at only 7597 batch requests\/sec of a very lightweight query (it returns nothing and runs very fast), the server was using almost 100% of CPU on my 96-core VM. 
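<\/p>\n<p>To see how quickly the cache fills up with single-use plans during a test like this, a quick look at <code>sys.dm_exec_cached_plans<\/code> helps. This is just a sketch (it requires VIEW SERVER STATE permission):<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Count cached plans per object type and how many were used only once\nSELECT objtype,\n       COUNT(*) AS total_plans,\n       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans,\n       SUM(CAST(size_in_bytes AS bigint)) \/ 1024 \/ 1024 AS total_size_mb\nFROM sys.dm_exec_cached_plans\nGROUP BY objtype\nORDER BY single_use_plans DESC;<\/pre>\n<p>In a scenario like Test 1, almost everything lands in the Adhoc bucket with usecounts = 1 (stored as compiled plan stubs, since optimize for ad hoc workloads is enabled).<\/p>\n<p>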
That\u2019s a lot of CPU to do nothing other than creating execution plans.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92050\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-76.png\" alt=\"CPU Utilization\" width=\"939\" height=\"680\" \/><\/p>\n<h2>Test 2: Ad hoc with forced param triggering compilation and only 1 optimization<\/h2>\n<p>For the second execution, I tried the script with forced parameterization, using the same 100 concurrent threads as before. This triggers an optimization only for the first call of the query; all subsequent calls reuse the prepared plan. The script took only 58 seconds to run, and while it was running, I saw the following counter numbers:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92051\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-77.png\" alt=\"Results with parameterization\" width=\"417\" height=\"297\" \/><\/p>\n<p>As per the image above, I was able to run an average of 94 thousand batch requests per second and, as expected, the compilations\/sec counter shows the same 94 thousand value, because each ad hoc execution still counts as a compilation even though there were no optimizations. Because there was no extra CPU cost related to optimizing each call, I was able to run a lot more requests per second.<\/p>\n<h2>Test 3: Parameterized queries in the application<\/h2>\n<p>For the third execution, I tried the script with a query parameterized from the application. 
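<\/p>\n<p>(As an aside, the forced parameterization behavior used in Test 2 is controlled per database. The commands are shown here for reference, using the Northwind test database; Test 3 does not depend on this setting, since the parameterization happens in the client.)<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- Enable forced parameterization for the test database (used in Test 2)\nALTER DATABASE Northwind SET PARAMETERIZATION FORCED;\nGO\n-- Revert to the default behavior afterwards\nALTER DATABASE Northwind SET PARAMETERIZATION SIMPLE;\nGO<\/pre>\n<p>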
Again, I used the same 100 concurrent threads as before.<\/p>\n<p>To parameterize it from the PS script, I\u2019ve changed the following part of the code:<\/p>\n<pre class=\"lang:ps theme:powershell-ise\">...\n[string]$Guid = New-Guid\n$Col1Param = New-Object System.Data.SqlClient.SqlParameter(\"@Col1\",[Data.SQLDBType]::varchar, 250)\n$Col1Param.Value = $Guid\n$sqlCmd.Parameters.Add($Col1Param) | Out-Null\n$query = \"SELECT * FROM OrdersBig o \n            INNER JOIN CustomersBig c ON o.CustomerID = c.CustomerID \n            INNER JOIN Order_DetailsBig od ON o.OrderID = od.OrderID \n            INNER JOIN ProductsBig p ON od.ProductID = p.ProductID \n            WHERE o.Col1 = @Col1\"\n$sqlCmd.CommandText = $query \n...<\/pre>\n<p>Now, the script took only 51 seconds to run, and while it was running, I saw the following counter numbers:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92052\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-78.png\" alt=\"Perfmon counters for parameterization\" width=\"410\" height=\"289\" \/><\/p>\n<p>This is 106469 batch requests per second, with CPU usage averaging 90%. At this point I thought: well, at 90% CPU, I can probably add a few more threads to see how it goes. Here is the result of the execution using 150 threads:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-92053\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-79.png\" alt=\"Result after increasing batches\" width=\"424\" height=\"296\" \/><\/p>\n<p>125428 batch requests\/sec, that\u2019s a lot of requests \ud83d\ude0a.<\/p>\n<p>This works under both simple and forced parameterization, so it is a good way to guarantee parameterized behavior regardless of the server-side settings.<\/p>\n<h2>To summarize the results:<\/h2>\n<p>The following table shows the summarized results. 
Using parameterized queries from the application, I was able to run 125 thousand batch requests per second, which is impressive.<\/p>\n<table>\n<tbody>\n<tr>\n<td>\n<p><strong>Scenario<\/strong><\/p>\n<\/td>\n<td>\n<p><strong>Parameterization<\/strong><\/p>\n<\/td>\n<td>\n<p><strong>Threads<\/strong><\/p>\n<\/td>\n<td>\n<p><strong>Avg of batch requests\/sec<\/strong><\/p>\n<\/td>\n<td>\n<p><strong>Time to run all requests<\/strong><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<p>Test 1<\/p>\n<\/td>\n<td>\n<p>Simple without auto param<\/p>\n<\/td>\n<td>\n<p>100<\/p>\n<\/td>\n<td>\n<p>7597<\/p>\n<\/td>\n<td>\n<p>12 minutes and 16 seconds<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<p>Test 2<\/p>\n<\/td>\n<td>\n<p>Forced<\/p>\n<\/td>\n<td>\n<p>100<\/p>\n<\/td>\n<td>\n<p>94988<\/p>\n<\/td>\n<td>\n<p>58 seconds<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<p>Test 3<\/p>\n<\/td>\n<td>\n<p>sp_executesql<\/p>\n<\/td>\n<td>\n<p>100<\/p>\n<\/td>\n<td>\n<p>106469<\/p>\n<\/td>\n<td>\n<p>51 seconds<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<p>Test 3<\/p>\n<\/td>\n<td>\n<p>sp_executesql<\/p>\n<\/td>\n<td>\n<p>150<\/p>\n<\/td>\n<td>\n<p>125428<\/p>\n<\/td>\n<td>\n<p>61 seconds<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>SQL compilations\/sec<\/h2>\n<p>Parameterization plays a very important role in the performance of a system. Although simple parameterization, forced parameterization, and the optimize for ad hoc workloads feature can help to minimize the impact of a lack of parameterization, it is a good practice to parameterize queries on the application side or to use stored procedures.<\/p>\n<p>Using stored procedures is also good because it usually avoids issues with implicit conversions caused by wrong parameter data types (although even with stored procedures, this can still happen if a variable\u2019s data type differs from the table column\u2019s). 
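<\/p>\n<p>It is worth seeing what Test 3\u2019s client-side parameterization looks like once it reaches the server: ADO.NET sends a parameterized <code>SqlCommand<\/code> as a call to <code>sp_executesql<\/code>, so every execution maps to a single cached prepared plan. The following is a simplified T-SQL equivalent (the GUID value is just a placeholder):<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">-- One prepared plan is compiled on the first call and reused by all later calls,\n-- no matter which @Col1 value is passed\nEXEC sys.sp_executesql\n    N'SELECT *, (Select 1) AS Num FROM OrdersBig o WHERE o.Col1 = @Col1',\n    N'@Col1 varchar(250)',\n    @Col1 = '00000000-0000-0000-0000-000000000000';<\/pre>\n<p>Stored procedures give you the same plan reuse, with the extra benefit that parameter data types are declared explicitly.<\/p>\n<p>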
This will reduce memory consumption, reduce memory pressure, and reduce CPU usage by saving the compilation and optimization time of each query.<\/p>\n<p>If you liked this article, you might also like\u00a0<a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/sql-server-performance-monitor-data-introduction-and-usage\/\">SQL Server performance monitor data: Introduction and usage<\/a><\/p>\n\n\n<section id=\"faq\" class=\"faq-block my-5xl\">\n    <h2>FAQs: SQL Compilations\/sec in SQL Server<\/h2>\n\n                        <h3 class=\"mt-4xl\">1. 
What does the SQL Compilations\/sec counter actually measure in SQL Server?<\/h3>\n            <div class=\"faq-answer\">\n                <p>SQL Compilations\/sec counts the number of times SQL Server enters the compile code path per second, including both full compilations and compiled plan stub creations. This is important because with \u201coptimize for ad hoc workloads\u201d enabled, a first-execution stub creation counts as a compilation, but the second-execution full compilation (when the stub is promoted to a full plan) does not increment the counter. This makes the metric unreliable as a direct measure of compilation work for ad hoc workloads.<\/p>\n            <\/div>\n                    <h3 class=\"mt-4xl\">2. Why is my SQL Compilations\/sec ratio to Batch Requests\/sec so high in SQL Server?<\/h3>\n            <div class=\"faq-answer\">\n                <p>A high Compilations\/sec-to-Batch-Requests\/sec ratio (above 10%) typically indicates an ad hoc workload &#8211; the server is receiving many unique query texts that cannot reuse cached plans. Check sys.dm_exec_cached_plans for the ratio of single-use ad hoc plans to total cached plans. If it\u2019s above 50\u201360%, you have an ad hoc workload. Solutions include enabling \u201coptimize for ad hoc workloads,\u201d using forced parameterization, or refactoring application code to use sp_executesql for parameterized queries.<\/p>\n            <\/div>\n                    <h3 class=\"mt-4xl\">3. What is optimize for ad hoc workloads in SQL Server?<\/h3>\n            <div class=\"faq-answer\">\n                <p>The \u201coptimize for ad hoc workloads\u201d server option reduces plan cache bloat by caching only a small compiled plan stub on the first execution of an ad hoc query. The full compiled plan is only cached when the same query text executes a second time. 
This prevents the plan cache from filling up with thousands of single-use plans that will never be reused, freeing memory for plans that actually benefit from caching.<\/p>\n            <\/div>\n            <\/section>\n","protected":false},"excerpt":{"rendered":"<p>Why the standard SQL Compilations\/sec guideline is misleading for ad hoc workloads. Covers plan cache behavior, auto-parameterization, forced parameterization, and the optimize for ad hoc workloads setting.&hellip;<\/p>\n","protected":false},"author":65554,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[53,143529],"tags":[145440,5842,145441,4780],"coauthors":[6809],"class_list":["post-92038","post","type-post","status-publish","format-standard","hentry","category-featured","category-performance-sql-server","tag-sql-compilations-sec","tag-sql-monitor","tag-sql-parameterization","tag-sql-server-performance"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/92038","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/65554"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=92038"}],"version-history":[{"count":6,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/92038\/revisions"}],"predecessor-version":[{"id":109372,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/92038\/revisions\/109372"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=92038"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www
.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=92038"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=92038"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=92038"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}