Changing SET options in a Procedure or Trigger leads to Recompilation (PE012)

Phil Factor delves into SQL Prompt's performance rule, PE012, which will advise you if it detects the use of SET statements within a stored procedure or trigger that might cause unnecessary recompilations. The issue extends to other types of batches, too.

Guest post

This is a guest post from Phil Factor. Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications.

Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career.

He is a regular contributor to Simple Talk and SQLServerCentral.

Sometimes you will have a stored procedure or trigger that intermittently takes longer to run, for no apparent reason. You’ve checked the indexes, ruled out problems like parameter sniffing, but the intermittent performance problem persists. Could it be something as simple as the fact that you have issued a SET statement in the batch, in order to change an execution setting? If so, there is a chance that the issue is caused by SQL Server needing to recompile the procedure or trigger repeatedly.

There is nothing particularly wrong with recompilations, and in fact it’s quite common to force certain queries to recompile on every execution, precisely to avoid bad performance problems related to parameter sniffing, misuse of Execute(), or catch-all queries. However, if recompilations become excessive, especially for frequent or costly queries, then it can become a problem, and it’s worth investigating the cause, which I’ll show how to do with Extended Events.

What are recompilations?

When SQL Server executes an ad-hoc batch or query, or an object such as a stored procedure or trigger, SQL Server compiles an execution plan for each batch or object, and for each query within that batch or object, optimized for the current state of the database, its objects and their data. It takes time and resources for SQL Server’s optimizer to devise this plan, but it must be done before the code can be passed on to the execution engine. Fortunately, we tend to execute the same queries or procedures repeatedly, maybe with different parameters, so SQL Server stores most of the plans it generates in the plan cache, and will ensure that all plans are safe for reuse, regardless of what parameter values we supply. When we execute the same batch or object again, it will simply reuse its cached plan, whenever possible.

Sometimes, however, we re-execute a stored procedure, or resubmit a batch or query the optimizer has seen before, and for which it has an optimized plan in cache, but for some reason it can’t reuse that plan, and compiles a new one. This is a recompilation and it happens for various reasons. It will happen automatically if the execution engine detects that a table has been altered or its statistics have changed substantially, at which point it will mark for recompilation any cached plans for queries that access that table. The next time one of those queries runs, the optimizer will produce a new plan and the old one will be removed.

We can also force the optimizer to continually recompile a plan by attaching to the query an OPTION (RECOMPILE) hint. The plan for that query may still be in the cache, but it will not be reused. This is often done to deal with the erratic performance caused by parameter sniffing, the use of “catch-all” procedures, misuse of Execute(), and so on.
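For example, a query that carries the hint will get a freshly optimized plan on every execution. This is just a sketch; the table and parameter are hypothetical:

DECLARE @CustomerID INT = 42;          -- hypothetical parameter
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders                        -- hypothetical table
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);                    -- never reuse a cached plan for this query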

To save time and resources, SQL Server does statement-level recompilation, where possible. If the plan for only one statement within a batch or stored procedure has been invalidated by underlying changes to the data structures or data, or only one statement has the OPTION (RECOMPILE) hint, then only the plan for the affected statement is recompiled, not the whole batch or procedure.

Occasionally, recompiles can occur that are neither triggered automatically by changes in the data structures or data, nor forced by use of hints. We re-execute the same query on the same database, and there is a cached plan that is a match, in that the SQL text of the submitted query and the SQL text associated with the cached plan match exactly (including spaces and carriage returns), but the plan is not reused.

Again, there are several possible causes. Some we won’t discuss further here, such as references to temporary tables that are not created, statically, in the procedure, or a lack of schema validation. The one we will discuss is that the cached plan was created using different SET options than the ones in use for the connection that submits the query.

The ‘plan-reuse-affecting’ SET options

Changing the values for certain of the SET options, sometimes referred to as the “plan reuse affecting” options, will change the way that queries run, and their results. Therefore, when the optimizer checks its cached plans for a match, it includes a check that the SET options used in compiling the cached plan match those in use for the connection issuing the batch. If they do not match, then it will not reuse the existing plan and instead it compiles a new one.

This means that you can see multiple cached plans, all essentially identical except for the details of these SET options.

These ‘plan-reuse-affecting’ options are, in alphabetic order, ANSI_DEFAULTS, ANSI_NULL_DFLT_OFF, ANSI_NULL_DFLT_ON, ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, DATEFIRST, DATEFORMAT, FORCEPLAN, LANGUAGE, NO_BROWSETABLE, NUMERIC_ROUNDABORT and QUOTED_IDENTIFIER.

These SET statements are detected when SQL Server performs ‘constant folding’ as part of the compilation process, and it seems that in older versions of SQL Server, changing certain of these SET options to certain values could cause a recompilation every time the procedure was called. However, in more recent versions of SQL Server, it is unusual to hear of this problem.

However, it is still wise to understand that changing the SET options, at the start of a batch, or even within a procedure or trigger, can cause a new plan to be compiled, which will be reused only when executing the exact same batch or object with the exact same settings. While causing plans to be recompiled in this way rarely causes major performance problems, it does come with a CPU cost, and could cause issues, especially for complex queries with high compile costs, for queries executed at high frequency, or for multi-statement procedures containing several of both varieties.

Changing connection settings

For an ODBC, ADO or JDBC connection, the way to specify any change to the default settings for a connection is to execute a preliminary batch of SET statements once the connection is first made. There is no option in the connection string to allow this: it must be done with SET statements. In SSMS, you can use the Query menu (Query > Query Options) to specify the Advanced and ANSI standard options for the execution behaviour of your connections. When developing and testing, it is worth setting them to be the same as will be used for connections to the production system. These settings only reflect the execution settings at the point that the connection is made. If you then change the settings within a batch in the connection, those settings will be used for subsequent batches.
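As a sketch, such a preliminary batch might look like the following. The exact values matter less than choosing a single house standard and sticking to it:

SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT OFF;
SET QUOTED_IDENTIFIER ON;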

You’ll notice that the options in the Advanced tab (and the ANSI tab) of that dialog do not cover all the SET options that are ‘plan-reuse-affecting’. The rest must be set using SET statements at the time of making a new connection.

Changing results by changing the SET options

As discussed earlier, a change in a session SET option can, in some cases, cause errors or warnings, or can change the results of a query. A quick demonstration is worth a lot of words, and here I’ll simply change the values of a couple of SET options at the start of each batch:
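The code below is a minimal version of the sort of demo I mean; the table, its values and its name are my own invention:

-- test data: the third row will cause a divide-by-zero
CREATE TABLE dbo.DivideDemo (Numerator INT NOT NULL, Divisor INT NOT NULL);
INSERT INTO dbo.DivideDemo (Numerator, Divisor)
VALUES (10, 2), (10, 5), (10, 0);
GO
-- first batch: ARITHABORT ON (ANSI_WARNINGS is OFF in both batches)
SET ARITHABORT ON;
SET ANSI_WARNINGS OFF;
BEGIN TRY
  SELECT Numerator, Divisor, Numerator / Divisor AS Quotient
    FROM dbo.DivideDemo;
END TRY
BEGIN CATCH
  PRINT ERROR_MESSAGE(); -- the divide-by-zero raises an error here
END CATCH;
GO
-- second batch: the identical query, but with ARITHABORT OFF
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
BEGIN TRY
  SELECT Numerator, Divisor, Numerator / Divisor AS Quotient
    FROM dbo.DivideDemo;
END TRY
BEGIN CATCH
  PRINT ERROR_MESSAGE(); -- never reached: the quotient is simply NULL
END CATCH;
GO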

With ARITHABORT set to ON, the query ends with an error (which we catch) when it hits the divide-by-zero, so it returns 2 rows. When we turn this option OFF, the same query returns 3 rows.

If you inspect the plans for each of these batches, you’ll see that they are the same, except for the values of these SET options (open the properties of the SELECT operators to see them).

The following query will show us what’s going on in the plan cache (I’ve done this on the PhilFactor Database, so you’ll need to change that).
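Something like this, using the plan cache DMVs, will do the job; adjust the database name and the text filter to suit your own setup:

SELECT st.text AS batch_text,
       cp.objtype,
       cp.usecounts,
       pa.value AS set_options
  FROM sys.dm_exec_cached_plans AS cp
  CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
  CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
  WHERE pa.attribute = 'set_options'
    AND st.dbid = DB_ID('PhilFactor')  -- change to your database
    AND st.text LIKE '%DivideDemo%';   -- filter to the demo batches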


Each batch ended up with its own compiled plan because of the different SET option settings (set_options values of 235 and 4331 in my case). You will notice that there is an attribute of the plan called set_options, which gives you a bitmap value combining all the SET options, most of which are simple on/off flags.

Every time you change one of these set options, you will see a new plan created specifically for that set of options, which will clearly increase both the plan cache requirements and the CPU time spent compiling plans. If you execute the two batches ten times, you’ll see that the appropriate plan is reused with no further need for recompilation.

Changing the SET options within stored procedures

So far, we’ve dealt only with batches, but what if, for some reason, you wanted to ensure that individual procedures executed with specific settings?

I’ve encapsulated the same logic in three stored procedures, the first two of which use specific settings for our two options, and the third has none of the SET option statements in it.
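They look something like this; the names are of my own choosing, and they reuse the demo table from earlier:

CREATE OR ALTER PROCEDURE dbo.Divide_ArithAbortOn
AS
  SET ARITHABORT ON;
  SET ANSI_WARNINGS OFF;
  SELECT Numerator, Divisor, Numerator / Divisor AS Quotient
    FROM dbo.DivideDemo;
GO
CREATE OR ALTER PROCEDURE dbo.Divide_ArithAbortOff
AS
  SET ARITHABORT OFF;
  SET ANSI_WARNINGS OFF;
  SELECT Numerator, Divisor, Numerator / Divisor AS Quotient
    FROM dbo.DivideDemo;
GO
CREATE OR ALTER PROCEDURE dbo.Divide_NoSettings
AS
  SELECT Numerator, Divisor, Numerator / Divisor AS Quotient
    FROM dbo.DivideDemo;
GO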

I executed each of these three procedures twice, first from a session with ‘default’ settings for all options, where both ARITHABORT and ANSI_WARNINGS are ON (set_options = 4347), then from a session with the former ON but the latter OFF (4331), and finally from a session with both OFF (235).
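As a sketch, the test harness for each session looks something like this, with the two SET statements at the top varied for each of the three connections:

-- session settings: ON/ON, then ON/OFF, then OFF/OFF
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;
GO
EXEC dbo.Divide_ArithAbortOn;  -- a divide-by-zero error here is expected
GO
EXEC dbo.Divide_ArithAbortOn;  -- second execution: the plan should be reused
GO
EXEC dbo.Divide_ArithAbortOff;
GO
EXEC dbo.Divide_ArithAbortOff;
GO
EXEC dbo.Divide_NoSettings;
GO
EXEC dbo.Divide_NoSettings;
GO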

We see 9 plans in total, with a new plan being compiled for each procedure, each time it is executed from a connection with a different set_options value. In other words, if the calling batch has execution settings that don’t match those that were in effect when any of the execution plans for the procedures were compiled, a new cached plan is created with the new set options. If we re-execute the same stored procedure with the same set_options value for the connection, the plan is reused.

Calling the first stored procedure, which explicitly sets ARITHABORT to ON, always returns 2 rows, whereas calling the second procedure always returns 3 rows. When calling the procedure with no SET statements, the result simply depends on the settings of the calling connection.

If you alter the settings within a procedure, they last only for the duration of the procedure, so they don’t affect the batch that called the procedure. All 9 plans show the SET option values in use for the connection from which the calling batch executed.

Catching the use of ‘plan-reuse-affecting’ SET statements in procedures and triggers

The performance rule (PE012) in SQL Prompt looks to see if you have used any of the ‘plan-reuse-affecting’ SET statements in stored procedures and triggers (though not batches). You can also run a check using SQL Change Automation to spot this issue within the database build source. SQL Monitor also supports code analysis.

A word of caution, though: this phenomenon applies not just to procedures and triggers but to any ad-hoc batch, batches executed using sp_executesql, prepared queries, and dynamic SQL. The cached plans for any of these cannot be reused so easily if a ‘plan-reuse-affecting’ SET statement is issued and, on earlier versions of SQL Server, they risked a recompile on every use.

We use stored procedures and triggers in preference to batches of dynamic Transact-SQL because they are more easily reused. They are parameterized, and so the SQL text never changes, which promotes reuse. When changing settings within prepared batches, or procedures, the setting options are only used for the execution of that prepared batch or procedure.

Batches can be reused as well, although SQL Server finds it easier to do this if the batch is executed via sp_executesql or Prepare methods, rather than dynamic SQL or Execute methods.
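For example, a parameterized batch submitted via sp_executesql keeps the SQL text constant across executions, so the plan can be matched and reused. This sketch reuses the demo table:

DECLARE @sql NVARCHAR(MAX) = N'
  SELECT Numerator, Divisor
    FROM dbo.DivideDemo
    WHERE Divisor = @Divisor;';

EXEC sys.sp_executesql @sql,
  N'@Divisor int',
  @Divisor = 5;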

Worse, when executing ad-hoc batches, any change in SET options leaks from the batch, so that the connection keeps its new settings. You have to restore the settings explicitly, and even then, an error that aborts the batch before it reaches that point will mean the restoring code never executes. The optimizer may then need to compile new plans, for these new settings, for all subsequent batches and procedures you execute on that connection.
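You can see the leak for yourself with SESSIONPROPERTY(), which reports the session’s current setting. Run these as two separate batches on the same connection:

-- batch 1: an ad-hoc change, perhaps intended to be temporary
SET ARITHABORT OFF;
GO
-- batch 2: the change has stuck for the connection
SELECT SESSIONPROPERTY('ARITHABORT') AS ArithAbortSetting; -- returns 0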

This kind of error is harder to detect, and it reinforces the general advice that these statements should only ever be issued as a preliminary batch, just after the connection is made, with any subsequent change avoided. This means that all such SET statements are suspect in code and should be regarded as ‘SQL code smells’. It is difficult to justify them.

Investigating excessive recompilation

In SQL Server versions where Extended Events are either not available or too rustic, you can use the SQL Server Profiler. Although the SP:Recompile trace event reports only statement-level recompilations of procedures and triggers, SQL:StmtRecompile can also be used to track and debug recompilations, and it can detect recompilations for stored procedures, triggers, ad-hoc batches, batches that are executed by using sp_executesql, prepared queries, and dynamic SQL. The EventSubClass column of SP:Recompile and SQL:StmtRecompile contains an integer code that indicates the reason for the recompilation.

With Extended Events, things become a lot more civilized. We can get a full report about recompilations and what causes them. Here is a simple session for reporting on individual compilations.
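This is a minimal sketch of such a session; the session name is my own, and a ring_buffer target is enough for a quick look at the results:

CREATE EVENT SESSION Recompilations ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
  (ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.username))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION Recompilations ON SERVER STATE = START;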

With this, we can get the details of individual recompilations. I usually add a session event filter on the sqlserver.username field to get recompilations only for a particular user (the name of the test user that I run the test code as). Otherwise you get a lot of noise.

Summary

If you find that your code is peppered with SET statements that involve ‘plan-reuse-affecting’ options, then that is a code smell and you should investigate why.

You may, of course, be doing something cunning and clever, but in my working life with SQL Server development I’ve never found this. It isn’t just in stored procedures or triggers that it is bad practice, but in any batch that is likely to be executed several times. If you need to set language, ANSI options or error-handling compatibility, then do it when the connection is created and create a single standard. If you fail to do this, you’ll cause SQL Server to perform unnecessary recompilations.

When I write that the use of these SET statements is ‘bad’, I don’t wish to imply that recompilation of batches is necessarily bad: sometimes it avoids one of several insidious performance problems, and it is quite rare for it to impact the performance of your application, as long as you don’t indulge in SQL code smells unnecessarily. The virtuous amongst us, for example, always promote code reuse by using sp_ExecuteSQL with parameters when creating a batch that we’d like to reuse or, within applications, by using bound parameters properly. We use table variables where it is prudent.

If you find a good reason for needing to use SET statements within a batch to change the way it executes, in a recent version of SQL Server, then please let me know about it because I’d be genuinely interested.
