{"id":71926,"date":"2017-07-31T16:55:26","date_gmt":"2017-07-31T16:55:26","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=71926"},"modified":"2021-09-29T16:21:10","modified_gmt":"2021-09-29T16:21:10","slug":"basics-good-t-sql-coding-style-part-4-performance","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/t-sql-programming-sql-server\/basics-good-t-sql-coding-style-part-4-performance\/","title":{"rendered":"The Basics of Good T-SQL Coding Style \u2013 Part 4: Performance"},"content":{"rendered":"<h4>The series so far:<\/h4>\n<ol>\n<li><a href=\"https:\/\/www.simple-talk.com\/sql\/t-sql-programming\/basics-good-t-sql-coding-style\/\">The Basics of Good T-SQL Coding Style<\/a><\/li>\n<li><a href=\"https:\/\/www.simple-talk.com\/sql\/t-sql-programming\/basics-good-t-sql-coding-style-part-2-defining-database-objects\/\">The Basics of Good T-SQL Coding Style \u2013 Part 2: Defining Database Objects<\/a><\/li>\n<li><a href=\"https:\/\/www.simple-talk.com\/sql\/t-sql-programming\/basics-good-t-sql-coding-style-part-3-querying-manipulating-data\/\">The Basics of Good T-SQL Coding Style \u2013 Part 3: Querying and Manipulating Data<\/a><\/li>\n<li><a href=\"https:\/\/www.red-gate.com\/simple-talk\/sql\/t-sql-programming\/basics-good-t-sql-coding-style-part-4-performance\/\">The Basics of Good T-SQL Coding Style \u2013 Part 4: Performance<\/a><\/li>\n<\/ol>\n\n<p>When working with SQL Server, performance should be at the forefront, whether you are designing databases, managing indexes, or writing T-SQL queries. Each of these three factors plays a role in delivering applications that will perform over the long haul. 
Although each factor is as important as the next, we focus only on the query component in this article, as part of our continued discussion about the basics of good coding.<\/p>\n<p>Writing queries can be tricky because you need to continuously evaluate your code for performance-related issues, taking into account a wide range of considerations. For example, you should consider how you\u2019re using temporary tables, outer joins, comparison expressions, correlated subqueries, and a variety of other elements.<\/p>\n<p>Throughout the article, I provide a number of examples of T-SQL queries to help our discussion along. The examples are based on the <strong>AdventureWorks2014<\/strong> database, installed on a local instance of SQL Server 2016. I\u2019ve tried to hit as many of the more salient issues as possible, but keep in mind that a discussion focused on performance is much broader than what we can cover here, even though we\u2019re limiting it only to T-SQL queries.<\/p>\n<p>Also be aware that performance-related issues are seldom black-and-white. More often than not, your final approach to a query will be based on your specific circumstances and the data you\u2019re working with. You might have to try different approaches before coming up with the best one for your particular circumstances, both as they exist now and as you expect them to evolve. Perhaps the way to think of this article is as a set of suggested guidelines, without any assumptions of absolute truth.<\/p>\n<h2>The ubiquitous table source<\/h2>\n<p>At the heart of most queries are one or more table sources (tables, views, derived tables or joined tables) that make it possible to get at the data you need in order to produce the necessary results. The tables will come in many forms. 
A query might use temporary tables, common table expressions (CTEs), table variables, table-valued functions, table-valued parameters (TVPs), or joins to provide one or more workable data sets that can be massaged and transformed before returning the final results. In fact, any query that contains a <strong>FROM<\/strong> clause is starting out with a table source.<\/p>\n<p>Given that T-SQL is a set-based language, thinking in terms of table sources makes sense. But working with these structures is not always a straightforward process when it comes to performance. Let\u2019s start with an example of a temporary table. The following T-SQL code creates the <strong>#sales<\/strong> temporary table and joins it with the <strong>Person<\/strong> table in the <strong>AdventureWorks2014<\/strong> database:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TABLE #sales\r\n  (SalesPersonID INT, TotalSales MONEY);\r\n  INSERT INTO #sales\r\n  SELECT SalesPersonID, SUM(SubTotal) AS TotalSales\u00a0\r\n  FROM Sales.SalesOrderHeader\r\n  WHERE SalesPersonID IS NOT NULL\r\n  GROUP BY SalesPersonID;\r\n  SELECT p.FirstName, p.LastName, s.TotalSales\u00a0\r\n  FROM #sales s INNER JOIN Person.Person p\r\n  \u00a0 ON s.SalesPersonID = p.BusinessEntityID\r\n  ORDER BY p.LastName, p.FirstName;<\/pre>\n<p>The example is fairly straightforward. We\u2019re simply aggregating the sales totals and mapping that information to the applicable sales rep. 
The <strong>SELECT<\/strong> statement returns the results shown in the following table:<\/p>\n<table class=\"table--tight\">\n<thead>\n<tr>\n<td><strong>FirstName<\/strong><\/td>\n<td><strong>LastName<\/strong><\/td>\n<td><strong>TotalSales<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Syed<\/td>\n<td>Abbas<\/td>\n<td>172524.4512<\/td>\n<\/tr>\n<tr>\n<td>Amy<\/td>\n<td>Alberts<\/td>\n<td>732759.1841<\/td>\n<\/tr>\n<tr>\n<td>Pamela<\/td>\n<td>Ansman-Wolfe<\/td>\n<td>3325102.5941<\/td>\n<\/tr>\n<tr>\n<td>Michael<\/td>\n<td>Blythe<\/td>\n<td>9293903.0046<\/td>\n<\/tr>\n<tr>\n<td>David<\/td>\n<td>Campbell<\/td>\n<td>3729945.349<\/td>\n<\/tr>\n<tr>\n<td>Jillian<\/td>\n<td>Carson<\/td>\n<td>10065803.5404<\/td>\n<\/tr>\n<tr>\n<td>Shu<\/td>\n<td>Ito<\/td>\n<td>6427005.554<\/td>\n<\/tr>\n<tr>\n<td>Stephen<\/td>\n<td>Jiang<\/td>\n<td>1092123.8561<\/td>\n<\/tr>\n<tr>\n<td>Tete<\/td>\n<td>Mensa-Annan<\/td>\n<td>2312545.69<\/td>\n<\/tr>\n<tr>\n<td>Linda<\/td>\n<td>Mitchell<\/td>\n<td>10367007.4265<\/td>\n<\/tr>\n<tr>\n<td>Jae<\/td>\n<td>Pak<\/td>\n<td>8503338.6457<\/td>\n<\/tr>\n<tr>\n<td>Tsvi<\/td>\n<td>Reiter<\/td>\n<td>7171012.7501<\/td>\n<\/tr>\n<tr>\n<td>Jos\u00e9<\/td>\n<td>Saraiva<\/td>\n<td>5926418.3555<\/td>\n<\/tr>\n<tr>\n<td>Lynn<\/td>\n<td>Tsoflias<\/td>\n<td>1421810.9242<\/td>\n<\/tr>\n<tr>\n<td>Rachel<\/td>\n<td>Valdez<\/td>\n<td>1827066.7118<\/td>\n<\/tr>\n<tr>\n<td>Garrett<\/td>\n<td>Vargas<\/td>\n<td>3609447.2148<\/td>\n<\/tr>\n<tr>\n<td>Ranjit<\/td>\n<td>Varkey Chudukatil<\/td>\n<td>4509888.9311<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The reason I\u2019ve included this example is not to demonstrate how to create or use temporary tables, but to point out the larger issue of choosing carefully which table structure to use in a given situation, taking into account the ramifications on performance, along with other considerations.<\/p>\n<p>There are, in fact, a number of important differences <a 
href=\"https:\/\/www.red-gate.com\/simple-talk\/sql\/t-sql-programming\/temporary-tables-in-sql-server\/\">between temporary tables and table variables<\/a>. The point of all this is that you need to fully understand how each one works before implementing it in your code. Only then can you choose the one best suited for your particular circumstances.<\/p>\n<p>You should also keep these table sources in mind for less common use cases. For example, if you need to create a join on a large table and you plan to filter the data in that table, you might consider creating a temporary table to hold the subset of data and then joining to the temporary table as a way to boost performance.<\/p>\n<p>Thinking in terms of data sets can often help you recast a T-SQL statement to perform better. The database engine works best with sets of data, unlike the engines used for procedural languages. Take, for example, the following <strong>SELECT<\/strong> statement, which includes a correlated subquery in the <strong>WHERE<\/strong> clause:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT SalesPersonID, SUM(SubTotal) AS TotalSales\u00a0\r\n  FROM Sales.SalesOrderHeader h\r\n  WHERE SalesPersonID = (\r\n  \u00a0 SELECT BusinessEntityID\r\n  \u00a0 FROM HumanResources.EmployeePayHistory p\r\n  \u00a0 WHERE Rate &gt; 25 AND p.BusinessEntityID = h.SalesPersonID)\r\n  GROUP BY SalesPersonID;<\/pre>\n<p>The <strong>SELECT<\/strong> statement returns the total sales for each sales rep with a pay rate greater than 25, giving us the results shown in the following table:<\/p>\n<table class=\"table--tight\">\n<thead>\n<tr>\n<td><strong>SalesPersonID<\/strong><\/td>\n<td><strong>TotalSales<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>285<\/td>\n<td>172524.4512<\/td>\n<\/tr>\n<tr>\n<td>287<\/td>\n<td>732759.1841<\/td>\n<\/tr>\n<tr>\n<td>274<\/td>\n<td>1092123.8561<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The statement uses a correlated subquery to retrieve the 
<strong>BusinessEntityID<\/strong> value for each rep with a pay rate that exceeds 25. As with any correlated subquery, this one depends on the outer query for at least one of its values, in this case, <strong>SalesPersonID<\/strong>. Because of this construction, the query in the subquery must run for each row returned by the outer query. For a small table, this is not a big deal, but when we start talking of millions of rows, performance can go down the tubes.<\/p>\n<p>One way to get around this is to join the two tables, rather than using a correlated subquery. This lets us work with the data in sets, rather than having to perform many individual operations. If the source table is large, and you\u2019ll be filtering the data, you can take the approach described above, which is to create a temporary table that contains a subset of data, and then join on that.<\/p>\n<p>Note, however, that we should be evaluating our code not only in terms of how to use certain types of data sources, but also when not to use them. For example, the following <strong>SELECT<\/strong> statement uses a left outer join to retrieve products that are not listed as product assembly components:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT p.ProductID\r\n  FROM Production.Product p\r\n  \u00a0 LEFT JOIN Production.BillOfMaterials b\r\n  \u00a0 ON p.ProductID = b.ProductAssemblyID\r\n  WHERE b.ProductAssemblyID IS NULL\r\n  ORDER BY p.ProductID;<\/pre>\n<p>In this case, the statement returns 266 rows out of 504 total products, which are relatively small amounts. But if we were talking millions of rows, we would quickly see the impact on performance. In such cases, we might consider an approach other than joining the <strong>Product<\/strong> table to the <strong>BillOfMaterials<\/strong> table. 
When we join the tables, the database engine loads all 2,679 rows from the <strong>BillOfMaterials<\/strong> table before filtering out the rows whose <strong>ProductAssemblyID<\/strong> column is not null, which are the majority of rows.<\/p>\n<p>Instead of defining a join to the <strong>BillOfMaterials<\/strong> table, we can add a condition to the <strong>WHERE<\/strong> clause that uses the <strong>NOT<\/strong> <strong>EXISTS<\/strong> operator and a subquery to check values in the <strong>BillOfMaterials<\/strong> table. In this way, we\u2019re simplifying our data set in favor of a more robust <strong>WHERE<\/strong> expression.<\/p>\n<p>In some cases, we might want to move in the opposite direction, turning expression logic into a data set. For example, the following T-SQL code creates a function and uses that function in a query\u2019s select list:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE FUNCTION dbo.fnGetTotalItems (@OrderID INT)\r\n  RETURNS INT\r\n  WITH SCHEMABINDING AS\r\n  BEGIN\r\n  \u00a0 DECLARE @TotalItems INT\r\n  \u00a0 SELECT @TotalItems = SUM(OrderQty)\r\n  \u00a0 FROM Sales.SalesOrderDetail\r\n  \u00a0 WHERE SalesOrderID = @OrderID\r\n  \u00a0 GROUP BY SalesOrderID\r\n  \u00a0 RETURN @TotalItems\r\n  END;\r\n  GO\r\n  SELECT SalesOrderID, OrderDate, SalesPersonID,\u00a0\r\n  \u00a0 dbo.fnGetTotalItems(SalesOrderID) AS TotalItems\r\n  FROM Sales.SalesOrderHeader\r\n  ORDER BY SalesOrderID;<\/pre>\n<p>Whenever we include a scalar function in an expression, we have to keep in mind its impact on performance. As with the correlated subquery, the database engine will execute the query within the function for every row in the outer query.<\/p>\n<p>One solution is to turn the scalar function into an inline table-valued function. 
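A sketch of such a function, reusing the aggregation logic from <strong>fnGetTotalItems<\/strong> (the name <strong>fnGetTotalItems_TVF<\/strong> is used here only for illustration), might look like this:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE FUNCTION dbo.fnGetTotalItems_TVF (@OrderID INT)\r\n  RETURNS TABLE\r\n  WITH SCHEMABINDING AS\r\n  RETURN\r\n  \u00a0 SELECT SUM(OrderQty) AS TotalItems\r\n  \u00a0 FROM Sales.SalesOrderDetail\r\n  \u00a0 WHERE SalesOrderID = @OrderID;<\/pre>\n<p>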
We would then use the function within the <strong>FROM<\/strong> clause, using the <strong>CROSS<\/strong> <strong>APPLY<\/strong> operator to join the <strong>SalesOrderHeader<\/strong> table to the function. This way, we can eliminate a great deal of unnecessary processing and consequently deliver better performance.<\/p>\n<p>When writing T-SQL queries, we should always evaluate the statement\u2019s elements to ensure that we\u2019re using the data sets in a way to maximize performance, whether creating temporary table structures, joining data sets, or mixing things up in some other fashion.<\/p>\n<h2>Building comparison expressions<\/h2>\n<p>Like other elements of T-SQL coding, the ways in which we set up the conditional expressions in our <strong>WHERE<\/strong> clauses can impact performance. For example, suppose we use the <strong>NOT<\/strong> <strong>IN<\/strong> operator in a conditional expression:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT ProductID\r\n  FROM Production.Product\r\n  WHERE ProductID NOT IN (\r\n  \u00a0 SELECT ProductAssemblyID FROM Production.BillOfMaterials\u00a0\r\n  \u00a0 WHERE ProductAssemblyID IS NOT NULL)\r\n  ORDER BY ProductID;<\/pre>\n<p>As with most performance-related issues, using the <strong>NOT<\/strong> <strong>IN<\/strong> operator for a small data set is usually not a problem, but can be so for a larger one. This is because the database engine uses a table scan to find the data, rather than an index seek, even if an index is defined on the filtering column. 
Instead, consider using <strong>NOT<\/strong> <strong>EXISTS<\/strong> or perhaps joining the tables.<\/p>\n<p>You can run into similar issues if you wrap the filtering column in a function, as in the following <strong>SELECT<\/strong> statement:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE FUNCTION dbo.fnGetProductName (@ProductID INT)\r\n  RETURNS NVARCHAR(50)\r\n  WITH SCHEMABINDING AS\r\n  BEGIN\r\n  \u00a0 DECLARE @ProductName NVARCHAR(50)\r\n  \u00a0 SELECT @ProductName = name\r\n  \u00a0 FROM Production.Product\r\n  \u00a0 WHERE ProductID = @ProductID\r\n  \u00a0 RETURN @ProductName\r\n  END;\r\n  GO\r\n  SELECT SalesOrderID, ProductID\r\n  FROM Sales.SalesOrderDetail\r\n  WHERE dbo.fnGetProductName(ProductID) = 'LL Road Seat\/Saddle'\r\n  ORDER BY SalesOrderID;<\/pre>\n<p>In this case, the <strong>fnGetProductName<\/strong> function retrieves the product name, based on the <strong>ProductID<\/strong> value, and then compares that name to the string expression. Because of this construction, the database engine is again likely to do a table scan, rather than an index seek, even if an index exists.<\/p>\n<p>Another performance hole is to create a conditional expression that uses the <strong>LIKE<\/strong> operator with a wildcard at the beginning of the search value:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT ProductID, name AS ProductName\r\n  FROM Production.Product\r\n  WHERE name LIKE '%bike%'\r\n  ORDER BY ProductID;<\/pre>\n<p>The issue is not so much with the use of a wildcard, but rather with its placement at the beginning of the search value. 
In this position, the database engine must again do a complete table scan, reading every row of the <strong>Product<\/strong> table because it has no other way of ensuring that all potential values are returned.<\/p>\n<p>When using the <strong>WHERE<\/strong> clause to filter data, keep in mind its impact on performance and always evaluate whether there is a better way of doing something. Even using a not equal comparison operator (<strong>&lt;&gt;<\/strong>) in a conditional expression can result in performance issues if the database engine cannot use the indexes effectively when generating a query plan. Try using a different construction to see whether that improves performance.<\/p>\n<h2>Returning result sets<\/h2>\n<p>When defining a query, you should ensure that you\u2019re returning results as efficiently as possible. Even if a statement is technically correct and does not cause the database engine to balk, you might still inadvertently introduce performance issues. For example, the following <strong>CREATE<\/strong> <strong>PROCEDURE<\/strong> statement defines a very simple stored procedure that includes all the basic components, but fails to include a <strong>SET<\/strong> <strong>NOCOUNT<\/strong> <strong>ON<\/strong> clause:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE PROCEDURE dbo.GetPeopleInfo\r\n  AS\r\n  BEGIN\r\n  \u00a0 DECLARE @people TABLE (PersonID INT);\r\n  \u00a0 INSERT INTO @people\r\n  \u00a0 SELECT BusinessEntityID FROM Person.Person;\r\n  \u00a0 SELECT PersonID FROM @people;\r\n  END\r\n  GO<\/pre>\n<p>By including a <strong>SET<\/strong> <strong>NOCOUNT<\/strong> <strong>ON<\/strong> clause at the beginning of your procedure, you\u2019re telling the database engine to forego sending information messages back to the client for each executed statement, helping to improve application performance. 
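As a sketch, the revised procedure needs only one additional statement at the start of its body:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE PROCEDURE dbo.GetPeopleInfo\r\n  AS\r\n  BEGIN\r\n  \u00a0 SET NOCOUNT ON;\r\n  \u00a0 DECLARE @people TABLE (PersonID INT);\r\n  \u00a0 INSERT INTO @people\r\n  \u00a0 SELECT BusinessEntityID FROM Person.Person;\r\n  \u00a0 SELECT PersonID FROM @people;\r\n  END\r\n  GO<\/pre>\n<p>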
Although this can seem a small step to take, it will in fact help and should be included in most of your stored procedures, unless you have a specific reason for returning those messages.<\/p>\n<p>Another consideration when it comes to stored procedures is to maximize the benefit you get with each procedure by building them to return multiple result sets, when appropriate. This can reduce the number of connections and calls between the client and server. Just be sure that if you\u2019re working with a data framework, it can handle multiple result sets.<\/p>\n<p>When retrieving data, whether or not it is within a stored procedure, you should also be aware of how you use operators and other options within your statements. A good example of this is the <strong>UNION<\/strong> operator, which lets you join similarly structured data sets, as in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT BusinessEntityID, FirstName, LastName, PersonType\u00a0\r\n  FROM Person.Person WHERE PersonType = 'EM'\r\n  UNION\r\n  SELECT BusinessEntityID, FirstName, LastName, PersonType\u00a0\r\n  FROM Person.Person WHERE PersonType = 'SC';<\/pre>\n<p>When you specify the <strong>UNION<\/strong> operator as I have here, the database engine combines the data sets, sorts the data in ascending order, and removes the duplicates. Not surprisingly, sorting and deduplication operations can add significant overhead, especially for large data sets. In some cases, you might not need to sort the data and remove duplicates. For example, the <strong>SELECT<\/strong> statements above return only unique rows because the <strong>BusinessEntityID<\/strong> column is included, which is the primary key. 
In such cases, we can add <strong>ALL<\/strong> to the <strong>UNION<\/strong> operator (as in <strong>UNION<\/strong> <strong>ALL<\/strong>) to join the data sets without incurring the sorting and deduplication overhead.<\/p>\n<p>An issue related to this is the use of the <strong>DISTINCT<\/strong> keyword at the beginning of the select list, as in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT DISTINCT BusinessEntityID, FirstName, LastName\r\n  FROM Person.Person WHERE PersonType = 'EM';<\/pre>\n<p>The <strong>DISTINCT<\/strong> keyword, as with the <strong>UNION<\/strong> operator, sorts the data and removes duplicates, which is fine if you\u2019re working with only a small amount of data, but it can significantly impact performance on large data sets, depending on how the data has been indexed. In general, try to avoid <strong>DISTINCT<\/strong> for large data sets if you don\u2019t really need it, as is the case in our example (once again, because we\u2019ve included the <strong>BusinessEntityID<\/strong> column in the select list). If we still need to sort the data, we can add an <strong>ORDER<\/strong> <strong>BY<\/strong> clause.<\/p>\n<p>One other issue worth pointing out is the use of the asterisk wildcard in place of a query\u2019s select list, as in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT * FROM Person.Person WHERE PersonType = 'EM';<\/pre>\n<p>As a general rule, you should avoid using the wildcard and instead specify the columns so you\u2019re returning only the data you need. Why incur extra overhead to return unnecessary data? Besides, specifying the columns can also help prevent your applications from breaking, should the table structure change.<\/p>\n<h2>Proceed with caution<\/h2>\n<p>None of the issues we covered in the previous section are absolutes. 
At times, you might want to omit the <strong>SET<\/strong> <strong>NOCOUNT<\/strong> <strong>ON<\/strong> clause, use the <strong>UNION<\/strong> operator without <strong>ALL<\/strong>, or add <strong>DISTINCT<\/strong> to your select list. You might even want to use a wildcard in your select list, at least when running ad hoc statements. The point is, you should proceed with caution whenever using any of these.<\/p>\n<p>There are also other T-SQL elements that warrant similar care, such as the use of dynamic SQL, as in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">DECLARE @sql NVARCHAR(1000);\r\n  DECLARE @id NVARCHAR(10);\r\n  SET @id = '869';\r\n  SET @sql = 'SELECT name FROM Production.Product\r\n  \u00a0 WHERE ProductID = ' + @id + ';';\u00a0\r\n  EXECUTE sp_executesql @sql;<\/pre>\n<p>You\u2019ll often see warnings about using dynamic SQL because of security risks (i.e., SQL injection attacks), but dynamic SQL can have performance implications as well, depending on how statements are constructed and executed. This is a huge topic, however, and well beyond the scope of this article, so be sure you do your homework before implementing dynamic SQL, for both security and performance reasons.<\/p>\n<p>Also be cautious about defining triggers that return results to the application. 
An application that issues a request to modify data normally does not expect results, such as those in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">CREATE TRIGGER InsertTrig ON Production.Culture\r\n  AFTER INSERT\r\n  AS\r\n  BEGIN\r\n  \u00a0 DECLARE @CultureId NCHAR(6)\r\n  \u00a0 SELECT @CultureId = INSERTED.CultureId\r\n  \u00a0 FROM INSERTED\r\n  \u00a0\r\n  \u00a0 INSERT INTO dbo.EventLog(EventDescript)\r\n  \u00a0 VALUES('Table: Production.Culture; CultureID: ' + @CultureId)\r\n  \u00a0 SELECT TOP(1) CultureID, Name\u00a0\r\n  \u00a0 FROM Production.Culture ORDER BY ModifiedDate DESC\r\n  END;\r\n  GO<\/pre>\n<p>The trigger\u2019s main purpose is to log information to the <strong>EventLog<\/strong> table after data is inserted into the <strong>Production.Culture<\/strong> table. However, the trigger also includes a <strong>SELECT<\/strong> statement that returns the <strong>CultureID<\/strong> and <strong>Name<\/strong> values to the calling application. We can demonstrate this by running the following <strong>INSERT<\/strong> statement:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">INSERT INTO Production.Culture (CultureID, Name)\r\n  VALUES('tst', 'test culture');<\/pre>\n<p>After the data has been inserted, the database engine will log the data to the <strong>EventLog<\/strong> table and then return the following information:<\/p>\n<table class=\"table--tight\">\n<thead>\n<tr>\n<td><strong>CultureID<\/strong><\/td>\n<td><strong>Name<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>tst \u00a0<\/td>\n<td>test culture<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Unless the application includes a mechanism for handling these values, the extra resources needed to return this information are wasted. 
This does not mean you must always avoid returning values, just be sure you have a reason for doing so and that the application can handle them.<\/p>\n<p>You should also be cautious when including hints in your queries, such as those in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT h.SalesOrderID, d.SalesOrderDetailID, h.OrderDate\r\n  FROM Sales.SalesOrderHeader h\r\n  \u00a0 INNER HASH JOIN Sales.SalesOrderDetail d\r\n  \u00a0 WITH (TABLOCK)\r\n  \u00a0 ON h.SalesOrderID = d.SalesOrderID\r\n  OPTION (KEEP PLAN);<\/pre>\n<p>In this case, I\u2019ve included the <strong>HASH<\/strong> join hint, <strong>TABLOCK<\/strong> table hint, and <strong>KEEP<\/strong> <strong>PLAN<\/strong> query hint (a lot of hints, I realize, but I wanted to demonstrate them all at once). Regardless of the type, you should use hints only when you\u2019re certain that you can improve on the query optimizer.<\/p>\n<p>The optimizer generally does a good job of maximizing performance, in which case, using hints could make things worse. But in those rare cases when the optimizer is falling short\u2014and you know in your heart with absolute certainty that you can do a better job\u2014then consider the judicious use of hints. 
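A useful baseline when evaluating hints is simply the same query with the hints removed, letting the optimizer choose its own join strategy:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SELECT h.SalesOrderID, d.SalesOrderDetailID, h.OrderDate\r\n  FROM Sales.SalesOrderHeader h\r\n  \u00a0 INNER JOIN Sales.SalesOrderDetail d\r\n  \u00a0 ON h.SalesOrderID = d.SalesOrderID;<\/pre>\n<p>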
Just make sure you fully test whatever you end up implementing, using a large enough data set to provide you with meaningful results.<\/p>\n<p>One final issue worth mentioning is the use of <strong>SET<\/strong> options, such as <strong>FORCEPLAN<\/strong>, which is shown in the following example:<\/p>\n<pre class=\"theme:ssms2012-simple-talk lang:tsql decode:true\">SET FORCEPLAN ON;\r\n  GO\r\n  SELECT h.SalesOrderID, d.SalesOrderDetailID, h.OrderDate\r\n  FROM Sales.SalesOrderHeader h\r\n  \u00a0 INNER JOIN Sales.SalesOrderDetail d\r\n  \u00a0 ON h.SalesOrderID = d.SalesOrderID;\r\n  SET FORCEPLAN OFF;\r\n  GO<\/pre>\n<p>When you set <strong>FORCEPLAN<\/strong> to <strong>ON<\/strong>, the query optimizer processes the tables in a join in the same order they\u2019re specified in the <strong>FROM<\/strong> clause, overriding the optimizer\u2019s own join-order selection. As with hints, you want to be careful taking steps that change the optimizer\u2019s behavior, unless you\u2019re certain you know what you\u2019re doing.<\/p>\n<h2>Plenty more to think about<\/h2>\n<p>In this article, we\u2019ve covered a number of issues related to query performance, as it applies to T-SQL queries. We could have covered any one of these issues in more detail, many of which deserve articles of their own. There are also plenty of other issues we could have covered, such as avoiding the use of cursors, specifying schema names when referencing database objects, or not interweaving data manipulation language (DML) and data definition language (DDL) within a batch. That said, what we have covered should give you a good sense of the types of issues you should be taking into account in order to maximize performance.<\/p>\n<p>Whenever you\u2019re writing or updating T-SQL queries, you need to be thinking in terms of performance, along with security and data accuracy, of course. 
With performance, however, it\u2019s not always apparent which statement construction will deliver the best results, and you might need to try different approaches before you get it right, keeping in mind that indexing and database design also play pivotal roles. The key is to evaluate each statement element to ensure that you\u2019re taking the best approach to maximizing performance, even if it means trying several different approaches.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There are several obvious problems with poor SQL Coding habits. It can make code difficult to maintain, or can confuse your team colleagues.  It can make refactoring a chore or make testing difficult.  The most serious problem is poor performance. You can write SQL that looks beautiful but performs sluggishly, or interferes with other threads.  A busy database developer adopts good habits so as to avoid staring at execution plans.  Rob Sheldon gives some examples.&hellip;<\/p>\n","protected":false},"author":221841,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143531],"tags":[5134,45110],"coauthors":[6779],"class_list":["post-71926","post","type-post","status-publish","format-standard","hentry","category-t-sql-programming-sql-server","tag-sql-prompt","tag-t-sql-formatting"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/71926","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/221841"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=71926"}],"version-history":[{"count":5,"href":"
https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/71926\/revisions"}],"predecessor-version":[{"id":72021,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/71926\/revisions\/72021"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=71926"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=71926"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=71926"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=71926"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}