Updated March 2013
- Measuring Performance
- Tuning Performance
- Suggested Reading
“We’re not building Amazon.com.”
Have you heard this statement, or others like it? It is usually delivered early in a development cycle when someone, probably not the person spewing these words of “wisdom,” suggests that performance should be taken into account when designing and building the database, laying out the new server configuration, or writing a trigger. Don’t listen to them. Performance is as important to a small system as it is to a large one. A tiny database with 10 users is as important to those 10 users as Amazon.com is to the 10 million users it supports. It’s true that worrying about the difference between a 20ms and a 5ms query on the small system may be a waste of time, but there is plenty of work to do before you get down to that sort of minutiae. And with the introduction of Azure SQL Database alongside traditional local SQL Server instances, performance tuning has actually become a method for saving money.
How much performance is enough? Where do you start tuning? When do you stop tuning? Each application being developed will answer these questions in a different way. The important thing is not to establish a single mechanism for answering them. Your goal is to establish best practices and guidelines that will lead to the answers in the right way for the application under consideration.
First, and most important, the SQL Server system itself needs to be configured correctly, and it needs to be running on a correctly configured Windows server. This is the foundation on which the databases will be built. (These same requirements do not exist when dealing with an Azure SQL Database.) After the server is configured, you need to design and build the database to perform optimally. That’s assuming you’re building a new database; if you’re trying to tune an inherited database, then you’ll want to know what a good database looks like. Appropriately designing the database is even more important in Azure. Once the server and database are out of the way, you need to be concerned with the code running against them. This means the views, triggers, functions and, on local instances of SQL Server, the CLR code. It doesn’t stop there, because you also need to be sure that the development staff is accessing the database correctly, both in their general use of the database and its code, and in their own use of ADO.NET or whatever other client library they might be using to access the system.
In order to address all these concerns, you need to understand how to measure and test performance. Once you’ve measured the performance and found it wanting, you’ll need to know what to do about it. After making structural changes to the database, changing the server, or refactoring the T-SQL code, you’ll need a mechanism in place to test the results in order to be sure your work is accurate.
After all this, you should have a correctly functioning system that performs and scales well.
While setting up the server and designing a database are the foundations on which performance is built, understanding how to measure performance allows you to verify the decisions you’ve made are the right ones and provides you with direction on future decisions. The two main areas that you’ll measure are the performance of the server itself, including both Windows and SQL Server, and the performance of databases and their associated code within SQL Server. To well and truly understand what’s happening with your system you’ll combine these two sets of measures. There are some tools provided with SQL Server, and whatever flavor of Windows server you’re running, that perform these measurements. Just remember that if you’re tuning an Azure SQL Database, you only get access to the internals of the database, not to the server itself. An entire industry has grown around monitoring and measuring performance of SQL Server and Windows.
Microsoft provides Performance Monitor counters as a means for keeping an eye on the server. These are accessed through a product called Performance Monitor, commonly referred to as perfmon, from the name of the executable. The counters themselves are grouped into Performance Objects. These vary from the mundane, such as Physical Disk, Server, Memory, Processor and Cache, to the more obscure, like Telephony and WMI Objects, all included as part of a standard server installation. After you install SQL Server, an additional list of counters specific to SQL Server becomes available, with objects that measure how the server is behaving, such as SQL Statistics, Buffer Manager, Cache Manager, SQL Server Memory and more. It can be quite overwhelming when you consider that each object then lists its available counters; the Buffer Manager object alone, for example, contains 13 counters from “Background writer pages/sec” to “Target Pages.” Defining a standard set of counters that captures the core information needed to monitor the system is a must.
Perfmon Counter Set
As an initial list, collecting all the counters for these objects can act as a good baseline for data to make performance tuning decisions:
- Network Segment
- Physical Disk
- SQL Server: Access Methods
- SQL Server: Buffer Manager
- SQL Server: General Statistics
- SQL Server: Locks
- SQL Server: SQL Statistics
Performance Monitor is a graphical interface that is split in two. When you first open the GUI it will be in display mode, probably showing a single metric, “% Processor Time,” running on a one minute loop. Perfmon can be used in one of two ways.
- You can add a series of counters and observe their behavior through the GUI in real time.
- You can use perfmon’s ability to create Data Collector Sets that can be scheduled to start and stop collection with the Windows Scheduler, recording counters for later review and manipulation.
The latter approach is preferred because it allows you to be more systematic. Data Collector Sets are collections of data written to a file. You can run them over a period of time, say during year-end processing if that’s when you’re expecting high load, or over the course of a two hour test (more on that below in Performance Testing). Instead of simply watching the counters scroll by for one minute and then losing the data forever, you’ll have a record that allows you to isolate when and where bottlenecks occurred. You’ll also be able to look at this data over time. A key point to keep in mind is that perfmon has a data collection interval. This means that if you’re experiencing high loads but short transactions, a 30 second sample rate may entirely miss the events you’d most like to record. With this in mind, when not performing a long running system test, a low interval like 10 seconds would not be unreasonable. Remember that the lower the interval, the more data will be collected; keep this in mind when planning for disk space. The collection can contain start and stop times, or define a length of time for which to run. All of these settings are available through the command line, so you can start data collection using third party tools or schedulers.
Supported file formats include binary, comma delimited and tab delimited. You can also store the data directly to SQL Server, but for most performance monitoring situations, storing the data to file rather than to SQL Server works well. You can run Perfmon on a machine other than that which is being monitored, which means that any I/O costs are not incurred by the monitored machine. If you were to use SQL Server, probably to set up some sort of enterprise level of monitoring, you would want to be careful to not use the machine you’re monitoring for storing the data being collected as this will mask the normal performance of the machine behind the transactions necessary to support perfmon.
One other source of performance counter information in SQL Server is the dynamic management view sys.dm_os_performance_counters. These are only a sub-set of the counters available in perfmon specific to your SQL Server instance, but this subset is immediately available inside of queries for whatever monitoring solution you might be trying to put in place.
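As a minimal sketch of the idea, the following query reads one counter directly from that view. The counter chosen is just an example; note that names in this view are stored as fixed-width strings padded with trailing spaces, so trim or use LIKE when filtering:

```sql
-- Read a perfmon counter from inside SQL Server.
SELECT RTRIM(object_name) AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'Memory Grants Pending%';
```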
Evaluating perfmon data
Having collected the data on your server during high production use, or as part of a load test, you need to know how to evaluate these counters. Each set of counters, and each individual counter, tells a different part of the story. A massive amount of drill-down is required to define all of them. Instead, I’ll focus on a very few of the most important counters – ones that can directly indicate the health of your overall system, the Server itself and SQL Server.
Starting with the server itself, you need to worry about:
- Memory
- The most basic memory check is “Pages/Sec”. According to the MS documentation, this counter shows the “Rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays.” A high number here means there is a lot of activity in the memory of your system. What constitutes “a high number” depends on the amount of memory on the machine. My laptop shows spikes of 156 Pages/Sec under a light load, whereas one of my production servers can show a spike of over 1,000 under a normal load. Measuring your system over time will allow you to develop a sense of what constitutes an abnormal load.
- Disk I/O
- The core check for disk drives is the length of time that writes (including updates, inserts and deletes) or reads wait in the queue, measured by the counter “Avg. Disk Queue Length.” Again, a high number is bad. This counter can average all the disks on a system, or you can look at each individual disk. Averaging the disks may be an interesting exercise, but to understand what is happening on your system, you really need to set a counter for each individual disk. Further, if you are on a SAN, this number will only be meaningful if you know how many disks are configured for the drive on your system.
- The Processors
- Finally, you can look at the activity of the processors using the “System: Processor Queue Length” counter. This counter, like the disk one, can either be an average or a precise measure for each processor, and once more, the higher the number, the worse the performance. If the values are consistently low, then while the processors on your system may be used extensively, they’re not being used so much that they’re causing other threads to wait for access to a processor.
SQL Server health
The above counters point to the general health of the server, but what about the SQL Server itself? There are a few counters here that can indicate how well the server is behaving, but keep in mind that these are only broad indicators. Detailed troubleshooting will require more detailed information to support accurate analysis.
- Memory Grants Pending
- SQL Server manages its own memory space within the memory provided by the Windows operating system. The Memory Grants Pending measure indicates when processes within SQL Server are waiting to get access to memory. Like most of these measures, you’ll need to understand how your system behaves by collecting this measure over time and comparing values. In general, a consistently high number is indicative of memory pressure within SQL Server.
- Full Scans/Sec
- Next, as a general measure of health, it’s good to look at the Full Scans/Sec counter in Access Methods. This is basically the number of table or index scans that the system is experiencing. A high number here shows either poorly written stored procedures or bad indexing. Either way, you need to get to work identifying the source of the problem.
- Lock Requests/Sec
- Under Locks, the Lock Requests/Sec counter can show the number of new locks and lock conversions that are taking place on the system. This counter can be general, showing Total Locks, or it can get extremely specific, counting row ID locks (RID), key, file and page locks. While higher numbers may be bad, depending on the circumstances, they may also simply indicate a lot of use on the system. If this number is spiking or growing over time, you will need to pursue more details to ascertain whether or not you have a problem.
- Also under Locks, you may want to add the Deadlocks/Sec counter if you’re experiencing deadlocks. Deadlocks, by their very nature, are indicative of performance problems that require, at the least, procedure tuning and review, and a check on the indexes of the machine. Other steps may be required.
- User Connections
- Not actually an indicator, but a very useful measure in combination with all the other counters, is the User Connections counter under General Statistics. It’s not that this number will necessarily indicate a problem on your server, but it helps, in conjunction with the other counters, to identify where real problems exist. For example, say you have a larger than normal number of locks. Check the number of user connections. If it’s higher than normal, then you’re probably just experiencing a spike in usage, but if it’s average or below average, then you may have a problem and it’s time for more detailed investigation.
- Batch Requests/Sec
- The last general measure is Batch Requests/Sec under SQL Statistics. This is quite simply the number of requests coming in to the server. It is a very general number and may be indicative of nothing more than high use, but it’s a good value to track over time because it can show you how your system use is scaling. It can also be used to identify the peaks and valleys in the number of user requests on the system.
All of the counters outlined above are only the beginning of the metrics you can use to get a general measure of system performance. All the other available counters will enable you to drill down into specifics within SQL Server or the server itself. After determining what the OS and the Server are up to, you will need to look inside at what the queries are doing. This is where Extended Events come into play.
Extended events were introduced in SQL Server 2008, although until SQL Server 2012 they could only be managed through T-SQL commands. Because of the architecture of how extended events are implemented, they place radically less load on the system than the older method of setting up a server-side trace. Extended events are now the best method for gathering performance metrics, along with a wide range of other events, within SQL Server.
In order to capture performance metrics, you’ll usually focus on just a few specific events that capture T-SQL performance. Unlike the old mechanism of trace events, you don’t have to define all the columns to be collected; you just capture the extended event, and the majority of the information you need will be captured with it. Here are the principal events you would want to capture:
rpc_completed: This event fires after the completion of a remote procedure call, or, in other words, when a stored procedure finishes executing.
sql_batch_completed: When a T-SQL batch, other than a stored procedure, completes, you’ll get a result from this event.
Each of these events will return information such as the amount of reads, writes, CPU, duration, start and stop times. They’ll also return the procedure call or SQL batch, so you can see exactly what was passed to SQL Server.
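As a minimal sketch, assuming a session name and file path of our own choosing (both are placeholders, not anything mandated by SQL Server), a session capturing these two events might look like this:

```sql
-- Hypothetical session name and file path; adjust to your environment.
CREATE EVENT SESSION QueryMetrics ON SERVER
    ADD EVENT sqlserver.rpc_completed,
    ADD EVENT sqlserver.sql_batch_completed
    ADD TARGET package0.event_file (SET filename = N'C:\XEvents\QueryMetrics.xel');

ALTER EVENT SESSION QueryMetrics ON SERVER STATE = START;
```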
In addition to the standard event fields, you can add extra fields, called Actions or Global Fields. These can provide additional useful information for the queries you’re collecting. However, they add overhead to the event capture, so you need to be very judicious in their use. Here are a few that might be worth adding to your data collection:
database_name: In the event that you have the same procedure or query running on different databases, it’s worth knowing which database a particular call was run against.
client_app_name: If the application name is supplied through the connection, you can see which applications are making which calls.
query_hash: A hash is created from queries and stored in cache. You can use this value to identify common query patterns within your system. If you have ad hoc queries, it’s a great way to identify multiple tuning opportunities.
nt_username: Provides the name of the Windows login that is currently running the query. Depending on the security settings of your system, this can be useful for figuring out who ran a query.
username: If you use SQL logins, this functions in the same way as nt_username.
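Actions are attached to an individual event with an ACTION clause. A sketch, again using a hypothetical session name and file path:

```sql
-- Hypothetical session; actions are collected per event, at extra cost.
CREATE EVENT SESSION QueryMetricsWithActions ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.database_name,
             sqlserver.client_app_name,
             sqlserver.query_hash,
             sqlserver.username))
    ADD TARGET package0.event_file (SET filename = N'C:\XEvents\QueryMetricsWithActions.xel');
```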
Just remember, while Extended Events are a very inexpensive method for collecting performance data from your system, nothing is ever free. It is possible to add enough events and enough actions to those events to seriously impact your system. But, judicious application of these events will enable you to gather extensive query metrics without negatively affecting the people accessing your data.
You can output the Extended Events to anything from the ring buffers (an in-memory target) to a file or a database. In general, output to a file works best for this type of data collection. Be sure you have enough space to accommodate the data collected. You can then either use the data viewer built into SQL Server 2012 Management Studio to look at the data, or use a system function to pull the data from the disk and put it into tables in order to aggregate and query against it:
The primary information is returned in an XML column, event_data. This means you’ll need to write some XQuery statements to pull out the pertinent information:
```sql
WITH xEvents
AS (SELECT fxftrf.object_name AS xEventName,
        CAST(fxftrf.event_data AS XML) AS xEventData
    FROM sys.fn_xe_file_target_read_file('C:\XEvents\QueryMetrics*.xel',  -- path to your .xel files
        NULL, NULL, NULL) AS fxftrf)
SELECT x.xEventName,
    x.xEventData.value('(/event/data[@name="duration"]/value)[1]', 'bigint') AS Duration,
    x.xEventData.value('(/event/data[@name="physical_reads"]/value)[1]', 'bigint') AS PhysicalReads,
    x.xEventData.value('(/event/data[@name="logical_reads"]/value)[1]', 'bigint') AS LogicalReads,
    x.xEventData.value('(/event/data[@name="cpu_time"]/value)[1]', 'bigint') AS CpuTime,
    CASE WHEN x.xEventName = 'sql_batch_completed'
         THEN x.xEventData.value('(/event/data[@name="batch_text"]/value)[1]', 'nvarchar(max)')
         ELSE x.xEventData.value('(/event/data[@name="statement"]/value)[1]', 'nvarchar(max)')
    END AS SqlText
FROM xEvents AS x;
```
Dynamic Management Objects
Setting up Performance Monitor to observe the behavior of your system along with Extended Events to track the behavior of your queries is the best method for gathering useful performance metrics. However, you have other methods available that can help you with spot checks and with checks not available through the other two methods. This method is querying against the Dynamic Management Objects (DMO). The DMOs were introduced in SQL Server 2005. They consist of views and functions that give you access to information within SQL Server. Earlier the view sys.dm_os_performance_counters was mentioned. There are a number of others that can help you. Here are just a few:
sys.dm_os_wait_stats: This view shows an aggregation of the waits occurring within your SQL Server instance since the last time the server was started or failed over. Using wait statistics is a great way to understand exactly where your system is running slow because it’s waiting on resources. Querying this view will not tell you which queries are waiting, though; for that you’ll have to go to other DMOs.
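A simple sketch of ranking the accumulated waits; the list of benign system wait types excluded here is illustrative, not exhaustive:

```sql
-- Top waits since the last restart, ignoring a few benign system waits.
SELECT TOP 10
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```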
sys.dm_exec_requests: To see exactly what is currently running within your system, you can query this view. It will show you which processes are running, for how long, when they started, any previous waits, any current waits, and whether they are blocked by other processes. You can combine this with a number of other DMOs to retrieve information from the cache, such as the T-SQL text and execution plans.
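For example, a sketch that pulls the text of each currently executing request (the session_id filter is a rough way to exclude most system sessions):

```sql
-- Currently executing requests with their T-SQL text.
SELECT der.session_id,
       der.status,
       der.start_time,
       der.wait_type,
       der.blocking_session_id,
       dest.text AS QueryText
FROM sys.dm_exec_requests AS der
CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS dest
WHERE der.session_id > 50;  -- user sessions, broadly speaking
```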
sys.dm_exec_query_stats: For the queries currently in cache in SQL Server’s memory, this view shows aggregate performance information. You can see how long queries have been running, how many times they’ve been called, when they were first compiled and a number of other interesting statistics.
sys.dm_exec_sql_text: To see the query text behind sys.dm_exec_requests or sys.dm_exec_query_stats, you can call this function. It will return the text of an entire batch, or you can pass it a begin and end offset to get individual statements.
sys.dm_exec_query_plan: To understand how SQL Server is retrieving the data you’re manipulating within your T-SQL statement, you need to look at the execution plan. You can pass the plan_handle from sys.dm_exec_requests or sys.dm_exec_query_stats to this function to get a look at the execution plan in the cache.
As an example of how you can put the DMOs together to retrieve interesting information, this query will get the statement text, the total execution statistics and the query plan for the top 10 queries currently in cache, ordered by total elapsed time so that you can see which queries have been taking up the most time on your system:
```sql
SELECT TOP 10
    SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
        ((CASE WHEN deqs.statement_end_offset = -1
               THEN DATALENGTH(dest.text)
               ELSE deqs.statement_end_offset
          END) - deqs.statement_start_offset) / 2 + 1) AS StatementText,
    deqs.*,
    deqp.query_plan
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
ORDER BY deqs.total_elapsed_time DESC;
```
Third Party Tools
There is an entire industry built around monitoring servers and databases. Most of these tools use the same counters that you can access through Performance Monitor or Extended Events. A few of them find ways to monitor packets or logs or memory and provide a slightly different set of measures. However, most of them do things that you could do for yourself. What they offer is the ability to set up monitoring and alerts and take advantage of all the work that they’ve put into recording the data over time, setting up reports, building enterprise level alerts, etc. If you have more than a handful of servers to monitor, doing the research and finding a third party tool that works in a manner you find acceptable is worth it for the time and effort it will save you.
Performance tuning consists of setting or changing the server configuration, the database indexing or storage or the T-SQL code in order to achieve enhanced performance. After gathering information about what is happening on your machine, you can begin the process of identifying where you are experiencing slow performance. Once you’ve identified the source you then have the information you need to determine what changes you can make to improve performance.
The quickest possible solution here is to get more and faster CPUs running on more and faster memory against more advanced and faster disk storage. There, we’re done. What’s that, you’ve got a big honking machine and it isn’t performing properly, or you’ve got a limited budget so you need to squeeze more from the machine that you have? OK. Listed below are some areas where you can change settings in order to make a difference to how your machine is configured. Most of these are suggestions as each and every set of circumstances is unique. You need to test these suggestions in your environment in order to ensure their applicability to your own set of circumstances.
Rather than let SQL Server manage memory allocation, growing and shrinking it as necessary, simply determine the maximum memory that you can allocate to the server and fix the memory allocation at that point. Variable memory is only helpful if you’re running SQL Server in a mixed environment alongside other applications (which, by the way, will lead to poor performance as the server has to contend with other applications for precious memory, CPU, and disk resources).
You should reserve some amount of memory for the OS, depending on the amount of memory on the system. Smaller systems with 32GB of memory or less can probably make do with between 1GB and 2GB reserved for the OS.
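Fixing the maximum is done through sp_configure. A sketch, assuming a hypothetical 32GB server where roughly 2GB is left for the operating system:

```sql
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Leave ~2GB of a 32GB server for the operating system.
EXEC sys.sp_configure N'max server memory (MB)', 30720;
RECONFIGURE;
```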
For most systems, the Max Degree of Parallelism (MAXDOP) default value of zero (0), which enables all the available processors, should work well. But you may need to adjust the allocation of CPUs in order to reserve some for the operating system and for other processes running on the server.
The default value for the Cost Threshold for Parallelism is 5. This is much too low for the majority of systems out there, especially OLTP systems. Set this value to 35 as a starting point and observe the use of parallel queries and CPU from there to determine if it needs to be adjusted up or down.
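Like max server memory, this is a server-wide option set through sp_configure. A sketch of the suggested starting point:

```sql
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Starting point only; observe parallel query behavior and adjust.
EXEC sys.sp_configure N'cost threshold for parallelism', 35;
RECONFIGURE;
```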
One of the most consistent bottlenecks in SQL Server is the tempdb. This is used when applications create objects such as temporary tables, but it is also used when rebuilding indexes, sorting data, snapshot isolation and a number of other processes. The common recommendation is to create one file in tempdb for each CPU core in your system. This can be a good starting point, but may be overkill in small systems or inadequate in larger systems. Before changing your tempdb file allocation, watch for PAGELATCH wait types in your tempdb. If you are seeing those, you may not have enough files for tempdb and will need to add them. Where possible, isolate the tempdb files to their own drives.
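One way to spot that contention while it is happening is to look for PAGELATCH waits on pages in database_id 2, which is tempdb. A sketch:

```sql
-- Tasks currently waiting on page latches in tempdb (database_id 2).
SELECT owt.session_id,
       owt.wait_type,
       owt.wait_duration_ms,
       owt.resource_description
FROM sys.dm_os_waiting_tasks AS owt
WHERE owt.wait_type LIKE N'PAGELATCH%'
  AND owt.resource_description LIKE N'2:%';
```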
Database performance almost always comes down to I/O. How much data can you get in/out of the system and how quickly? First, and most important, will be your actual database design. It can’t be over-emphasized that a poorly constructed database, no matter how sophisticated the server, will perform badly.
To start with, you need to plan an indexing strategy at the same time as you plan the database. First, and most important, is the clustered index. As a general rule, every table in a SQL Server database should get a clustered index. There are exceptions, but the exceptions should be exceptional. You can only put one clustered index on a given table, so the proper selection of exactly what column or columns to place it on is extremely important. By default, the clustered index gets created on the primary key. This may well be the appropriate place to put a clustered index, but you need to evaluate how the data is most likely to be accessed.
It makes sense to leave the cluster on the primary key if that primary key provides the most commonly used access path, the most common column used to either search the table or relate the table to another.
Changing the parameters slightly, if the access path is mostly through another column, say Company Name, this may make a more appropriate clustered index. Another situation is when the table is no longer at the top of the chain, but somewhere in the middle and the most likely avenue of selection for the data is going to come through the foreign key to a parent table. Then the foreign key column becomes a good candidate for the clustered index. A further wrinkle could be added by needing to get the related data in this child table, but based on a date. This would result in a clustered index composed of two columns: the foreign key and the date. As you can see, in a few sentences a number of options were laid out. You need to think this through as you design the system.
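That last scenario might be sketched like this, using hypothetical table and column names:

```sql
-- Hypothetical child table most often read by parent key plus a date range.
CREATE TABLE dbo.OrderDetail (
    OrderDetailID int IDENTITY NOT NULL,
    OrderID int NOT NULL,       -- foreign key to the parent table
    OrderDate date NOT NULL,
    Quantity int NOT NULL,
    CONSTRAINT PK_OrderDetail PRIMARY KEY NONCLUSTERED (OrderDetailID)
);

-- The clustered index goes on the common access path, not the primary key.
CREATE CLUSTERED INDEX IX_OrderDetail_OrderID_OrderDate
    ON dbo.OrderDetail (OrderID, OrderDate);
```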
You may also identify, during design, testing or monitoring of the production system, that other indexes are needed. While multiple indexes on any given table may be needed, keep in mind that each index adds overhead to the system, because these values have to be maintained as the data gets modified through inserts, updates and deletes. Further, since nonclustered indexes point to the clustered index (one of the many reasons you need a clustered index on the table), changes to the clustered index can cascade through all the other indexes on a table. Because of all this, while indexes are good and necessary and absolutely should be created where needed, restraint must be exercised in their application and use.
Files and FileGroups
Other factors that can affect the performance of your system include the way the files and file groups are laid out. You should be creating multiple files for your databases to optimize performance. A baseline for this would be to create one file for the logs, another for the data (defined by the clustered index), and another for non-clustered indexes. Additional files may be necessary to separate out BLOB data or XML data or unusually active tables, each onto its own file, and where possible, onto its own disk. This has been found to be true even on SAN systems because distributing the load takes further advantage of the architecture of the SAN.
The data stored and the data retrieved should be defined appropriately. This means normalizing the storage. If you simply want a flat file, don’t put it into a relational database system. A NoSQL system will perform much better. You’re paying for overhead you don’t need and sacrificing benefits that you could achieve. Normalization actually acts as a performance enhancement and the query optimizer can use enforced referential constraints to make your queries perform better. That said, some targeted denormalization, picking some fields to duplicate rather than maintaining a perfect third normal form system, can provide some performance benefits. But you must test this to ensure you’re in that situation.
Define the data types that you need, not what you think you might need someday. A phone number is a string, not a number. Define the length of field that you need and enforce that length.
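The phone number example might be sketched like this, with hypothetical table and column names:

```sql
CREATE TABLE dbo.Customer (
    CustomerID int IDENTITY NOT NULL PRIMARY KEY,
    -- A phone number is a string, sized and constrained to what's actually needed.
    PhoneNumber varchar(15) NOT NULL,
    CONSTRAINT CK_Customer_PhoneLength CHECK (LEN(PhoneNumber) >= 7)
);
```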
Simple things can make a difference too.
- Turn auto-shrink off.
- Make sure auto-update of statistics is turned on.
- If a database is read only, set it to read only.
- Use triggers very judiciously. They mostly operate in the background making them difficult to monitor and troubleshoot.
- Be very careful of autogrowth settings on the database. 10% autogrowth will be fine when the database is 500MB; it makes a huge difference when the system is 50GB. For larger databases, change the setting to grow by a fixed amount rather than a percentage of total database size.
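The settings above are applied with ALTER DATABASE. A sketch against a hypothetical database and logical file name:

```sql
-- Hypothetical database and logical data file names.
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
-- Grow by a fixed amount rather than a percentage.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 512MB);
```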
After you’ve configured your server and built a functional database, you’ll need to move data in and out of it. Most of this will be through T-SQL queries. These queries should be defined within stored procedures and will make use of views and user-defined functions to manipulate and access sets of data. The most important part of this concept is the set. Most people think in terms of pieces of data instead of sets of data. This conceptual shift, to manipulating the data in sets instead of row by row, delivers the biggest performance benefits when working with T-SQL. Learning T-SQL syntax and set-based querying methodologies up front, so that procedures are well written from the start, is much easier than attempting to tune or fix hundreds or even thousands of poorly written queries after the fact.
Writing T-SQL queries that perform well isn’t always as easy as it sounds. The target needs to be to work with the T-SQL compiler and optimizer, processes internal to the SQL Server itself, to provide them with queries that they can tune in order to optimally deliver your data. You must then start with the basics and get the simple stuff right at the start.
- Make sure that your queries are written to manipulate only the data you need.
- Ensure that simple things, like qualifying database objects by their owning user or schema, are a regular part of your coding practices.
- Learn and use the latest ANSI style of syntax employed by SQL Server (ANSI 99 for 2005, ANSI 92 for 2000).
- Avoid cursors as much as possible. While there are some good uses for them, they usually become a crutch used instead of learning how to manipulate the data in sets.
- Remember that transactions and transaction processing within the procedure should be kept as small as practicable. For example, if you need to do an insert and read some data, separate the read from the insert.
- Minimize the dependence in your code on constructs such as table variables or temporary tables. Again, these very useful tools frequently substitute a piece-meal approach for one that emphasizes sets of data.
- When writing stored procedures, things as simple as making the data type of parameters match the data type of the columns being referenced can make a big difference.
- Table variables, table parameters, and multi-statement table valued functions do not have statistics, which can make them difficult for the optimizer to create good queries for. Use these objects very judiciously.
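The point about matching parameter data types might be sketched like this; the table and procedure names are hypothetical, and the key detail is that a mismatched nvarchar parameter against a varchar column can force an implicit conversion that prevents an efficient index seek:

```sql
-- Hypothetical procedure; the parameter type matches the column exactly.
CREATE PROCEDURE dbo.GetCustomerByPhone
    @PhoneNumber varchar(15)   -- varchar, not nvarchar, to match the column
AS
SET NOCOUNT ON;
SELECT CustomerID, PhoneNumber
FROM dbo.Customer
WHERE PhoneNumber = @PhoneNumber;
```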
Once you’ve identified a stored procedure or query as being problematic, you’ll want to tune it. This is where a query plan comes into play. SQL Server can display either a graphical plan (estimated and actual) or an XML plan. There are advantages to each type of plan. The graphical plan can be a quick and easy way to peruse the actions a query has put the system through. Especially useful is the ability to display the estimated query plan, which can identify problems without having to actually execute the query. XML query plans present data that can be searched or parsed through code, allowing for some automation of tuning if you so desire. Reading and understanding execution plans is a complex process. I suggest reading my book on that topic.
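Both kinds of plan can be captured directly in T-SQL as well as in the Management Studio GUI. The query here is an invented example; the SET options are the standard ones:

```sql
-- Estimated plan: returns the plan XML without executing the query.
SET SHOWPLAN_XML ON;
GO
SELECT p.ProductID, p.Name
FROM   dbo.Product AS p
WHERE  p.CategoryID = 4;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: executes the query and returns the plan
-- including runtime row counts.
SET STATISTICS XML ON;
GO
SELECT p.ProductID, p.Name
FROM   dbo.Product AS p
WHERE  p.CategoryID = 4;
GO
SET STATISTICS XML OFF;
GO
```

Note that the SHOWPLAN_XML statement must be the only statement in its batch, which is why the `GO` separators matter here.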
All the good work done within SQL Server can be undone by a poorly written piece of client code. More often than not, the DBA can’t write that code for the company. However, you can monitor, mostly through Profiler, what that code is doing to your system and how it is doing it. This is where you can make a difference.
If transactions are managed by code instead of by the database, observe these transactions to ensure that they are as short as possible and that any data access is isolated from other client-side processing.
A classic error is to open a database transaction and then wait for user input. Be sure that the application is using the procedures you provided in the manner in which they were intended. You may have a large query that returns a lot of data being called frequently. Talking to the developers, you may find that they only need one or two columns from this result set. Providing them with a modified or new procedure can save a lot of processing time.
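Both points can be sketched in T-SQL, with invented object names: keep the transaction wrapped around only the data modification, and offer a narrower procedure when callers need only a column or two.

```sql
-- Keep the transaction to just the write; do the read outside it
-- so that locks are held for as short a time as possible.
BEGIN TRANSACTION;
    INSERT INTO dbo.OrderHeader (CustomerID, OrderDate)
    VALUES (42, GETDATE());
COMMIT TRANSACTION;

SELECT oh.OrderID, oh.OrderDate
FROM   dbo.OrderHeader AS oh
WHERE  oh.CustomerID = 42;
GO

-- A narrow procedure for callers that need only two columns,
-- instead of reusing one that returns the whole wide row.
CREATE PROCEDURE dbo.GetCustomerNameAndEmail
    @CustomerID INT
AS
SET NOCOUNT ON;
SELECT Name, Email
FROM   dbo.Customer
WHERE  CustomerID = @CustomerID;
```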
Be sure that the latest version of ADO.NET is in use. Be sure that the connections being made use the settings you expect, since values such as the connection timeout can differ from the database timeout, resulting in odd behavior within your application. Again, moving only the data you need, and only when you need it, should be a working maxim.
Instead of “discovering” problems only when the application hits production, try loading up the database with some data and running tests against it. Database testing is as easy as that, although it is time consuming. First, you need a good understanding of the data: just how many rows of what type of data are you likely to see? Then you need a good understanding of your data model, so that you know that “x” parents have so many children across the various tables of type “y”. This isn’t a technical issue. It’s primarily a series of questions put to your business.
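Generating that volume of data is itself best done with a set-based statement rather than a loop. A sketch, with invented table names and row counts, using a cross join to build a numbers list:

```sql
-- Build 10,000 sequential numbers from a set-based cross join,
-- then use them to generate parent rows. Adjust the volume to
-- match what the business tells you to expect.
WITH Digits AS (
    SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(d)
),
Numbers AS (
    SELECT (a.d * 1000 + b.d * 100 + c.d * 10 + e.d) + 1 AS n
    FROM   Digits AS a
           CROSS JOIN Digits AS b
           CROSS JOIN Digits AS c
           CROSS JOIN Digits AS e
)
INSERT INTO dbo.TestParent (ParentID, ParentName)
SELECT n, 'Parent ' + CAST(n AS VARCHAR(10))
FROM   Numbers;
```

A second pass joining back to `dbo.TestParent` can then generate however many child rows per parent your model calls for.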
Once this is defined, you either have to define your transactions by hand, or put trace events (not extended events, unfortunately) to work capturing a set of transactions from the application. Obviously the second approach is preferable, since the data will be much more accurate.
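The capture can run as a server-side trace, which is lighter than watching it in the Profiler GUI. This is only a partial sketch; a full replay capture needs the complete TSQL_Replay event set, which is easiest to script out from the Profiler template itself, and the file path is an invented example:

```sql
-- Partial server-side trace definition (not the full replay event set).
DECLARE @TraceID INT,
        @MaxFileSize BIGINT = 50,  -- MB per rollover file
        @On BIT = 1;

-- Option 2 = enable file rollover.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\ReplayCapture',
     @MaxFileSize, NULL;

-- One sp_trace_setevent call per event/column pair:
-- event 10 = RPC:Completed, event 12 = SQL:BatchCompleted,
-- column 1 = TextData.
EXEC sp_trace_setevent @TraceID, 10, 1, @On;
EXEC sp_trace_setevent @TraceID, 12, 1, @On;

-- Start the trace.
EXEC sp_trace_setstatus @TraceID, 1;
```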
You can then use the Distributed Replay tool from Microsoft, which is included as part of SQL Server 2012. It allows you to connect multiple machines to run the load against a server. You can control how the replay runs: executing transactions as fast as possible, letting them run in real time, or introducing your own customized wait times. You can then gather metrics using the tools you already have in order to understand how your system is performing.
If nothing else, open up each query and look at the query plan to identify areas that might cause you trouble down the road. While that table scan is hitting only 75 rows, it doesn’t really affect performance, but when it’s hitting 75,000 or 75 million rows, you’ll probably notice. While running through all these tests, use Performance Monitor and Profiler to capture the data. This can then be used over time to track how well your performance tuning is working out.
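You don’t have to open every plan by hand; because plans are exposed as XML, the plan cache can be searched for worrying operators. A sketch using the standard dynamic management objects (the operator searched for is just one example):

```sql
-- Hunt through the plan cache for queries whose plans contain
-- a Table Scan operator.
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  st.text       AS QueryText,
        qp.query_plan AS QueryPlan
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
WHERE   qp.query_plan.exist('//RelOp[@PhysicalOp = "Table Scan"]') = 1;
```

Querying the cache this way is itself somewhat expensive, so run it on a test system or during quiet periods.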
All of this only scratches the surface of what may be required to accurately assess and rectify performance issues in SQL Server. For more information, consult books such as:
- “Inside Microsoft SQL Server 2000” by Kalen Delaney
- “Inside Microsoft SQL Server 2005: The Storage Engine” by Kalen Delaney
- “Inside Microsoft SQL Server 2005: T-SQL Querying” by Itzik Ben-Gan
- “Inside Microsoft SQL Server 2005: T-SQL Programming” by Itzik Ben-Gan
There are also fantastic web sites, in addition to Simple-Talk, like: