{"id":91903,"date":"2021-07-20T16:21:16","date_gmt":"2021-07-20T16:21:16","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=91903"},"modified":"2021-07-20T16:21:16","modified_gmt":"2021-07-20T16:21:16","slug":"collecting-sql-server-performance-monitor-data","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/collecting-sql-server-performance-monitor-data\/","title":{"rendered":"Collecting SQL Server performance monitor data"},"content":{"rendered":"<p>SQL Server performance monitor data is a vast set of diverse metrics that provide a window into many areas of SQL Server performance. To get the most out of this data, it needs to be collected, stored, and maintained over a long period of time. This article explains a method of collecting SQL Server performance monitor data.<\/p>\n<p>Given the choice, it is always better to be proactive and plan ahead rather than be forced to respond to emergencies when resources run low, or worse, run out.<\/p>\n<p>By regularly sampling and retaining this data, long-term trends and capacity planning models can be constructed that allow an organization to make smart decisions about its resources with plenty of time to spare. This article walks through a process that collects performance monitor data, allowing it to be used for these purposes.<\/p>\n<h2>How should performance counter data be stored?<\/h2>\n<p>Since performance counters are not persisted in SQL Server, it falls to a developer or administrator to regularly poll <code>dm_os_performance_counters<\/code>, collect metrics, and store them in a permanent location.<\/p>\n<p>The data can be stored in a similar fashion as it is returned in the view: one row per counter per sample, but I personally find it far easier to work with this data when it is pivoted so that there is a single column per counter per sample. 
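<\/p>\n<p>For reference, the view itself returns one row per counter per instance, which is what makes the pivoted layout a deliberate design choice. A quick query against one of the twelve counters chosen below illustrates the raw shape; note that on a named instance, the <code>object_name<\/code> values carry an instance-specific prefix:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT\r\n\tdm_os_performance_counters.object_name,\r\n\tdm_os_performance_counters.counter_name,\r\n\tdm_os_performance_counters.cntr_value\r\nFROM sys.dm_os_performance_counters\r\nWHERE dm_os_performance_counters.counter_name = 'Page life expectancy';<\/pre>\n<p>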
The pivoted layout creates a smaller data set and one that benefits more from compression and columnstore indexes.<\/p>\n<p>For the remainder of this article, a selection of performance counters will be used to demonstrate their storage, collection, and usage. Assume that the list of counters used in any organization will vary from these 12 (possibly by a lot):<\/p>\n<ul>\n<li><strong>Page life expectancy<\/strong> (Static value. Can be sampled at any point in time)<\/li>\n<li><strong>Page reads<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Page writes<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Lazy writes<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Page lookups<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Target pages<\/strong> (Static value. Can be sampled at any point in time)<\/li>\n<li><strong>Logins<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Logouts<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Temp Tables Created<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Batch Requests<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Compilations<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<li><strong>Recompilations<\/strong> (Cumulative value. Must be diffed between two points in time)<\/li>\n<\/ul>\n<p>Note that the collection method is included in the list above to aid the discussion later in this article.<\/p>\n<p>A table will be created to store this data, containing a row per sample time and a column per counter. 
This table includes a clustered columnstore index as it could get quite large and would benefit greatly from the added compression and rowgroup\/segment elimination offered by it when performing analytics:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">CREATE TABLE dbo.performance_counters\r\n(\tsample_time_utc DATETIME2(3) NOT NULL,\r\n\tpage_life_expectancy_seconds BIGINT NOT NULL,\r\n\tpage_reads_per_second DECIMAL(18,4) NOT NULL,\r\n\tpage_writes_per_second DECIMAL(18,4) NOT NULL,\r\n\tlazy_writes_per_second DECIMAL(18,4) NOT NULL,\r\n\tpage_lookups_per_second DECIMAL(18,4) NOT NULL,\r\n\ttarget_pages BIGINT NOT NULL,\r\n\tlogins_per_second DECIMAL(18,4) NOT NULL,\r\n\tlogouts_per_second DECIMAL(18,4) NOT NULL,\r\n\ttemp_tables_created_per_second DECIMAL(18,4) NOT NULL,\r\n\tbatch_requests_per_second DECIMAL(18,4) NOT NULL,\r\n\tcompilations_per_second DECIMAL(18,4) NOT NULL,\r\n\trecompilations_per_second DECIMAL(18,4) NOT NULL);\r\nCREATE CLUSTERED COLUMNSTORE INDEX CCI_performance_counters \r\nON dbo.performance_counters;<\/pre>\n<p>Note that while the counter values stored in <code>dm_os_performance_counters<\/code> are <code>BIGINTs<\/code>, <code>DECIMAL(18,4)<\/code> is used for many counters to support the math needed to calculate per-second numbers, which provides more value than raw numbers would. Units are included in all counters to make it easier to understand what they mean and how to use them.<\/p>\n<p>If metrics are to be collected and aggregated across many servers, then adding a database server column would be an easy way to facilitate that. An additional non-clustered index on that column (plus the sample time) would be valuable if there were a need to perform calculations based on server rather than time.<\/p>\n<h2>Collecting performance counter data<\/h2>\n<p>With a table created to accept data from <code>dm_os_performance_counters<\/code>, a process can be built that regularly collects the data and stores it here. 
For the sake of brevity, this section will cover the key components of data collection only. The full <a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/Performance-Counter-Data-Collection.zip\">script<\/a> can be downloaded, reviewed, and tested at the reader&#8217;s leisure.<\/p>\n<p>One key concern in this data collection is differentiating between the collection of static and cumulative counters. Most of the counters chosen for this demo are cumulative, and thus the only way to measure them is to take the current sample, compare it to the last sample, and perform the necessary math to determine the metric over that time span. There are a handful of ways to manage this, but for simplicity and reliability, a new table will be created:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">\tCREATE TABLE dbo.performance_counters_staging\r\n\t(\tsample_time_utc DATETIME2(3) NOT NULL,\r\n\t\tpage_reads_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tpage_writes_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tlazy_writes_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tpage_lookups_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tlogins_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tlogouts_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\ttemp_tables_created_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tbatch_requests_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tcompilations_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\trecompilations_cumulative DECIMAL(18,4) NOT NULL,\r\nCONSTRAINT PK_performance_counters_staging PRIMARY KEY CLUSTERED \r\n(sample_time_utc));<\/pre>\n<p>Little history needs to be saved to this table; therefore, it will not grow large and a standard clustered primary key is adequate. 
Realistically, only one previous sample is required to calculate what is needed for our process, but to play it safe and keep retention straightforward, this example will retain a single day of data.<\/p>\n<p>To structure this project, the steps needed to collect this data are as follows:<\/p>\n<ol>\n<li>Collect a snapshot of all counters of interest into a temporary table.<\/li>\n<li>Determine the last sample time for the cumulative counters.<\/li>\n<li>Join the new counter data to the previous cumulative counter data.<\/li>\n<li>Insert the results into <code>dbo.performance_counters<\/code>.<\/li>\n<li>Insert the raw cumulative data into <code>dbo.performance_counters_staging<\/code>.<\/li>\n<li>Remove any staging data older than a day.<\/li>\n<\/ol>\n<p>Note that there is no retention on <code>dbo.performance_counters<\/code>. This is intentional, as the table is built to store performance counter data long-term. Retention may be added at the end of the process to limit storage to a reasonable time frame if needed. It\u2019s advisable to keep at least 1-2 years of data if it is to be used for capacity planning or other long-term trending. Given the table\u2019s small size, retaining it forever would not be expensive.<\/p>\n<p>The process can run as often as is prudent, and that frequency would be determined by an organization and the environment being monitored. 
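<\/p>\n<p>Once samples begin to accumulate, the actual cadence can be sanity-checked from the data itself. The following query is illustrative and uses <code>LAG<\/code> to show the gap, in seconds, between consecutive samples:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT\r\n\tsample_time_utc,\r\n\tDATEDIFF(SECOND,\r\n\t\tLAG(sample_time_utc) OVER (ORDER BY sample_time_utc),\r\n\t\tsample_time_utc) AS seconds_since_previous_sample\r\nFROM dbo.performance_counters\r\nORDER BY sample_time_utc;<\/pre>\n<p>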
For an average database server, I\u2019d consider starting at samples every minute and adjust more\/less often as needed.<\/p>\n<h3>Collect a Snapshot<\/h3>\n<p>To reduce the performance impact of querying system views, there is a benefit in collecting counter data in a single query and putting it into a temporary table:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">DECLARE @sample_time_utc DATETIME2(3) = GETUTCDATE();\r\n\tCREATE TABLE #performance_counters\r\n\t(\tsample_time_utc DATETIME2(3) NOT NULL,\r\n\t\tpage_life_expectancy_seconds BIGINT NOT NULL,\r\n\t\tpage_reads_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tpage_writes_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tlazy_writes_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tpage_lookups_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\ttarget_pages BIGINT NOT NULL,\r\n\t\tlogins_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tlogouts_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\ttemp_tables_created_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tbatch_requests_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\tcompilations_cumulative DECIMAL(18,4) NOT NULL,\r\n\t\trecompilations_cumulative DECIMAL(18,4) NOT NULL);\r\n\tINSERT INTO #performance_counters\r\n\t\t(sample_time_utc, page_life_expectancy_seconds, \r\n                 page_reads_cumulative, page_writes_cumulative,\r\n\t     lazy_writes_cumulative, page_lookups_cumulative, \r\n             target_pages, logins_cumulative,\r\n\t\t logouts_cumulative, temp_tables_created_cumulative, \r\n                 batch_requests_cumulative, compilations_cumulative,\r\n\t\t recompilations_cumulative)\r\n\tSELECT\r\n\t\t@sample_time_utc,\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Page life expectancy' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS page_life_expectancy_seconds,\t\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Page reads\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS page_reads_cumulative,\t\r\n\t\tSUM(CASE WHEN 
dm_os_performance_counters.counter_name = 'Page writes\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS page_writes_cumulative,\t\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Lazy writes\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS lazy_writes_cumulative,\t\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Page lookups\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS page_lookups_cumulative,\t\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Target pages' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS target_pages,\t\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Logins\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS logins_cumulative,\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Logouts\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS logouts_cumulative,\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Temp Tables Creation Rate' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS temp_tables_created_cumulative,\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'Batch Requests\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS batch_requests_cumulative,\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'SQL Compilations\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS compilations_cumulative,\r\n\t\tSUM(CASE WHEN dm_os_performance_counters.counter_name = 'SQL Re-Compilations\/sec' THEN dm_os_performance_counters.cntr_value ELSE 0 END) AS recompilations_cumulative\r\n\tFROM sys.dm_os_performance_counters\r\n\tWHERE (dm_os_performance_counters.object_name LIKE '%Buffer Manager%' AND dm_os_performance_counters.counter_name IN ('Page life expectancy', 'Page reads\/sec', 'Page writes\/sec', 'Lazy writes\/sec', 'Page lookups\/sec', 'Target pages'))\r\n\tOR (dm_os_performance_counters.object_name LIKE '%General Statistics%' 
AND dm_os_performance_counters.counter_name IN ('Logins\/sec', 'Logouts\/sec', 'Temp Tables Creation Rate'))\r\n\tOR (dm_os_performance_counters.object_name LIKE '%SQL Statistics%' AND dm_os_performance_counters.counter_name IN ('Batch Requests\/sec', 'SQL Compilations\/sec', 'SQL Re-Compilations\/sec'))<\/pre>\n<p>The syntax looks a bit messy but performs as well as a <code>PIVOT<\/code> or some other method would. The goal is to hit <code>dm_os_performance_counters<\/code> a single time and not need to revisit it for different subsets of counters. The T-SQL above will generate a single row of data with all of the requested metrics placed into the temporary table.<\/p>\n<h3>Determine the Last Sample Time for Cumulative Counters<\/h3>\n<p>For the cumulative counters, it is necessary to compare the values just collected against the last sample taken. If no previous sample has been taken, then this sample time will be <code>NULL<\/code> and results for these counters will not be collected until the next time this process runs.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk \">\tDECLARE @last_sample_time datetime2(3);\r\n\tSELECT\r\n\t\t@last_sample_time = \r\n                 MAX(performance_counters_staging.sample_time_utc)\r\n\tFROM dbo.performance_counters_staging;\r\n\t-- Do not create performance data for the first run.  \r\n        --Instead, only seed the cumulative counters for the next run.\r\n\tIF @last_sample_time IS NOT NULL\r\n\tBEGIN\r\n\t\tDELETE performance_counters_staging\r\n\t\tFROM dbo.performance_counters_staging\r\n\t\tWHERE performance_counters_staging.sample_time_utc \r\n                 &lt; DATEADD(DAY, -1, @last_sample_time);\r\n\t\tDECLARE @seconds_since_last_execution INT;\r\n\t\tSELECT @seconds_since_last_execution = \r\n                DATEDIFF(SECOND, @last_sample_time, @sample_time_utc);<\/pre>\n<p>This is straightforward and checks the staging table for the last sample time and stores it in a variable for use soon. 
The remainder of this process will be nested within an IF statement, ensuring that the first run of this code will not generate any performance counter data. Instead, staging data will be populated and a full data set generated on the next run.<\/p>\n<p>This includes the deletion of any staging data more than a day older than the last sample time, which ensures this table does not get overly large. An administrator may wish to keep the raw numbers for telemetry purposes for a longer period of time, and if so, this retention can be adjusted accordingly here.<\/p>\n<h3>Calculate Metrics and Insert into Performance Counter Table<\/h3>\n<p>For the two point-in-time metrics, the work is simple, but for the cumulative counters, some logic needs to be added to compare with the previous value and also calculate the per-second metric.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk \">\t\tINSERT INTO dbo.performance_counters\r\n\t\t\t(sample_time_utc, page_life_expectancy_seconds, \r\n                         page_reads_per_second, page_writes_per_second, \r\n                         lazy_writes_per_second,\r\n\t\t\t page_lookups_per_second, target_pages, \r\n                         logins_per_second, logouts_per_second, \r\n                         temp_tables_created_per_second,\r\n\t\t\t batch_requests_per_second, \r\n                         compilations_per_second, \r\n                         recompilations_per_second)\r\n\t\tSELECT\r\n\t\t\tperformance_counters.sample_time_utc,\r\n\t\t\tperformance_counters.page_life_expectancy_seconds,\r\n\t\t\tCASE WHEN performance_counters.page_reads_cumulative &gt;= performance_counters_staging.page_reads_cumulative\r\n\t\t\t\tTHEN performance_counters.page_reads_cumulative - performance_counters_staging.page_reads_cumulative\r\n\t\t\t\tELSE performance_counters.page_reads_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS page_reads_per_second,\r\n\t\t\tCASE WHEN performance_counters.page_writes_cumulative 
&gt;= performance_counters_staging.page_writes_cumulative\r\n\t\t\t\tTHEN performance_counters.page_writes_cumulative - performance_counters_staging.page_writes_cumulative\r\n\t\t\t\tELSE performance_counters.page_writes_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS page_writes_per_second,\r\n\t\t\tCASE WHEN performance_counters.lazy_writes_cumulative &gt;= performance_counters_staging.lazy_writes_cumulative\r\n\t\t\t\tTHEN performance_counters.lazy_writes_cumulative - performance_counters_staging.lazy_writes_cumulative\r\n\t\t\t\tELSE performance_counters.lazy_writes_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS lazy_writes_per_second,\r\n\t\t\tCASE WHEN performance_counters.page_lookups_cumulative &gt;= performance_counters_staging.page_lookups_cumulative\r\n\t\t\t\tTHEN performance_counters.page_lookups_cumulative - performance_counters_staging.page_lookups_cumulative\r\n\t\t\t\tELSE performance_counters.page_lookups_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS page_lookups_per_second,\r\n\t\t\tperformance_counters.target_pages,\r\n\t\t\tCASE WHEN performance_counters.logins_cumulative &gt;= performance_counters_staging.logins_cumulative\r\n\t\t\t\tTHEN performance_counters.logins_cumulative - performance_counters_staging.logins_cumulative\r\n\t\t\t\tELSE performance_counters.logins_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS logins_per_second,\r\n\t\t\tCASE WHEN performance_counters.logouts_cumulative &gt;= performance_counters_staging.logouts_cumulative\r\n\t\t\t\tTHEN performance_counters.logouts_cumulative - performance_counters_staging.logouts_cumulative\r\n\t\t\t\tELSE performance_counters.logouts_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS logouts_per_second,\r\n\t\t\tCASE WHEN performance_counters.temp_tables_created_cumulative &gt;= performance_counters_staging.temp_tables_created_cumulative\r\n\t\t\t\tTHEN performance_counters.temp_tables_created_cumulative - 
performance_counters_staging.temp_tables_created_cumulative\r\n\t\t\t\tELSE performance_counters.temp_tables_created_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS temp_tables_created_per_second,\r\n\t\t\tCASE WHEN performance_counters.batch_requests_cumulative &gt;= performance_counters_staging.batch_requests_cumulative\r\n\t\t\t\tTHEN performance_counters.batch_requests_cumulative - performance_counters_staging.batch_requests_cumulative\r\n\t\t\t\tELSE performance_counters.batch_requests_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS batch_requests_per_second,\r\n\t\t\tCASE WHEN performance_counters.compilations_cumulative &gt;= performance_counters_staging.compilations_cumulative\r\n\t\t\t\tTHEN performance_counters.compilations_cumulative - performance_counters_staging.compilations_cumulative\r\n\t\t\t\tELSE performance_counters.compilations_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS compilations_per_second,\r\n\t\t\tCASE WHEN performance_counters.recompilations_cumulative &gt;= performance_counters_staging.recompilations_cumulative\r\n\t\t\t\tTHEN performance_counters.recompilations_cumulative - performance_counters_staging.recompilations_cumulative\r\n\t\t\t\tELSE performance_counters.recompilations_cumulative\r\n\t\t\tEND \/ @seconds_since_last_execution AS recompilations_per_second\r\n\t\tFROM #performance_counters performance_counters\r\n\t\tINNER JOIN dbo.performance_counters_staging\r\n\t\tON performance_counters_staging.sample_time_utc = @last_sample_time;\r\nEND<\/pre>\n<p>Since a service restart will reset cumulative counters, it is necessary to check if the current value is larger than the previous value. If so, then subtract to get the difference. 
If the current value is less, then simply use that value as the counter value.<\/p>\n<p>The counter rate per second is calculated by dividing the amount that the counter incremented by since the last sample by the number of seconds since the last sample.<\/p>\n<h3>Populate Raw Cumulative Counter Data<\/h3>\n<p>This final <code>INSERT<\/code> ensures that a snapshot of cumulative counters is retained for posterity.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">\tINSERT INTO dbo.performance_counters_staging\r\n\t\t(sample_time_utc, page_reads_cumulative, \r\n                 page_writes_cumulative, lazy_writes_cumulative, \r\n                 page_lookups_cumulative,\r\n\t\t logins_cumulative, logouts_cumulative, \r\n                 temp_tables_created_cumulative, \r\n                 batch_requests_cumulative,\r\n\t\t compilations_cumulative, recompilations_cumulative)\r\n\tSELECT\r\n\t\tsample_time_utc,\r\n\t\tpage_reads_cumulative,\r\n\t\tpage_writes_cumulative,\r\n\t\tlazy_writes_cumulative,\r\n\t    page_lookups_cumulative,\r\n\t\tlogins_cumulative,\r\n\t\tlogouts_cumulative,\r\n\t\ttemp_tables_created_cumulative,\r\n\t    batch_requests_cumulative,\r\n\t\tcompilations_cumulative,\r\n\t\trecompilations_cumulative\r\nFROM #performance_counters;<\/pre>\n<h3>Putting it All Together<\/h3>\n<p>When the entire process is pieced together and executed for the first time, it will place data into the staging table only:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-91904\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image.jpeg\" alt=\"collecting sql server performance data into a staging table\" width=\"1029\" height=\"64\" \/><\/p>\n<p>These represent raw, cumulative counters for ten of the twelve metrics we are going to track.<\/p>\n<p>When the same process is executed again, the staging table gets a second row, and the primary counter table is populated with its first complete 
row:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-91905\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/word-image-1.jpeg\" alt=\"The results of two collections of SQL Server performance data\" width=\"965\" height=\"143\" \/><\/p>\n<p>With this and each subsequent run, a row will be placed in <code>dbo.performance_counters<\/code> (the bottom result set) that contains both the point-in-time and cumulative counters. The calculation that converts raw\/cumulative numbers into rates can be observed with the following example from the data above:<\/p>\n<p><strong>17:25:26.783: Page Lookup Count (Cumulative):<\/strong> 70515235<\/p>\n<p><strong>17:28:05.320: Page Lookup Count (Cumulative):<\/strong> 70521221<\/p>\n<p><strong>Difference<\/strong>: 5986 Page Lookups<\/p>\n<p><strong>Seconds Between Samples<\/strong>: 159<\/p>\n<p><strong>Page Lookups Per Second<\/strong> = 5986 \/ 159 = 37.65<\/p>\n<p>The final step to collecting this data is to schedule a job or task that executes this code regularly, adding a row to <code>dbo.performance_counters<\/code> on each run.<\/p>\n<p>The <a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2021\/07\/Performance-Counter-Data-Collection.zip\">script<\/a> included in this article contains the full stored procedure code for <code>dbo.populate_performance_counters<\/code>, which takes all of the code above and packages it into a single stored procedure for easy use by a developer or administrator.<\/p>\n<p>This process is meant for customization and is of most value when tailored to the environment that is to use it. 
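<\/p>\n<p>One straightforward way to schedule it is a SQL Server Agent job that calls the stored procedure on a fixed interval. This sketch assumes the procedure resides in a hypothetical DBAdmin database and samples every minute; the job name, schedule name, and database are illustrative and should be adjusted to the environment:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">USE msdb;\r\nGO\r\nEXEC dbo.sp_add_job\r\n\t@job_name = N'Collect Performance Counters';\r\nEXEC dbo.sp_add_jobstep\r\n\t@job_name = N'Collect Performance Counters',\r\n\t@step_name = N'Run collection procedure',\r\n\t@subsystem = N'TSQL',\r\n\t@database_name = N'DBAdmin', -- Illustrative database name\r\n\t@command = N'EXEC dbo.populate_performance_counters;';\r\nEXEC dbo.sp_add_jobschedule\r\n\t@job_name = N'Collect Performance Counters',\r\n\t@name = N'Every minute',\r\n\t@freq_type = 4, -- Daily\r\n\t@freq_interval = 1,\r\n\t@freq_subday_type = 4, -- Units of minutes\r\n\t@freq_subday_interval = 1; -- Every 1 minute\r\nEXEC dbo.sp_add_jobserver\r\n\t@job_name = N'Collect Performance Counters';<\/pre>\n<p>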
If multiple servers are to be monitored, then run the stored procedure presented here on each server and then (if needed) have a centralized process that collects this data regularly and stores it for central analysis.<\/p>\n<p>The only alteration to the table structure for the centralized table would be the addition of a column to store the source database server name and possibly an index to support searches on that column.<\/p>\n<h2>Customization<\/h2>\n<p>Since <code>dm_os_performance_counters<\/code> contains a wide variety of performance counters, it is natural that only a small subset will be useful to any one person, and that specific subset will vary from organization to organization. For the purpose of this article, a selection of what I considered to be some of the more useful performance counters was chosen.<\/p>\n<p>Adding or removing counters can follow the patterns illustrated in this code and should not be difficult. To add a counter, follow a process similar to this:<\/p>\n<ol>\n<li>Add a column to <code>dbo.performance_counters<\/code> for the new counter.<\/li>\n<li>If the counter is cumulative, then also add it to <code>dbo.performance_counters_staging<\/code>.<\/li>\n<\/ol>\n<p>Within the stored procedure:<\/p>\n<ol start=\"3\">\n<li>Add the counter to <code>#performance_counters<\/code>.<\/li>\n<li>Add the counter to the <code>INSERT\/SELECT<\/code> into <code>#performance_counters<\/code>.<\/li>\n<li>Add the counter to the <code>INSERT<\/code> into <code>dbo.performance_counters<\/code>.<\/li>\n<li>If the counter is cumulative, then also add it to the <code>INSERT<\/code> into <code>dbo.performance_counters_staging<\/code>.<\/li>\n<\/ol>\n<p>To remove a counter, follow the same steps, removing the counter from each of these locations instead of adding it.<\/p>\n<p>While no database-scoped counters were included in the examples above, they can be added via similar processes. 
If the number of databases to be tracked is small and finite, then adding them to the tables already presented here makes sense. If the number of databases is large or variable, then creating a separate table for them is a more scalable solution.<\/p>\n<p>In the latter scenario, the database name would be added as a column in a new table and a row would be inserted per sample time per database. A separate table for database-scoped counters is the easiest way to avoid confusion between server and database metrics and the need to differentiate between them whenever querying a single table.<\/p>\n<h2>Monitoring vs. capacity planning and analytics<\/h2>\n<p>Before wrapping up this discussion of performance monitor data, it is important to differentiate between two functional uses of these metrics:<\/p>\n<ul>\n<li>Monitoring and Alerting<\/li>\n<li>Analytics and Capacity Planning<\/li>\n<\/ul>\n<p>The code in this article captures performance monitor metrics for long-term storage and is ideally used for capacity planning and analytics. It can help identify long-term trends and allow developers and administrators to plan ahead, avoiding resource crunches and emergencies.<\/p>\n<p>For example, a long-term downward trend of page life expectancy indicates an impending memory shortage. This may be caused by code, increased usage, more data, or something else. Seeing the long-term trend and identifying this challenge with months to spare would allow for additional memory to be purchased and allocated. If a release were to blame, it would provide the time needed to research and resolve those code changes before they become a systemic problem. If the cause were unclear, time would be available to research and find it before it is too late.<\/p>\n<p>While this data can be used for monitoring, it is not in and of itself a monitoring tool. 
A reliable SQL Server monitoring tool is an invaluable resource that can:<\/p>\n<ul>\n<li>Alert in the event of immediate problems.<\/li>\n<li>Help identify code or resource problems.<\/li>\n<li>Let you know when resources run low at certain times of the day.<\/li>\n<li>Correlate an app problem to the code or configuration causing it.<\/li>\n<\/ul>\n<p>Always consider the tools used for monitoring database servers. If you are in need of a tool for the job, consider <a href=\"https:\/\/www.red-gate.com\/products\/dba\/sql-monitor\/\">Redgate\u2019s SQL Monitor<\/a>. The time (and sleep!) saved by having a reliable monitoring tool is significant; therefore, it is worth the effort needed to test and implement one.<\/p>\n<h2>Collecting SQL Server performance monitor data<\/h2>\n<p>Performance counter data is exceptionally useful, but its value cannot be fully realized until it is reliably collected and stored for future use. The more data that is retained, the more value it has. Trending I\/O for a week may be useful, but being able to evaluate it over a full year (or longer) will provide far more perspective and the ability to make better decisions that can stretch far into the future.<\/p>\n<p>The value in a process like this is its versatility, and by customizing the data and usage, a developer or administrator can learn a great deal about their database servers and the underlying databases. No single set of metrics applies to everyone, and by tailoring a data set to an organization\u2019s specific needs, an ideal set of data can be maintained that allows for database servers to be effectively analyzed and planned for over time. 
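<\/p>\n<p>As a simple sketch of that kind of analysis, daily averages can be computed from the collected history to expose long-term trends (the columns referenced are from the table defined earlier in this article):<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">SELECT\r\n\tCAST(sample_time_utc AS DATE) AS sample_date,\r\n\tAVG(page_life_expectancy_seconds) AS avg_page_life_expectancy_seconds,\r\n\tAVG(batch_requests_per_second) AS avg_batch_requests_per_second\r\nFROM dbo.performance_counters\r\nGROUP BY CAST(sample_time_utc AS DATE)\r\nORDER BY sample_date;<\/pre>\n<p>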
As conditions change, metrics can be added or removed, ensuring that even as data evolves, the ability to understand it is not compromised.<\/p>\n<p><em>If you like this article, you might also like\u00a0<\/em><a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/performance-sql-server\/sql-server-performance-monitor-data-introduction-and-usage\/\"><em>SQL Server performance monitor data: Introduction and usage<\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>SQL Server provides information for troubleshooting performance. Ed Pollack demonstrates collecting SQL Server performance monitor data.&hellip;<\/p>\n","protected":false},"author":329827,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[53,143529],"tags":[145423,5842,145422],"coauthors":[101655],"class_list":["post-91903","post","type-post","status-publish","format-standard","hentry","category-featured","category-performance-sql-server","tag-dm_os_performance_counters","tag-sql-monitor","tag-sql-server-performance-data"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/91903","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/329827"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=91903"}],"version-history":[{"count":6,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/91903\/revisions"}],"predecessor-version":[{"id":91907,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/91903\/revisions\/91907"}],"wp:attachment":[{"href":"https:\/\/w
ww.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=91903"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=91903"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=91903"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=91903"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}