Doing a SQL Server Healthcheck via PowerShell

PowerShell is an ideal tool for doing health checks of a collection of SQL Server instances, and there are several examples around, but few acknowledge the fact that individual DBAs have their own priorities for tests and need something easily changed to suit their circumstances. Omid's health check allows tests to be written in SQL or PowerShell, and changing them requires only adding, altering or deleting files in directories.


This article describes a way of checking the health of any number of SQL Server instances and servers. It quickly collates server performance statistics and highlights potential problems such as missing indexes, untrusted foreign keys and database options that aren’t set to the default. Whether you are an accidental or deliberate DBA, this tool will either save you paying a consultant to run a health check for you or it will save you writing and executing all of the necessary scripts and formatting the report. This tool is open-source, on GitHub and is free for use. It is rather different from SQLBlitz in that it is PowerShell-driven and does not attempt to be prescriptive. It is also designed to be easy for the user to customize.

To get started, all you need to do is to:

  1. Download and extract to the desired location (the download link is at the bottom of the article).
  2. Open DBList.txt and replace server and database names as required. Alternatively, execute the PowerShell script passing the name of the server and the database as parameters and skip the third step.
     For example: ./HealthCheck.ps1 Server1\InstanceA DB1
  3. Execute HealthCheck.bat.

The report can be found in the \HealthCheck\Reports directory.

The Report

The report contains one worksheet for each of the T-SQL scripts. I have included what I think is most important when evaluating a SQL Server from a performance perspective.


Shows the hit ratios and sizes for each plan cache store. The closer the hit ratios are to 100%, the better; lower values could indicate memory pressure or plan cache bloat.
Lists tables that have more than one column defined as varchar(255). These tables should be evaluated to see whether other data types would be more appropriate.
Calculates signal waits and resource waits as a percentage of the overall wait time. As a guideline, signal waits should usually be below 20%. If you see higher values, try tuning the queries highlighted by the MissingIndexesProcsByCPU report.
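A check along these lines can be sketched directly from the sys.dm_os_wait_stats DMV. This is a minimal sketch, not the report's actual script:

```sql
-- Sketch: signal vs. resource waits as a percentage of total wait time.
-- Signal wait = time a task spent waiting for CPU after its resource became available.
SELECT
    CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
         AS DECIMAL(5, 2)) AS signal_wait_pct,
    CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms)
         AS DECIMAL(5, 2)) AS resource_wait_pct
FROM sys.dm_os_wait_stats;
```

A high signal-wait percentage points at CPU pressure rather than slow resources, which is why query tuning is the suggested response.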
Lists objects with cursors that have not been closed or deallocated. Static cursors use the most memory and could therefore have the greatest impact if not closed correctly.
Returns database options that are enabled, as well as some other information such as compatibility level and recovery model.
Lists tables with columns that use the deprecated types text, ntext and image. These should be replaced by the newer types (n)varchar(max) and varbinary(max) to avoid your code breaking after upgrading SQL Server.
Lists indexes with fill factors of less than 90. You may want to review these and decide whether space is being wasted.
Lists untrusted foreign keys. These constraints will not be considered by the query optimizer.
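Untrusted constraints can be found via the sys.foreign_keys catalog view. A minimal sketch (the table and constraint names in the commented fix are hypothetical):

```sql
-- Find foreign keys the query optimizer will not trust
SELECT OBJECT_SCHEMA_NAME(parent_object_id) AS [schema],
       OBJECT_NAME(parent_object_id)        AS [table],
       name                                 AS foreign_key
FROM sys.foreign_keys
WHERE is_not_trusted = 1
  AND is_not_for_replication = 0;

-- Re-validate and re-trust one of them (hypothetical names):
-- ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
```

The doubled CHECK is deliberate: WITH CHECK re-validates existing rows, and CHECK CONSTRAINT re-enables the constraint.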
Lists tables that have no indexes. It is usually a good idea to have at least a clustered index on a table.
Lists tables that have a non-clustered index but no clustered index. There are exceptions where this might be acceptable.
Lists hypothetical indexes. They are usually created by the Database Engine Tuning Advisor but can also be created using T-SQL. It is best to ensure these are removed, both to avoid clutter and because there have been bugs related to them.
Lists tables with non-clustered indexes that have the ignore_dup_key option enabled. This option harms the performance of inserts and updates, as described in Craig Freedman’s article on MSDN.
Lists tables that have unused indexes. Before removing indexes, remember that the index usage stats DMV is cleared upon a server restart.
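One common way to find candidates is to join sys.indexes to sys.dm_db_index_usage_stats. Again a sketch only, and subject to the restart caveat above:

```sql
-- Non-clustered indexes with no recorded seeks, scans or lookups since the last restart
SELECT OBJECT_NAME(i.object_id) AS [table],
       i.name                   AS [index],
       s.user_updates           -- write cost still paid to maintain the index
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.index_id > 1  -- skip heaps and clustered indexes
  AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0;
```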
Lists indexes with a maximum key width over 900 bytes, which could result in error 1946 during an insert or update operation.
Lists tables with wide cluster keys. These should be reviewed, as cluster keys are also present in all non-clustered indexes and can therefore consume a lot of space, possibly unnecessarily.
InMemoryTables (2014 only)
Lists In-Memory OLTP tables along with their memory usage.
Returns I/O stall statistics for all databases, with the target database listed first.
Returns latch wait statistics. The latch stats DMV is also cleared upon a server restart.
Lists the top ten memory clerks and total memory usage.
Lists missing indexes for the target database with an estimated impact above 70 and an index advantage above 1,000.
Lists unindexed columns referenced by foreign keys. Deletes on the referenced tables will result in scans on the parent tables; evaluate whether indexes on the parent tables are required.
MissingIndexesProcsByCPU (2008-2014 only)
Returns missing index information for the ten stored procedures that consume the most CPU, filtered on indexes with an impact above 70.
MissingIndexesProcsByReads (2008-2014 only)
As above, but for the highest consumers of I/O.
Returns the plan cache size broken down by type. Look out for a bloated ad hoc cache store, which is a common problem.
Lists active server audits. Watch out for any synchronous server audits with a queue delay of 0; these can really hurt performance.
Returns non-default server configurations. Look out for a non-default fill factor: I prefer to leave this setting at the default of zero and assign each index its own fill factor, if required. Check that “max degree of parallelism” has been set according to the official guidelines from Microsoft. A common oversight is to leave the maximum server memory at its default of 2,147,483,647 MB, which is effectively unlimited. This is one setting where you will definitely want a non-default configuration.
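Checking and changing a setting such as maximum server memory is a straightforward sp_configure call. The 57344 MB figure below is an example only; size it for your own server:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server at 56 GB, leaving the remainder for the OS (example value only)
EXEC sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;
```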
A PowerShell script that returns information on the target server’s CPUs, BIOS, and memory, as well as SQL Server-specific details.
Lists active traces on the server. Beware of traces where is_rowset = 1, as this means somebody is running SQL Profiler against your server, which adds significant overhead.
Lists objects that prepare XML documents but do not remove them. Microsoft recommends always calling sp_xml_removedocument to avoid running out of memory.
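The pattern this check looks for is simple: every sp_xml_preparedocument should be paired with an sp_xml_removedocument, as in this minimal sketch:

```sql
DECLARE @hdoc INT;
EXEC sp_xml_preparedocument @hdoc OUTPUT, N'<root><row id="1"/></root>';

SELECT id
FROM OPENXML(@hdoc, '/root/row', 1)
WITH (id INT);

EXEC sp_xml_removedocument @hdoc;  -- release the handle, or its memory stays allocated
```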
Lists duplicate statistics with their sizes. This report uses the key_ordinal column to identify the first column of an index statistic, due to a bug/unexpected behaviour with the stats_column_id column.
Lists tables created with ANSI_NULLS off. This setting is deprecated and should no longer be used. Also, various features in SQL Server, such as indexed views and filtered indexes, will generate an error when this setting is off.
Lists tables with more than five indexes and an index count greater than half the column count. Having five indexes or more isn’t necessarily bad; it’s just a starting point for finding tables that possibly have an unnecessarily large number of indexes. They can often be consolidated into fewer indexes, saving space and improving write latencies.
Lists tables larger than 10 GB. These are good candidates for data compression, assuming you have SQL Server 2008 or a later version and the Enterprise edition.
Lists tables that have more indexes than columns. Not usually a very good sign!
Lists tables that have a uniqueidentifier as the first column of the clustered index. This is rarely a good idea, although there are exceptions, such as avoiding latch contention on ever-increasing keys.
Returns server wait statistics. These give a good indication as to where you should focus your performance-tuning efforts.

I deliberately didn’t include missing indexes resulting from ad hoc queries in the report due to the slow performance of the sys.dm_exec_query_stats DMV on servers with a lot of memory. If you find that the DMV returns results quickly on your servers, feel free to add a script to report on this.

I also decided not to include index fragmentation statistics, for the same reason: the performance of the sys.dm_db_index_physical_stats DMV can be really slow with large tables. If you do want to include this information, there are plenty of ready-made scripts out there on the various SQL Server forums.

The scripts for the stored procedure missing-index reports (MissingIndexesProcsByCPU.sql and MissingIndexesProcsByReads.sql) are not available for SQL Server 2005 instances, as the sys.dm_exec_procedure_stats DMV was only introduced in SQL Server 2008.

How Does It Work?

The batch file reads a list of SQL Server instance and database names from a text file and passes them as parameter values to a PowerShell script, which it executes in a loop, once for each database (see the “Permissions Required” section below for security issues). The PowerShell script does a health check on a single instance, putting the result in an Excel file with the same name as the instance. It does this by executing a set of diagnostic T-SQL and PowerShell scripts and exporting the metadata to CSV files. Finally, a VBA Excel macro is executed from PowerShell to merge the CSV files into an Excel spreadsheet and format the report. I could have implemented this part in PowerShell too, but as I had written a formatting macro a few years ago, I decided to reuse it.
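The driver logic amounts to something like the following. This is a hedged PowerShell sketch of the control flow, not the actual HealthCheck.bat code:

```powershell
# One HealthCheck.ps1 run per "server; database" line in DBList.txt
Get-Content .\DBList.txt |
    Where-Object { $_.Trim() } |          # skip blank lines
    ForEach-Object {
        $parts = $_ -split ';' | ForEach-Object { $_.Trim() }
        .\HealthCheck.ps1 $parts[0] $parts[1]
    }
```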

Software Required

The following software is required to run the health check:

  • PowerShell version 2.0 or higher (comes with Windows 7 and Windows Server 2008 R2).
  • Microsoft Excel 2007 or higher.
  • SQLPS PowerShell module, which is installed with the SQL Server Management Tools.

Permissions Required

A Windows account must be used with the following access to the target server and database:

Server Scope       

  • View Server State

Database Scope     

  • View Definition in target database.
  • ddl_admin to run DBCC SHOW_STATISTICS if the target server is pre-SQL 2012 SP1, otherwise db_datareader is enough.
  • Remote WMI access is required to run the ServerInformation.ps1 script.  The following article has details on how to grant this access to non-admin users.
    If the user does not have the necessary access, the report will still run but will not include the server information worksheet.
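Granting the minimum access might look like the following sketch. The login and database names are hypothetical, and on servers older than SQL Server 2012 you would use sp_addrolemember instead of ALTER ROLE:

```sql
USE master;
GRANT VIEW SERVER STATE TO [DOMAIN\HealthCheckUser];   -- hypothetical login

USE TargetDB;                                          -- hypothetical database
GRANT VIEW DEFINITION TO [DOMAIN\HealthCheckUser];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\HealthCheckUser];

-- Pre-SQL 2012 SP1 only, so DBCC SHOW_STATISTICS can run:
-- ALTER ROLE db_ddladmin ADD MEMBER [DOMAIN\HealthCheckUser];
```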

Customising the report

You can adjust the report by deleting or adding scripts, or by changing them as you would any other T-SQL.

  • To expand the range of the health check to meet your requirements simply add the T-SQL scripts that you need to the relevant scripts directory and they will be automatically executed the next time you run the health check. A corresponding worksheet will be added to the Excel report.
  • Similarly, to remove a worksheet, you just need to remove the unwanted T-SQL script from its directory or, if it’s a temporary change, rename the extension to something like “old”.
  • You can add or remove a column in an existing script and this will be reflected in the final report too.

The scripts are maintained at four different levels of compatibility, so you should repeat these operations for each of the directories Scripts_2005, Scripts_2008, Scripts_2012 and Scripts_2014.

The Code contains the following five files:

HealthCheck.ps1 – This PowerShell script controls the flow of the program.

After initializing some variables, the script determines the SQL Server version and database compatibility level (see Listing 1 below). Each major version has a directory with its own scripts, from SQL Server 2005 to SQL Server 2014.

Listing 1

Next, another PowerShell script is run to gather information on the target server before executing the SQL scripts from the relevant directory (see Listing 2 below). The output is temporarily stored in CSV files in the \Reports\Temp directory.

Listing 2

Databases which have compatibility level 80 (SQL Server 2000) will not be compatible with all of the scripts due to the presence of T-SQL operators such as APPLY and UNPIVOT. The affected scripts are, therefore, stored in a subdirectory called “80” and are executed in the context of the master database as shown in Listing 3.

Listing 3
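The call for a level-80 database differs only in its connection context. A hypothetical sketch using Invoke-Sqlcmd (the script file name is made up for illustration):

```powershell
# Run a compatibility-80 script in the context of master, then stage its output as CSV
Invoke-Sqlcmd -ServerInstance $server -Database master `
    -InputFile '.\Scripts_2005\80\SomeCheck.sql' |
    Export-Csv '.\Reports\Temp\SomeCheck.csv' -NoTypeInformation
```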

Finally, the CSV files are imported into an Excel spreadsheet using a VBA macro, which is stored in \Reports\Master.xlsm.

Listing 4

Master.xlsm – This contains the VBA code that imports the CSV files and formats the report. It contains five subroutines:

  • Loader – contains a loop that calls the CSV import subroutine.
  • ImportCSV – imports the CSV files and renames the worksheets.
  • FormatWorkSheets – adjusts column widths and aligns the text.
  • SelectAllMakeTable – formats the data as tables.
  • SortWorksheets – sorts worksheets into alphabetical order.

HealthCheck.bat – This is the executable that creates the report. It loops through the list of servers and databases in DBlist.txt and passes these names as parameters to the PowerShell script.

DBList.txt – Put the names of the target servers and databases in this file. Use a semi-colon as a delimiter, for example: Server1\InstanceA; DB1

  • For servers that don’t have the SQL Browser service enabled, you can specify the port number delimited by a back tick and a comma, as follows:
    • Server1`,1433; DB1

ErrorLog.txt – The PowerShell script appends errors to this file.


I hope you find this tool useful in evaluating and troubleshooting the performance of your SQL Servers. I tried to create something that would quickly provide a detailed health check, while also being customizable. There are hundreds more checks that you could include in the report, but my work here is done: what I’ve done works for me. Now it’s your turn!

Note: The latest source to the SQL Healthcheck utility is on GitHub.