Centralize Your Database Monitoring Process

SQL Server Data Collector, together with Management Data Warehouse, is a fine and useful component for gathering information centrally about how SQL Server instances are being used, and thereby keeping an eye out for problems. It comes into its own when you have figured out how to configure it to run on maybe hundreds of instances using Central Management Server. Dennes describes how to tame the system so that it scales.

Centralize Your Database Monitoring Process Using Data Collector and CMS

Although Data Collector is provided by Microsoft to monitor SQL Server instances and create baselines, it’s not so easy to maintain and manage the Data Collector configuration across hundreds of instances of SQL Server. Let’s see how to solve this problem using Central Management Server (CMS).

CMS allows us to run a script over a collection of SQL Server instances, as many as we would like. We can configure CMS to run scripts on our SQL Server instances and thereby configure Data Collector on all of them. Because the list of registered servers is stored in a SQL Server instance, it can be accessed as a team resource; all the DBAs on your team will have access to it.

Data Collector allows us to collect information from SQL Server instances. We create and manage collection sets, which specify the information to gather. All the information is sent to a central database called the Management Data Warehouse (MDW). Data Collector provides a few built-in collection sets and reports, and we can customize them if we need to.

This article aims to show you how to configure Data Collector on dozens of servers in a relatively easy way using CMS.

The configuration of CMS involves registering one server as the CMS, the central server to manage the script execution. We can use the same server instance for both tasks, hosting both the CMS and MDW databases.

To achieve our goal, we will need the following steps:

  1. Configure the MDW Database
  2. Configure the CMS
  3. Configure data collector through CMS
  4. Query the status of the server environment
  5. Configure Collection Sets through CMS
  6. Collect the information
  7. Check the MDW reports
  8. Check if the collection task is working

Test environment

In my test environment to demonstrate the system, I have four instances of SQL Server: one is the default instance and the other three are named SQL2, SQL3 and SQL4. I will use the default instance as the CMS server and also as the host for the MDW database.

Configuring MDW Database

We can use the SQL Server Management Studio (SSMS) interface to configure the MDW database. In the Object Explorer window, we first connect to the server. We then expand ‘Management’, right-click ‘Data Collector’ and click ‘Configure Management Data Warehouse’. This will start a wizard that you can follow to do the configuration.


Configuring MDW Wizard

After the wizard has finished running, you have not only the new database but also a new job named ‘mdw_purge_data_[yourdatabasename]’. This job is responsible for deleting the expired information from the MDW database, following the expiration rules that are configured in each collection set, for each SQL Server instance.

Configure the CMS

The CMS configuration can be done in the Registered Servers window of SSMS (SQL Server 2008 onwards): you will find the ‘Central Management Servers’ folder, and by right-clicking this folder you will find the ‘Register Central Management Server’ command.

After registering the CMS, you can then right-click the CMS registration and find the ‘Register Server…’ option. In our example, three instances will be registered.


It’s important to plan security when registering the servers. CMS only works with integrated security, so your logged-in account needs access to all the servers. You can use a single domain, multiple domains with trust relationships, or stand-alone servers with the same user name and password on all servers; the important point is that the logged-in account you will use to manage the servers needs access to all of them.


After all the servers are registered, you can right-click the CMS registration and use the ‘New Query’ command to open a query window. It will look like a regular query window except for the status at the lower left side indicating the number of servers connected through this query window.

All the queries that you run in this window will be executed against all the connected servers. The result of the query will have one additional column with the server name so you can identify the server that generated each row. It’s important to notice that the CMS server is never included for the execution of multi-server queries.

You can also execute queries against only some of the servers. To accomplish this, you will need to create server groups below the CMS registration and register servers inside those groups. You can open a query window either from the CMS server, to run queries against all the servers, or from a server group folder, to run queries against only the servers in that group.

Using CMS, it’s likely that you will need to move servers from one group to another. You can accomplish this by right-clicking the server and selecting ‘Move to…’.


Registered Servers and groups

Configure Data Collector through CMS

Configuring Data Collector requires you to set three parameters: The server that will host the MDW database (to where the data will be sent), the MDW database name and the cache directory.

The cache directory will be used to store information that has been collected but not yet uploaded to the MDW database. It is possible to specify asynchronous upload for any collection set, so you can use different schedules to gather the data and to upload it to the MDW. This is important for some kinds of information that need to be gathered at intervals of seconds but can be uploaded at intervals of minutes or hours.

To use the same script for all SQL Servers, we need to specify a path for the cache directory and use that same path on all servers. However, this presents a problem when the same physical server hosts more than one instance.

The solution for this problem is to use the ‘@@servername‘ value in the path. The script to configure Data Collector will look like this:
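The original script is not reproduced here; the following is a sketch of the kind of three-part script described below, built from the documented Data Collector procedures in MSDB. The MDW server name ‘MDWSERVER’ and database name ‘MDW’ are placeholders for your own environment.

```sql
-- Build a per-instance cache path; REPLACE handles named instances,
-- where @@servername contains a backslash (e.g. MYSERVER\SQL2).
DECLARE @path nvarchar(260) =
    N'C:\DCCACHE\' + REPLACE(@@servername, N'\', N'_');

-- Part 1: create the per-instance cache directory (first run only)
EXEC master.dbo.xp_create_subdir @path;

-- Part 2: point Data Collector at the MDW host and database (first run only)
EXEC msdb.dbo.sp_syscollector_set_warehouse_instance_name N'MDWSERVER';
EXEC msdb.dbo.sp_syscollector_set_warehouse_database_name N'MDW';
EXEC msdb.dbo.sp_syscollector_set_cache_directory @path;

-- Part 3: enable Data Collector (re-run whenever you need to re-enable it)
EXEC msdb.dbo.sp_syscollector_enable_collector;
```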

This script is for the first Data Collector configuration. For situations where you have disabled data collector and would like to re-enable it, you may divide this script into three parts. Bear in mind that:

  • You only need to create the cache directory the first time, so if you need to change the configuration or disable and re-enable Data Collector you may separate this part of the script.
  • Unless you actually need to change the configuration, you only need to configure Data Collector the first time. You can disable and re-enable Data Collector without changing the configuration.
  • The third part is the main one; you will use this procedure every time you need to enable Data Collector.

Notice that I’m not checking whether the path already exists; the script assumes that the ‘@@servername‘ directory doesn’t exist and that ‘C:\DCCACHE’ does.

You will need to adapt this script for your environment. Let’s see some options you can use:

  1. You can replace ‘DCCACHE‘ with a path that you are sure already exists on all servers.
  2. You can run a script before the one above to create the ‘DCCACHE‘ path using xp_create_subdir; this needs to run once per physical server, not per instance. There are plenty of other solutions, such as PowerShell.
  3. You can change the script above to check whether ‘DCCACHE‘ already exists and create it if it doesn’t. The additional steps, before the script above, will be like this:
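A sketch of those additional steps, assuming the undocumented but widely used extended procedure xp_fileexist, which returns whether a path exists and whether it is a directory:

```sql
-- Capture the output of xp_fileexist into a table variable
DECLARE @results TABLE (
    FileExists            int,
    FileIsDirectory       int,
    ParentDirectoryExists int);

INSERT INTO @results
    EXEC master.dbo.xp_fileexist 'C:\DCCACHE';

-- Create the directory only if it isn't already there
IF NOT EXISTS (SELECT * FROM @results WHERE FileIsDirectory = 1)
    EXEC master.dbo.xp_create_subdir 'C:\DCCACHE';
```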

Query the status of the environment

Up to this point, we have configured and enabled Data Collector but we have yet to do something about the collection sets.

Data Collector has four system collection sets: Disk Usage, Query Statistics, Server Activity and Utility Information.

The ‘Utility Information’ collection set is used by a feature called Utility Control Point (UCP) that’s beyond the scope of this article. In SQL Server 2014 we find another two collection sets, ‘Table Usage Analysis’ and ‘Stored Procedure Usage Analysis’, both used for the In-Memory features of SQL Server 2014, which are also beyond the scope of this article.

Firstly, we need to check whether Data Collector is enabled in all the instances after the previous script execution.

For this task we need to query the table ‘syscollector_config_store‘ in MSDB. The query will be like this:
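A sketch of such a query; ‘CollectorEnabled’ is one of the parameter names stored in ‘syscollector_config_store’, and a value of 1 means Data Collector is enabled:

```sql
-- Run through the CMS query window; the extra server-name column
-- identifies each instance in the combined result set.
SELECT parameter_name, parameter_value
FROM   msdb.dbo.syscollector_config_store
WHERE  parameter_name = 'CollectorEnabled';
```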


Now that we know that all three data collectors are enabled, we would like to know which collection sets are enabled.

We need to query the ‘syscollector_collection_sets‘ table in MSDB to check the ‘is_running‘ field and identify which collection sets are running. The query will be this:
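For example, along these lines:

```sql
-- is_running = 1 means the collection set is currently enabled
SELECT collection_set_id, name, is_running
FROM   msdb.dbo.syscollector_collection_sets;
```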


Configure Collection Sets

The next step is to control the configuration of each collection set. Each one can be configured as cached or non-cached. A non-cached collection set uploads the information immediately to the MDW database, while a cached one stores the information in the cache directory and uploads it on a different schedule.

The question is: Is the configuration correct? Is the schedule correct?

The answers to these questions are just one query away; we only need to make a few changes to the previous query:
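A sketch of the expanded query, joining to msdb.dbo.sysschedules to resolve the schedule name (in ‘collection_mode’, 0 means cached and 1 means non-cached):

```sql
SELECT cs.collection_set_id,
       cs.name,
       CASE cs.collection_mode
            WHEN 0 THEN 'cached'
            ELSE        'non-cached'
       END AS collection_mode,
       cs.days_until_expiration,
       sch.name AS schedule_name
FROM   msdb.dbo.syscollector_collection_sets cs
       LEFT JOIN msdb.dbo.sysschedules sch
              ON sch.schedule_uid = cs.schedule_uid;
```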


Now that we have the information we need, let’s understand the configuration and see how to customize it.

Enabling or Disabling a Collection Set

We can enable or disable any of these collection sets using two stored procedures, ‘sp_syscollector_start_collection_set‘ and ‘sp_syscollector_stop_collection_set‘, both in the MSDB database.

We just need to execute these procedures with the collection_set_id as a parameter to enable or disable any collection set and, with the use of CMS, we can do this for all the servers in a single execution.
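For example (collection_set_id = 3 is ‘Query Statistics’ on a default installation, but verify the id on your own servers first):

```sql
-- Disable the collection set...
EXEC msdb.dbo.sp_syscollector_stop_collection_set  @collection_set_id = 3;

-- ...and enable it again
EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 3;
```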

Collection Mode

The collection mode is directly related to the schedule of the collection set. If the schedule interval is very small, perhaps a matter of seconds, you will need to choose cached mode, collecting and uploading the information on different schedules.

On the other hand, if the schedule interval isn’t small, you can use non-cached mode and upload the data to the server on the same schedule used to collect it; there is no need for two schedules when the schedule interval isn’t small.

Schedule Name

The ‘schedule_name‘ field contains the name of a schedule (a SQL Agent schedule) that is in use by the collection set, but this can have different meanings according to the collection mode configuration.

If the ‘collection_mode‘ field is non-cached, the schedule is used both to collect the information and to upload it to the MDW database. On the other hand, if the ‘collection_mode‘ field is cached, the schedule is used only for the upload, not for collection. In fact, the previous query does not show the collection schedule of the cached collection sets.

Each collection set has several collection items and each collection item has its own schedule configuration (not a SQL Agent schedule). When the collection set is non-cached, this configuration is ignored, but when the collection set is cached, it is the schedule of the collection item that is used to collect the information.

Let’s check the schedule of each collection item of the cached collection sets:
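A sketch of that query, joining the collection items to their sets and filtering for cached mode (collection_mode = 0):

```sql
SELECT cs.name AS collection_set,
       ci.name AS collection_item,
       ci.frequency              -- collection interval, in seconds
FROM   msdb.dbo.syscollector_collection_sets cs
       JOIN msdb.dbo.syscollector_collection_items ci
         ON ci.collection_set_id = cs.collection_set_id
WHERE  cs.collection_mode = 0;   -- cached collection sets only
```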


The ‘frequency’ field is in seconds.

Days until Expiration

Exactly as the name of this field implies, ‘days_until_expiration‘ contains the number of days that the information will remain in the MDW database.

When and how should you change the configuration?

Once you’ve understood all these configuration options, it’s time to decide whether you need to change something. The relationship between ‘days_until_expiration‘, ‘frequency‘ and ‘schedule_name‘ has a direct impact on the size of the MDW database. Will you actually use all this information? Should you reduce ‘days_until_expiration‘? Should you increase ‘frequency’?

Most of the time, a good option is to reduce ‘days_until_expiration‘. Fourteen days of query and server activity data, and more than a year of disk usage data, are too much.

The ‘frequency‘ field can have an impact on server performance, but if you increase it too much some information can lose its meaning. The main point is: what will you actually use? The most common uses of the ‘Query Statistics’ collection set, for example, will not change if you increase the ‘frequency’ field.

Changing the configuration

You will use the procedure ‘sp_syscollector_update_collection_set‘ in the MSDB database to change the collection set configuration.

Here is a simplified syntax of this procedure:
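A simplified sketch of the parameters most relevant here (the procedure accepts more, such as @name, @new_name, @proxy_name and @logging_level):

```sql
EXEC msdb.dbo.sp_syscollector_update_collection_set
     @collection_set_id     = ... ,  -- identifies the set (or use @name)
     @collection_mode       = ... ,  -- 0 = cached, 1 = non-cached
     @days_until_expiration = ... ,  -- retention in the MDW database
     @schedule_name         = ... ;  -- a SQL Agent schedule name
```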

You can use this procedure to change the collection mode, days until expiration and the schedule. For the schedule, you can use the following query to identify which schedules are available:
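For example, the shared schedules that the Data Collector setup creates all follow a ‘CollectorSchedule’ naming pattern:

```sql
-- List the SQL Agent schedules created for Data Collector
SELECT schedule_id, name
FROM   msdb.dbo.sysschedules
WHERE  name LIKE 'CollectorSchedule%';
```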


You can create your own schedules. This is a simple SQL Server Agent schedule, but I will not include details of this option in this article.

Let’s see, as an example, how to change the configuration of the ‘Query Statistics’ collection set:
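A hedged example, assuming collection_set_id = 3 is ‘Query Statistics’ on your servers and that seven days of retention with a 30-minute upload schedule suits your needs; the set is stopped before the update and restarted afterwards:

```sql
-- Stop the set so its configuration can be changed
EXEC msdb.dbo.sp_syscollector_stop_collection_set   @collection_set_id = 3;

-- Reduce retention and change the upload schedule (illustrative values)
EXEC msdb.dbo.sp_syscollector_update_collection_set
     @collection_set_id     = 3,
     @days_until_expiration = 7,
     @schedule_name         = N'CollectorSchedule_Every_30min';

-- Start the set again
EXEC msdb.dbo.sp_syscollector_start_collection_set  @collection_set_id = 3;
```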

Now we need to change the frequency of the collection items. For this we will use the procedure ‘sp_syscollector_update_collection_item‘:
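A sketch, looking up the item id for the ‘Query Statistics’ set and raising its collection frequency to 60 seconds (an illustrative value, not a recommendation):

```sql
-- Find the collection item belonging to 'Query Statistics'
DECLARE @item_id int =
    (SELECT ci.collection_item_id
     FROM   msdb.dbo.syscollector_collection_items ci
            JOIN msdb.dbo.syscollector_collection_sets cs
              ON cs.collection_set_id = ci.collection_set_id
     WHERE  cs.name = 'Query Statistics');

-- Change how often the item gathers data, in seconds
EXEC msdb.dbo.sp_syscollector_update_collection_item
     @collection_item_id = @item_id,
     @frequency          = 60;
```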

Collect the information

Sometimes you will want to force the collection of the information instead of waiting for the collection schedules: This is especially true after the initial configuration, because you will need to force the collection so as to test everything.

There are two challenges:

  • We need to collect the information for each collection set.
  • The procedure that executes the collection differs according to the collection set configuration.

This sample script below does the job for only one collection set (collection_set_id=1):
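A sketch of such a script: for cached sets the data is collected continuously, so we only force the upload; for non-cached sets we run the collection itself, which also uploads.

```sql
-- Force collection/upload for collection_set_id = 1 ('Disk Usage' by default)
DECLARE @mode smallint =
    (SELECT collection_mode
     FROM   msdb.dbo.syscollector_collection_sets
     WHERE  collection_set_id = 1);

IF @mode = 0   -- cached: force the upload of the cached data
    EXEC msdb.dbo.sp_syscollector_upload_collection_set @collection_set_id = 1;
ELSE           -- non-cached: run the collection (which also uploads)
    EXEC msdb.dbo.sp_syscollector_run_collection_set    @collection_set_id = 1;
```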

We can execute this script for each collection set we have, changing the collection_set_id, or we can create a new script that does this task for all active collection sets. Let’s see the new script:
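A sketch of that looping script; note that OFFSET/FETCH requires SQL Server 2012 or later:

```sql
DECLARE @i int = 0, @total int, @id int, @mode smallint;

SELECT @total = COUNT(*)
FROM   msdb.dbo.syscollector_collection_sets
WHERE  is_running = 1;

-- Walk the active collection sets one row at a time using OFFSET/FETCH
WHILE @i < @total
BEGIN
    SELECT @id = collection_set_id, @mode = collection_mode
    FROM   msdb.dbo.syscollector_collection_sets
    WHERE  is_running = 1
    ORDER  BY collection_set_id
    OFFSET @i ROWS FETCH NEXT 1 ROWS ONLY;

    IF @mode = 0   -- cached: force the upload
        EXEC msdb.dbo.sp_syscollector_upload_collection_set
             @collection_set_id = @id;
    ELSE           -- non-cached: run the collection
        EXEC msdb.dbo.sp_syscollector_run_collection_set
             @collection_set_id = @id;

    SET @i += 1;
END;
```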

Using OFFSET/FETCH, we loop through all active collection sets (is_running = 1), check the ‘collection_mode‘ and execute the correct procedure to perform the collection. You can run this script through CMS every time you would like to collect the information manually. To see the result, you just need to check the Data Collector reports.

Data Collector Reports

The Data Collector reports are installed in the MDW database. We can right-click the MDW database to check the reports of the collection sets. Although the reports are available only through the SSMS interface, the data is inside the database, so we can query it in many ways.



Check if the collection process is working

To check whether the collection process is working, we need to query the table ‘syscollector_execution_log‘ in MSDB, finding the most recent log status for each collection set.

The query will be this:
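A sketch using ROW_NUMBER to pick the latest entry per collection set; in this log, a status of 0 means running, 1 means finished and 2 means failed:

```sql
WITH last_log AS (
    SELECT collection_set_id, start_time, status, failure_message,
           ROW_NUMBER() OVER (PARTITION BY collection_set_id
                              ORDER BY start_time DESC) AS rn
    FROM   msdb.dbo.syscollector_execution_log
)
SELECT collection_set_id, start_time, status, failure_message
FROM   last_log
WHERE  rn = 1;   -- most recent entry for each collection set
```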

You need to be sure that SQL Server Agent is running, because the log will not show problems if it isn’t.


Additional insights

This is just the beginning. There are many more details that will help to create a better data collector environment. As a summary:

  • You should create custom collection sets for your needs. You can collect not only information about the server from your databases, but also information about business processes; one example is the total sales from each of your stores.
  • SQL Profiler can translate its traces to Data Collector, creating custom collection sets to retrieve information from the servers. In the SQL Profiler menu, use ‘File -> Export -> SQL Trace Definition -> For SQL Trace Collection Set’.
  • You need to control the size of the MDW database. Check the execution of the mdw_purge_data job. If the database grows too much and the job doesn’t solve the problem, you may even need to customize this job.
  • Usually I would recommend transforming complex queries into stored procedures or functions, but in this case all the queries in this article are meant for use through CMS, so to create procedures or functions you would need to create and manage them on all SQL Server instances. This is certainly possible, and you can do it in your environment.
  • The limitation to only see reports in the SSMS interface isn’t good. Bill Ramos solved this problem by re-creating some of the reports for the report server, so we can install these reports and access them from anywhere. He developed a report for server activity and for query hash statistics. You can see the first of a 6 part series here: MDW Overview Report for Data Collector – MDW Report Series Part 1
  • Instead of creating a report for Query Statistics, Bill Ramos created a report for Query Hash Statistics, a custom collection set based on the query fingerprint feature. You can see details in this article – SQL Server Query Hash Statistics – but the download links for the Query Hash Statistics collection set are all broken. After some research and contacts I got the file, and it is available from the head of this article.


Data Collector and Management Data Warehouse provide a way of collecting a wide range of data about the behavior of a working database, but it is difficult to manage the process because the configuration of the process is done on each server that hosts a database that is being monitored. By using Central Management Server (CMS) it becomes relatively quick and easy to configure and control data collector in dozens or even hundreds of SQL Server instances, thereby centralizing information for large-scale server-environments.