Collect Your SQL Server Auditing and Troubleshooting Information Automatically

After many years, the Default Trace remains the simplest way of auditing SQL Server. It gives you a great deal of useful information about significant events, when they happened and, where relevant, the login associated with the event. The Default Trace is, however, now deprecated in favor of Extended Events and so has not evolved much over the years. Its biggest limitation is that it consists of only five files of 20 MB each, and they get overwritten frequently, especially in a busy SQL Server environment.

This article shows how you can get around this difficulty in order to maintain an unbroken record of trace events. This is only the start.

We then tackle the problem of maintaining a record of these default trace events for a whole group of servers and their databases, and of using this archive for reporting and for alerting on potential problems. We will do this by automating the process of extracting the default trace data from several SQL Server instances to a centralized location, persisting the data to a single database and preparing it for further analysis. The information that this can provide about the way these servers and databases are being used is extremely valuable and difficult to get any other way. We can, for example, get a complete record of every change to every database object, when it happened and who did it.

For this purpose, we will be using a Robocopy script, which offloads the default trace files from the remote servers, and an SSIS package, which imports the data into a database and then deletes the imported files.

The steps are as follows:

  • Configure Robocopy to access the remote server and store the default trace files locally
  • Configure the SSIS package to look for the default trace files copied by Robocopy

We’ll use Robocopy because the tool can be used to:

  1. monitor the folder on a remote server that contains the default trace files
  2. detect any changes and periodically copy over any changed file

We choose Robocopy over SSIS for this step because an SSIS package would have to be scheduled to run very frequently, and its copying process is not as lightweight.

Setting up Robocopy

The purpose of the Robocopy script in this case is to maintain a copy of the default trace files in a centralized location, since the default trace log files on the SQL Server instance are overwritten after a certain time.

This is a bit tricky to schedule, and the schedule depends on each individual SQL Server instance. On a very busy production server, for example, all five default trace files might be overwritten every 10 minutes, whereas on another SQL Server instance it may take five days for the files to be overwritten. How quickly the files are overwritten depends on the volume of traced events occurring on the system and also on instance restarts.

This is why it will take some investigation to understand the pattern and to schedule the Robocopy script in individual cases.
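To gauge how quickly the files roll over on a given instance, you can query the trace catalog views. The query below is a simple check of the default trace location and activity:

    -- Locate the default trace and get a feel for how busy it is.
    -- start_time, last_event_time and event_count indicate how quickly
    -- the current file fills up and rolls over.
    SELECT  t.[path],
            t.max_files,
            t.max_size,          -- MB
            t.start_time,
            t.last_event_time,
            t.event_count
    FROM    sys.traces AS t
    WHERE   t.is_default = 1;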

For the purposes of this article, I will set Robocopy to check for changes in the default trace files every 10 minutes, on the assumption that, in practice, this interval would be geared to the number of events being recorded in the trace on the individual server.

The Robocopy script looks at the default trace folder of the SQL Server instance and copies any changes over to a local folder.
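A sketch of such a command is shown below, with hypothetical server, share and folder names (SQLPROD01 as the monitored instance and AUDITSRV as the central auditing server); adjust the paths for your own environment:

    REM Sketch only: hypothetical share names and paths.
    REM Source: the Log folder of the monitored instance, which holds the log_*.trc files.
    REM Destination: a per-server sub-folder on the central auditing server.
    REM /XO        copy only files that are newer than the destination copy
    REM /R:2 /W:5  limit retries so a locked file does not stall the job
    REM /NP /LOG+: quiet output, appended to a log file for monitoring
    robocopy "\\SQLPROD01\D$\MSSQL\Log" "\\AUDITSRV\DefaultTraces\SQLPROD01" log_*.trc /XO /R:2 /W:5 /NP /LOG+:"\\AUDITSRV\DefaultTraces\robocopy_SQLPROD01.log"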

Note that the script uses UNC paths for the file storage locations. This means it is up to you to decide whether the Robocopy script is scheduled to run on the source machine or on the destination machine. (From my personal experience, it is better to have all Robocopy scripts run from the same destination machine; it is easier to monitor and maintain.)

Also note that the Destination folder contains a sub-folder for each monitored server. This is used later on in the configuration of the SSIS package.

Setting up the database

The database script creates the config table that drives the SSIS package, the scrubbing table that receives the raw trace rows, and the tables that hold each category of event.
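The full script is in the download; a minimal sketch of the config and scrubbing tables is shown below (the column names follow the descriptions later in this section, and the file path is a placeholder):

    -- Sketch only: config table that drives the SSIS package.
    CREATE TABLE dbo.ProcessingTrace_Config
    (
        ServerName  sysname       NOT NULL,
        TracePath   nvarchar(260) NOT NULL,  -- local folder holding the copied .trc files
        isActive    bit           NOT NULL CONSTRAINT DF_ProcessingTrace_Config_isActive DEFAULT (1)
    );
    GO

    -- Scrubbing table for the raw trace rows. One easy way to give it the same
    -- shape as the trace output is to create it from an existing trace file:
    SELECT TOP (0) *
    INTO   dbo.temp_trc
    FROM   sys.fn_trace_gettable(N'C:\DefaultTraces\SQLPROD01\log_1.trc', DEFAULT);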

After creating the objects, we have to populate the config table; a sample insert follows the column descriptions below.

The table contains three columns:

  1. Server name – the name of the server which is audited
  2. Trace path – the local folder where the default trace files are stored for the server
  3. isActive – this flag indicates whether the files should be processed
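A populated config table might then look like this (the server names and paths are placeholders):

    -- One row per monitored instance; placeholder names and paths.
    INSERT INTO dbo.ProcessingTrace_Config (ServerName, TracePath, isActive)
    VALUES (N'SQLPROD01', N'C:\DefaultTraces\SQLPROD01\', 1),
           (N'SQLPROD02', N'C:\DefaultTraces\SQLPROD02\', 1),
           (N'SQLTEST01', N'C:\DefaultTraces\SQLTEST01\', 0); -- copied but not processed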

Importing the default trace files

The SSIS package takes its configuration from the dbo.ProcessingTrace_Config table.
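In outline, the package starts by reading the active rows, with something like:

    -- Rows with isActive = 1 drive the loop over servers.
    SELECT ServerName, TracePath
    FROM   dbo.ProcessingTrace_Config
    WHERE  isActive = 1;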

A Foreach Loop container then executes once for every record returned from the config table, and imports each trace file into a scrubbing table called dbo.temp_trc.
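The import itself can be done with the fn_trace_gettable function, which reads a .trc file and returns its contents as a rowset. A sketch of the statement run for each file follows; in the package the file path would come from a variable rather than being hard-coded:

    -- Load one copied trace file into the scrubbing table.
    -- The second argument is 1 so that only the named file is read,
    -- rather than following the whole rollover sequence.
    INSERT INTO dbo.temp_trc
    SELECT *
    FROM   sys.fn_trace_gettable(N'C:\DefaultTraces\SQLPROD01\log_123.trc', 1);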

From there the default trace data is queried by event groups and merged into separate tables.

The idea is that, since we do not know how often the default trace files change for each server, and since the files have a maximum size of 20 MB each (though they may be much smaller), it is actually more efficient to import them all and merge them than to write custom logic to track which files have already been imported and which have not. (The performance overhead of importing 20 MB trace files and using the MERGE script is minimal. I performed a test by populating 1 million rows in each table using Redgate’s Data Generator, and even in that case the import was fast.)

In effect, the Robocopy script makes sure that the files are stored and kept up to date on our local storage, and we can then schedule the SSIS package to import them whenever we like.

The events are split into the following categories, and each category is represented by a database table:

  • FileGrowAndShrink
  • LogFileAutoGrowAndShrink
  • ErrorLog
  • SortAndHashWarnings
  • MissingStatsAndPredicates
  • FTSearch
  • AlteredObjects
  • CreatedUsersAndLogins
  • DroppedUsersAndLogins
  • LoginFailed
  • ServerStarts
  • MemoryChangesEvents
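
If you want to see which event classes actually appear in your staged trace data, and therefore which of these tables they will feed, you can join the scrubbing table to the sys.trace_events catalog view:

    -- Which events are present in the staged trace data, and how many of each.
    SELECT  te.trace_event_id,
            te.name,
            COUNT(*) AS event_count
    FROM    dbo.temp_trc AS t
    JOIN    sys.trace_events AS te
            ON te.trace_event_id = t.EventClass
    GROUP BY te.trace_event_id, te.name
    ORDER BY event_count DESC;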

A typical merge operation, for sort and hash warnings, is outlined below. (The rest are in the SSIS package that you can download from the link at the bottom of the article.) The scripts can be viewed here.
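The full script is in the download; the simplified sketch below shows the general pattern, merging the relevant event classes from dbo.temp_trc into the target table and using the event time, SPID and event class to avoid re-inserting rows already captured from an earlier copy of the same file (the target column list is an assumption):

    -- Simplified sketch: merge sort and hash warnings from the scrubbing table.
    -- Event class 69 = Sort Warnings, 55 = Hash Warning.
    MERGE dbo.SortAndHashWarnings AS target
    USING (
            SELECT DISTINCT
                   StartTime, SPID, EventClass, DatabaseName,
                   LoginName, HostName, ApplicationName
            FROM   dbo.temp_trc
            WHERE  EventClass IN (55, 69)
          ) AS source
    ON  target.StartTime  = source.StartTime
    AND target.SPID       = source.SPID
    AND target.EventClass = source.EventClass
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StartTime, SPID, EventClass, DatabaseName, LoginName, HostName, ApplicationName)
        VALUES (source.StartTime, source.SPID, source.EventClass, source.DatabaseName,
                source.LoginName, source.HostName, source.ApplicationName);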

After extracting and merging the data, the last step is to delete all the files from the filesystem that are older than 1 day.

Note that the scheduling of Robocopy and of the SSIS package is specific to each environment and depends on the systems being audited. If the default trace files are overwritten often on the source system, then we might want to run the Robocopy task and the SSIS package more often.

For the purposes of this article, I have set up the SSIS Script Component to delete files older than one day.

The C# script for the component simply walks the destination folders and removes the old files.
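The actual script ships in the downloadable package; the sketch below shows only the core clean-up logic, with a hypothetical class name, and assumes the root folder (the Destination folder used by Robocopy) is passed in; in the package it would come from an SSIS variable:

    using System;
    using System.IO;

    // Sketch of the clean-up logic only (hypothetical class and method names).
    public static class TraceFileCleanup
    {
        public static void DeleteOldTraceFiles(string rootFolder)
        {
            // Walk every per-server sub-folder created by the Robocopy script.
            foreach (string file in Directory.GetFiles(rootFolder, "*.trc", SearchOption.AllDirectories))
            {
                // LastWriteTime reflects when Robocopy last refreshed the copy;
                // anything older than a day has already been imported and merged.
                if (File.GetLastWriteTime(file) < DateTime.Now.AddDays(-1))
                {
                    File.Delete(file);
                }
            }
        }
    }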

Conclusions

This article shows how the default trace logs of a number of SQL Servers can be aggregated and preserved on a centralized auditing server, and then imported into a central auditing database via an SSIS task that filters and merges the results into a number of tables. These tables give a central record of a range of events that are useful for first-line problem diagnosis: database and log file growth and shrinkage, error log information, a variety of warnings, notice of created, altered or deleted database objects, users or logins, failed logins, server starts, and memory change events.

Now that we have all this information in one place for all our servers, we have the opportunity for first-line alerting on a number of signs that things are going wrong, telling us when we need to reach for our monitoring system to find out more about what is going on within that server, and perhaps also within a particular database.
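For example, a first-line check for failed logins across all monitored servers over the last day could be as simple as the query below (the column names are assumptions, following the default trace output plus a server name column):

    -- Failed logins recorded across all monitored servers in the last 24 hours.
    SELECT  ServerName, LoginName, HostName, ApplicationName, StartTime
    FROM    dbo.LoginFailed
    WHERE   StartTime > DATEADD(DAY, -1, GETDATE())
    ORDER BY StartTime DESC;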

With this database in place, a number of data-mining possibilities open up. We’ll go into more detail about this in a subsequent article.

The SSIS package is downloadable from the link at the head of the article, as is the SQL source of the scripts. You can view all the SQL merge scripts via the browser by clicking here.

About the author

Feodor Georgiev


Feodor has a background of many years working with SQL Server and is now mainly focusing on data analytics, data science and R.

Over more than 15 years Feodor has worked on assignments involving database architecture, Microsoft SQL Server data platform, data model design, database design, integration solutions, business intelligence, reporting, as well as performance optimization and systems scalability.

In the past 3 years he has expanded his focus to coding in R for assignments relating to data analytics and data science.

Alongside his day-to-day work, he blogs, shares tips on forums and writes articles.