{"id":1663,"date":"2013-07-02T00:00:00","date_gmt":"2013-07-02T00:00:00","guid":{"rendered":"https:\/\/test.simple-talk.com\/uncategorized\/collecting-the-information-in-the-default-trace\/"},"modified":"2021-08-24T13:39:53","modified_gmt":"2021-08-24T13:39:53","slug":"collecting-the-information-in-the-default-trace","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/database-administration-sql-server\/collecting-the-information-in-the-default-trace\/","title":{"rendered":"Collecting the Information in the Default Trace"},"content":{"rendered":"<div id=\"pretty\">\n<h2>Introduction<\/h2>\n<p>There&#8217;s important information in the default trace, and after a while it gets deleted. If you need to look back over a time-period to investigate a problem, you&#8217;ve got to find a way of preserving it.<\/p>\n<p>The default trace in SQL Server can help to diagnose a problem retrospectively because it logs the major events that happen, such as security audit , database and error events, along with database object changes . Not only does it provide information about the event, but it also identifies which database user made the changes. If you can find a way of preserving this information, then you can be certain of being able to find out what happened for certain.<\/p>\n<h2>What&#8217;s the Default Trace?<\/h2>\n<p>The Default Trace has been with the product since SQL Server 2005, but has now been deprecated and will be removed in a future version of Microsoft SQL Server . Extended events has superseded it in SQL Server 2012 with\u00a0 the default system_health extended event session. Having said that it is deprecated, I must add that it will continue to provide very valuable information for many years to come until the last servers running SQL Server 2005 and 2008 are upgraded . 
The newer system has a similar problem with making data persistent as the default trace does, but discussing any solution for this is beyond the scope of this article.<\/p>\n<p>The default trace is a system-defined trace which consists of five files of 20 MB each, residing in the SQL Server Log directory. These files are used as a circular buffer that records a large number of events. The information in this buffer is soon lost, since the oldest file is deleted on restart, or whenever the current file reaches 20 MB in size and a new one is started. In a busy OLTP system this could be a matter of minutes, and some maintenance tasks can fill up the trace files quickly too. In this article, we&#8217;ll go on to describe various ways of getting around this problem.<\/p>\n<h2><span class=\"Heading2Char\">Where is it? Is it running<\/span>?<\/h2>\n<p>By default, the trace is enabled. 
When the advanced option <b>show advanced options<\/b> of <b>sp_configure<\/b> is set, <b>sp_configure<\/b> will list the option and whether it is enabled.<\/p>\n<p>This is how you can check via script whether the default trace is running on a SQL Server instance:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">EXEC sp_configure 'show advanced options', 1;\r\nRECONFIGURE;\r\nCREATE TABLE #Options\r\n  (name VARCHAR(68), minimum INT, maximum INT, config_value INT, run_value INT);\r\nINSERT INTO #Options\r\nEXECUTE sp_configure;\r\nSELECT CASE WHEN EXISTS\r\n            (SELECT * FROM #Options\r\n             WHERE name = 'default trace enabled' AND run_value = 1)\r\n       THEN 'Enabled' ELSE 'Disabled'\r\n       END AS DefaultTraceStatus;\r\nDROP TABLE #Options;\r\n<\/pre>\n<p class=\"caption\">Listing 1: TSQL batch to determine if the default trace is running<\/p>\n<p>If it isn&#8217;t enabled, you can, if your userID has the necessary rights, start it running&#8230;<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">EXEC master.dbo.sp_configure 'allow updates', 1;\r\nEXEC master.dbo.sp_configure 'show advanced options', 1;\r\nEXEC master.dbo.sp_configure 'default trace enabled', 1;\r\nRECONFIGURE WITH OVERRIDE;\r\nEXEC master.dbo.sp_configure 'show advanced options', 0;\r\nEXEC master.dbo.sp_configure 'allow updates', 0;\r\nRECONFIGURE WITH OVERRIDE;\r\n<\/pre>\n<p class=\"caption\">Listing 2: TSQL Batch to start the default trace<\/p>\n<h2>What&#8217;s in the trace?<\/h2>\n<p>You can determine what is in the default trace by using this code:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">SELECT DISTINCT e.name AS EventName\r\nFROM   fn_trace_geteventinfo(1) evi -- the default trace normally has trace_id = 1\r\nJOIN   sys.trace_events e ON evi.eventid = e.trace_event_id\r\n<\/pre>\n<p class=\"caption\">Listing 3: Expression to return the event types collected by the default trace<\/p>\n<p>Here is what is collected in the default trace, by 
category:<\/p>\n<div>\n<p><b><i>Database events<\/i><\/b><\/p>\n<ul>\n<li>Data file auto grow<\/li>\n<li>Data file auto shrink<\/li>\n<li>Database mirroring status change<\/li>\n<li>Log file auto grow<\/li>\n<li>Log file auto shrink<\/li>\n<\/ul>\n<p><b><i>Errors and warnings<\/i><\/b><\/p>\n<ul>\n<li>Errorlog<\/li>\n<li>Hash warning<\/li>\n<li>Missing Column Statistics<\/li>\n<li>Missing Join Predicate<\/li>\n<li>Sort Warning<\/li>\n<\/ul>\n<p><b><i>Full text events<\/i><\/b><\/p>\n<ul>\n<li>FT Crawl Aborted<\/li>\n<li>FT Crawl Started<\/li>\n<li>FT Crawl Stopped<\/li>\n<\/ul>\n<p><b><i>Object events<\/i><\/b><\/p>\n<ul>\n<li>Object Altered<\/li>\n<li>Object Created<\/li>\n<li>Object Deleted<\/li>\n<\/ul>\n<p><b><i>Security audit events<\/i><\/b><\/p>\n<ul>\n<li>Audit Add DB user event<\/li>\n<li>Audit Add login to server role event<\/li>\n<li>Audit Add Member to DB role event<\/li>\n<li>Audit Add Role event<\/li>\n<li>Audit Add login event<\/li>\n<li>Audit Backup\/Restore event<\/li>\n<li>Audit Change Database owner<\/li>\n<li>Audit DBCC event<\/li>\n<li>Audit Database Grant, Deny, Revoke<\/li>\n<li>Audit Login Change Property event<\/li>\n<li>Audit Login Failed<\/li>\n<li>Audit Login GDR event<\/li>\n<li>Audit Schema Object GDR event<\/li>\n<li>Audit Schema Object Take Ownership<\/li>\n<li>Audit Server Starts and Stops<\/li>\n<\/ul>\n<\/div>\n<h2>How do I read the default trace?<\/h2>\n<p>The default trace is a standard *.trc file which can be opened by SQL Profiler, or it can be queried by using the sys.fn_trace_gettable system function. We&#8217;ll show how later on in the article.<\/p>\n<p>Here is a script which can tell us the exact path of the Default Trace files:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\\', REVERSE(path)), 256)) AS [DefaultTracePath]\r\nFROM   sys.traces\r\nWHERE  is_default = 1\r\n<\/pre>\n<p class=\"caption\">Listing 4: SELECT statement 
that returns the exact path of the default trace files<\/p>\n<p>Here is an example of a T-SQL script which queries the most recent file from the Default trace:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">SELECT *\r\nFROM   sys.fn_trace_gettable(CONVERT(VARCHAR(150),\r\n           ( SELECT TOP 1 f.[value]\r\n             FROM   sys.fn_trace_getinfo(NULL) f\r\n             WHERE  f.property = 2\r\n           )), DEFAULT) T\r\n       JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id\r\n<\/pre>\n<p class=\"caption\">Listing 5: Selecting the most recent default trace file as a table<\/p>\n<h2>Saving information for later 
use<\/h2>\n<p>What can we do to persist the default trace data and make it available for aggregations and analysis at a later time?<\/p>\n<p>There are several options that come to mind:<\/p>\n<ol>\n<li>We could query the default trace files and save aggregated values<\/li>\n<li>We could get the definition of the default trace and create a collection set, which can be used with the Management Data Warehouse that is built in to SQL Server<\/li>\n<li>We could get the definition of the default trace and create a remote trace from another server and save the data to a database table<\/li>\n<li>We could use a robocopy script to copy the trace files from our server to a different folder or a remote machine, which will leave the files on disk for a longer time, giving us the possibility to analyze them<\/li>\n<\/ol>\n<p>Let&#8217;s look at these options in more detail:<\/p>\n<h3>Querying the default trace files and saving aggregated values<\/h3>\n<p>A while back I wrote an article on the default trace (<a href=\"https:\/\/www.simple-talk.com\/sql\/performance\/the-default-trace-in-sql-server---the-power-of-performance-and-security-auditing\/\">The Default Trace in SQL Server &#8211; The Power of Performance and Security Auditing<\/a>), which was followed by a set of custom reports which show aggregations from the default trace directly from Management Studio (<a href=\"http:\/\/sqlconcept.com\/tools\/default-trace-audit-documentation-and-faq\/\">http:\/\/sqlconcept.com\/tools\/default-trace-audit-documentation-and-faq\/<\/a>).<\/p>\n<p>In this case, however, the real challenge is to save the aggregated values only once, and for a prolonged period of time. 
We do not know exactly when the events will occur and when the trace files will roll over, hence it is not so easy to take meaningful aggregations at regular intervals.<\/p>\n<p>Of course, we can always write a script which runs every N minutes and aggregates the data for the past N minutes.<\/p>\n<p>For example, here is a script which may be scheduled to query the trace every 5 minutes and will give an aggregated result for the occurrences of sort warnings and hash warnings in the past 5 minutes:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">SELECT TE.name AS [EventName] ,\r\n       V.subclass_name ,\r\n       T.DatabaseName ,\r\n       T.ApplicationName ,\r\n       COUNT(*) AS TotalCount\r\nFROM   dbo.fn_trace_gettable(( SELECT REVERSE(SUBSTRING(REVERSE(path),\r\n                                      CHARINDEX('\\', REVERSE(path)),\r\n                                      256)) + 'log.trc'\r\n                               FROM   sys.traces\r\n                               WHERE  is_default = 1\r\n                             ), DEFAULT) T\r\n       JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id\r\n       JOIN sys.trace_subclass_values V ON V.trace_event_id = TE.trace_event_id\r\n                                       AND V.subclass_value = T.EventSubClass\r\nWHERE  StartTime &gt; DATEADD(mi, -5, GETDATE())\r\n       AND ( TE.name = 'Hash Warning'\r\n             OR TE.name = 'Sort Warnings' )\r\nGROUP BY TE.name ,\r\n       V.subclass_name ,\r\n       T.DatabaseName ,\r\n       
T.ApplicationName\r\n<\/pre>\n<p class=\"caption\">Listing 6: Selecting the number of sort and hash warnings in the past 5 minutes<\/p>\n<p>It is worth noting that the script queries all 5 default trace files, since we do not know whether the last 5 minutes are contained only in the most recent file or in any number of them.<\/p>\n<p>Querying all the trace files in a single pass is ensured by passing the trace path derived by the code in Listing 4, with 'log.trc' appended, as the first parameter of the dbo.fn_trace_gettable function.<\/p>\n<h3>Collection Set and Management Data Warehouse<\/h3>\n<p>Management Data Warehouse (MDW) is a part of SQL Server which provides the capability of auditing performance and saving the results directly in a relational database. The MDW provides the administrator with some built-in collection sets and reports (Disk Usage, Query Statistics, Server Activity) and also with the ability to create custom user-defined collection sets.<\/p>\n<p>Here are a few words on how to extract the definition of the Default trace.<\/p>\n<p>As previously mentioned, we can find out the folder in which the default trace files are saved by running the script given in Listing 4.<\/p>\n<p>After navigating to the folder we can double-click one of the trace files and it will open in SQL Profiler.<\/p>\n<p>From here, the usual steps would be to go to File &gt; Export &gt; Script Trace Definition&#8230; however, we cannot go further because the options are grayed out.<\/p>\n<p>We can get around this by saving the Default trace definition as a template first: go to File &gt; Save as &gt; Trace Template. 
After the template is saved, we have to run it, stop it, and then we can extract the definitions.<\/p>\n<p>There are several options under Script Trace Definition: for SQL Server, or for a SQL Trace Collection Set.<\/p>\n<p>If we script it for SQL Server, then we will get a T-SQL script which can be run to start a remote server trace, and if we script it as a collection set, we will get code which we can use to define a collection set for use in MDW.<\/p>\n<p>The problem with user-defined collection sets, however, is that the user needs to create custom reports in order to consume the data from them. Purging the data has some issues, as well.<\/p>\n<h3>User-defined Remote Server default trace<\/h3>\n<p class=\"MsoNoSpacing\">As mentioned above, there is a way to script the definition of the default trace as a T-SQL script. The script can be used to run a server-side trace and save the results to a file. The advantages of this method are that the user can define the location and the size of the files.<\/p>\n<p class=\"MsoNoSpacing\">The disadvantage is that we cannot write the trace results directly to a SQL Server table; however, we can import them at a later time by using a script similar to this:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">SELECT * INTO trace_table FROM ::fn_trace_gettable('c:\\my_trace.trc', default)\r\n<\/pre>\n<p class=\"caption\">Listing 7: Importing a trace file into SQL Server<\/p>\n<p class=\"MsoNoSpacing\">This method is also great if we have several servers we would like to monitor and if we want to save the trace files to a UNC path so that the performance of the servers is not affected.<\/p>\n<h3>Use Robocopy to pull out default trace files<\/h3>\n<p>Another way to save the default trace files before they are rolled over is to use robocopy. 
This is a great tool which nowadays is part of Windows Server and is used to monitor source and destination folders and synchronize them.<\/p>\n<p>The idea in our case is that we do not know when the changes to the default trace files will occur, and we would like to save the files to a remote folder before they are overwritten.<\/p>\n<p>Robocopy is perfect for this, because it has a parameter which does exactly what we need: it checks the source folder every minute, and if more than N changes have occurred, it copies the changed files.<\/p>\n<p>We can use this to reliably save up to 100 files, because the name of the trace file cycles through 100 different names: log_xx.trc (where xx is a number between 00 and 99).<\/p>\n<p>Here is how it works:<\/p>\n<p>For this example I set up a destination folder called c:\\Robocopy\\Destination on my local disk, but I could also use a UNC path, or any network folder. Just to make this example colorful, I will actually use the UNC path for the local folder: \\\\myMachine\\c$\\Robocopy\\Destination<\/p>\n<p>Then I ran the script mentioned above to get the trace files location folder, which in my case is C:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.SQL2008\\MSSQL\\Log.<\/p>\n<p>I am only interested in *.trc files, so I will make sure to filter for them in my robocopy script.<\/p>\n<p>Here is the actual robocopy script:<\/p>\n<pre>robocopy \"C:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.SQL2008\\MSSQL\\Log\" \"\\\\myMachine\\c$\\Robocopy\\Destination\" *.trc \/MON:1\r\n<\/pre>\n<p class=\"caption\">Listing 8: Routine to set up Robocopy to check every minute and copy any changed file<\/p>\n<p>The \/MON:1 parameter instructs robocopy to monitor the source folder every minute and to copy new and changed files to the destination folder if there are any.<\/p>\n<p>Open a command line and execute the robocopy script.<\/p>\n<p>The command line result will look similar to this:<\/p>\n<p 
class=\"illustration\"><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1829-Robocopy.PNG\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1829-clip_image002.jpg\" alt=\"1829-clip_image002.jpg\" width=\"624\" height=\"235\" \/><\/a><\/p>\n<p class=\"caption\">Figure 1: Executing Robocopy from the command line<\/p>\n<p>Now you can simply attempt to login to your SQL Server instance with a nonexistent login, i.e. make sure you create a login failed event in the default trace.<\/p>\n<p>Wait for a minute and look at the trace file (either by querying it, or by simply opening it with SQL Profiler) in the destination folder &#8211; it will have the failed login recorded.<\/p>\n<p>The command prompt window will look similar to this:<\/p>\n<p class=\"illustration\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1829-clip_image004.jpg\" alt=\"1829-clip_image004.jpg\" width=\"624\" height=\"190\" \/><\/p>\n<p class=\"caption\">Figure 2: The status information from Robocopy<\/p>\n<p>There is one small problem with this method &#8211; the robocopy in this case is not cleaning up the destination folder. However, we can do it ourselves after we have made sure that we have extracted whatever we need from the default trace data.<\/p>\n<p>From here on, since we have the default trace files safe from overwriting, we can consume the data in several ways. 
We could use similar T-SQL to query and filter certain events:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">DECLARE @traceFilesLocation VARCHAR(256)\r\nSET @traceFilesLocation = '\\\\myMachine\\c$\\Robocopy\\Destination\\log.trc'\r\n\r\nSELECT TE.name AS [EventName] ,\r\n       V.subclass_name ,\r\n       T.DatabaseName ,\r\n       T.ApplicationName ,\r\n       COUNT(*) AS TotalCount\r\nFROM   dbo.fn_trace_gettable(@traceFilesLocation, DEFAULT) T\r\n       JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id\r\n       JOIN sys.trace_subclass_values V ON V.trace_event_id = TE.trace_event_id\r\n                                       AND V.subclass_value = T.EventSubClass\r\nWHERE  StartTime &gt; DATEADD(hh, -24, GETDATE())\r\n       AND ( TE.name = 'Hash Warning'\r\n             OR TE.name = 'Sort Warnings'\r\n           )\r\nGROUP BY TE.name ,\r\n       V.subclass_name ,\r\n       T.DatabaseName ,\r\n       T.ApplicationName\r\n<\/pre>\n<p class=\"caption\">Listing 9: Scanning all archived files for an aggregation of sort and hash warnings over the past 24 hours<\/p>\n<p>Further, we could develop custom logic, for example by using SSIS, which will import the trace files into a SQL Server table (by using the dbo.fn_trace_gettable function) and then we can 
use T-SQL to query the imported rows in the table. By using this method we can ensure that historical default trace data is available at our fingertips and that we can aggregate and generate reports from the data in a much more efficient way.<\/p>\n<h2>Conclusion<\/h2>\n<p>In this article I provided several methods of retaining the data from the default trace in SQL Server before it is overwritten and lost forever. This is a great way to retain data for security and performance auditing and to use it for alerting and notifications. By saving the default trace, it becomes possible to automate daily audit reports, or keep a reliable check on the database objects that are changed, and by whom. By copying the files, the overhead of analyzing the trace data can be delegated to a reporting system, thereby freeing a production system of any extra burden.<\/p>\n<h2>Further Reading<\/h2>\n<ul class=\"reference-list\">\n<li><a href=\"http:\/\/www.sqlservercentral.com\/articles\/SQL+Server+2005\/64547\/\">Default trace &#8211; A Beginner&#8217;s Guide<\/a><\/li>\n<li><a href=\"https:\/\/blogs.msdn.microsoft.com\/askjay\/2012\/06\/28\/default-trace-and-system-health\/\">Default Trace and System Health<\/a><\/li>\n<li><a href=\"https:\/\/www.simple-talk.com\/sql\/performance\/the-default-trace-in-sql-server---the-power-of-performance-and-security-auditing\/\">The default trace in SQL Server &#8211; the power of performance and security auditing<\/a><\/li>\n<li><a href=\"http:\/\/www.sqlsherwin.com\/2012\/10\/who-did-that-default-trace-knows.html\">Who Did That? The Default Trace Knows<\/a><\/li>\n<\/ul>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The default trace is still the best way of getting important information to provide a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events and changes to both database settings and schemas. 
The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes and provide forensic evidence.&hellip;<\/p>\n","protected":false},"author":221902,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143527],"tags":[4168,4170,4619,4150,4151],"coauthors":[11305],"class_list":["post-1663","post","type-post","status-publish","format-standard","hentry","category-database-administration-sql-server","tag-database","tag-database-administration","tag-security","tag-sql","tag-sql-server"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/1663","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/221902"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=1663"}],"version-history":[{"count":7,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/1663\/revisions"}],"predecessor-version":[{"id":73418,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/1663\/revisions\/73418"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=1663"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=1663"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=1663"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=1663"}
],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}