Walkthroughs

Find out how to easily set up a regular backup and restore schedule and get faster, smaller, verified SQL Server backups.

Database backup, restore, and verification

This walkthrough exploring the new features in SQL Backup Pro 7 covers:

  • Using the Schedule Backup Jobs wizard to run database checks and create reminders to verify your backups.
  • Launching the Schedule Restore Jobs wizard from the Reminder tab.
  • Scheduling a restore job to fully verify your backups, even if they do not exist yet.

Schedule a backup job

Step 1: Launch the Schedule Backup Jobs wizard

To schedule a backup of our database we launch the Schedule Backup Jobs wizard from the button on the toolbar.

By choosing the Schedule Backup Jobs wizard (rather than the Back Up wizard) we will have the option to create a reminder to verify our backups.

Step 2: Choose to perform a new backup or use previously saved settings

We now select the server from which we wish to back up the database. We can also choose to use settings from a template that we can customize and configure. In this case, we're going to create a new backup.

Step 3: Choose the database and the type of backup to create

Next, we choose the type of backup we want to create and select the database we want to back up. If full backups have previously been made, we can decide to perform differential or transaction log backups. In this walkthrough, we will create a full database backup.

Step 4: Create a backup schedule

Here, we can create a schedule for our backups. In this walkthrough, we're creating a full backup and we're scheduling it to occur every weekday at 18.00. We could opt for more frequent backups to suit our needs, and can decide whether we want this schedule to run for a finite period, or run indefinitely.

Step 5: Configure backup file settings

In this step, we decide whether to back up to a single file, a single file mirrored to a second location, or split the backup over multiple, smaller files. Splitting the backup across multiple files can reduce the time needed to create it.

Here, we can also specify the file name of the backup or have it named automatically.

If we need to purge older backups as part of the process, we can choose to overwrite existing files or delete existing backup files by age or quantity.

If we want to copy the backup to a network location, we can also specify the destination folder and whether to purge existing backups at that location.

Step 6: Configure compression, encryption, network resilience and optimization settings for the backup

Next, we can select one of four compression levels for our backup. If we are unsure which compression level to use, the Compression Analyzer helps us by sampling data to suggest the optimal compression level for our database.

Here, we can also encrypt our data. With SQL Backup Pro, we can choose between 128-bit and 256-bit AES encryption.

SQL Backup Pro lets us optimize the performance of our backup by selecting the number of threads to use for the backup process. We can also choose a maximum transfer size and data block size for memory and disk optimization.

We can also protect our backup process against network outages, which can interrupt backup operations and cause job failure. SQL Backup Pro is designed to minimize the disruption caused by network outages by automatically continuing the data transfer once the connection is re-established. By default, SQL Backup Pro retries connecting to the network after 30 seconds, up to 10 times.

Step 7: Configure backup checks, reminders and email notification

This new step gives us the option to test backup files when they are created, by including CHECKSUM and RESTORE VERIFYONLY in the backup jobs.

CHECKSUM will validate any page checksums and generate a backup checksum. RESTORE VERIFYONLY will check the backup is complete and readable, and will validate the checksums.
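
For reference, these two checks map onto standard SQL Server backup options; a minimal native T-SQL sketch, with a hypothetical database name and path, looks like this:

-- Back up with a backup checksum, validating page checksums as pages are read
BACKUP DATABASE [VeryImportant]
TO DISK = N'D:\Backups\VeryImportant_Full.bak'
WITH CHECKSUM;

-- Confirm the backup is complete and readable, and revalidate the checksums
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\VeryImportant_Full.bak'
WITH CHECKSUM;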

For jobs that include a full backup, we can also select 'Create reminder', which adds a reminder to schedule a restore job to fully verify the backups. If we launch the Schedule Restore Jobs wizard from the Reminders tab, the wizard will be populated with details from the backup job. This will be demonstrated later in this walkthrough.

We can choose to be notified by email, with a copy of the completion log. By default, email notifications are sent only if errors occur during the backup process, such as a checksum failure. We can modify the email settings and choose to receive an email only when there are errors or warnings, or to be informed of any outcome.

Step 8: Review a summary of the options chosen and complete

Before we confirm the backup schedule we can review a summary of all the options we've chosen, and view the SQL script for use with the command line utility or stored procedure. We then click Finish to create the scheduled backup job.
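
For illustration, the generated script wraps a SQL Backup command in the product's extended stored procedure; the exact keywords depend on the options chosen in the wizard, so treat the following, with its hypothetical database name and its COMPRESSION, CHECKSUM, VERIFY, and ERASEFILES options, as indicative only:

-- Indicative shape of a generated SQL Backup Pro script (options vary by wizard choices)
EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [VeryImportant]
TO DISK = ''D:\Backups\<AUTO>.sqb''
WITH COMPRESSION = 2, CHECKSUM, VERIFY, ERASEFILES = 7"';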

The job is listed in the Jobs tab and a reminder is added to the Reminders tab. The results of the CHECKSUM and RESTORE VERIFYONLY will be recorded in the Activity History.

Using the reminders

Step 9: How to set up a reminder

We can create a reminder to verify our backups when we create a scheduled full backup job using the Schedule Backup Jobs wizard (see step 7 above). A reminder is created for each database included in the scheduled backup job and can be found in the Reminders tab.

Step 10: The Reminders tab

The Reminders tab displays reminders to set up backup verification for each of our database backups. We can use SQL Backup Pro to verify our backups by creating a scheduled restore job to regularly restore a database from the latest backup and run a database integrity check (DBCC CHECKDB) to verify the logical and physical integrity of all its objects.

When we're ready to create a scheduled restore job, we go to the Reminders tab, find the reminder for the database we want to verify, and click Schedule a restore now. Launching the Schedule Restore Jobs wizard in this way will ensure that the wizard is populated with details from the scheduled backup job.

Schedule a restore job

Step 11: Launch the Schedule Restore Jobs wizard

In this walkthrough, we will launch the Schedule Restore Jobs wizard from the Reminders tab, so the steps of the wizard will be populated with details from the backup.

NB: If we want to schedule a backup restore, but haven't come through the Reminders tab, we can simply launch the wizard from the SQL Backup Pro user interface. If we launch the wizard like this, it will not open pre-populated with any details of a scheduled backup.

Step 12: Select the destination server and backups to restore

In this first step, we will find the destination server set to the server on which the scheduled backup job was created. For information on scheduling restore jobs to a different server, see Backing up and restoring on a network share.

The rest of the wizard will also be populated with details from the backup job. As we're setting up a regular restore schedule, 'Restore from' will be set to 'folder and file name pattern'; because each backup is created with a unique file name, it is not possible to specify the exact backups to restore each time.

The folder, file name pattern, source database name, and the type of backup we want SQL Backup Pro to look for when it performs the scheduled restore will already be filled in for our convenience.

Step 13: Choose whether to overwrite an existing database or restore the backup to a new database

In this step, we decide whether to overwrite an existing database, or whether to create a new database. 'Create new' is selected by default, and the new database name is pre-populated with the name of the source database followed by _Verification, though we can change this to any name that suits us.

We'll find that the checkbox 'Overwrite on subsequent restores' is also selected by default. We want to choose this option because we're creating a schedule for regular restores, and this will ensure that subsequent restores will not fail because a database of the same name already exists.

Step 14: Choose where to restore the database files to

Here, we can choose the file locations for the restored database. We have the option to restore 'All files to default locations', 'Data and log files to specific locations', or restore 'Individual files to specific locations'.

Again, we'll find the grid pre-populated with details from the source database, including the logical names of the database files at the time the reminder was created. We could change the 'restore to' file location and also add or remove files from the database to restore if we needed to.
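
In native T-SQL terms, restoring to a new database with relocated files uses MOVE clauses; a sketch with hypothetical logical file names and paths:

-- Restore the backup to a verification copy, relocating the data and log files
RESTORE DATABASE [VeryImportant_Verification]
FROM DISK = N'D:\Backups\VeryImportant_Full.bak'
WITH MOVE N'VeryImportant'     TO N'E:\Data\VeryImportant_Verification.mdf',
     MOVE N'VeryImportant_log' TO N'F:\Logs\VeryImportant_Verification.ldf',
     REPLACE;  -- corresponds to overwriting on subsequent restores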

Step 15: Specify the verification options for the restored database

In step 4 of the wizard, we can set the level of verification we want to run on the backup(s).

We want to make sure that the database integrity check (DBCC CHECKDB) is selected, as this will fully verify the backup. We can also choose from any additional options we want to include in the database integrity check. We can opt for a more or less thorough check, and we can specify the amount of feedback we want returned.
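
The integrity check itself corresponds to DBCC CHECKDB; the options below are standard ways of adjusting how thorough the check is and how much feedback is returned (hypothetical database name):

-- Full logical and physical integrity check, suppressing informational messages
DBCC CHECKDB (N'VeryImportant_Verification') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- A faster, lighter check of physical (page-level) integrity only
DBCC CHECKDB (N'VeryImportant_Verification') WITH PHYSICAL_ONLY;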

Step 16: Select options for dropping the database following restore

Also in step 4 of the Schedule Restore Jobs wizard, we can select the option to 'Drop database following a restore'. This is really useful if we want to free the space used by the database between restore jobs.

We can choose to 'Always' drop the database, or to drop it only 'On success'. If we choose not to drop the database at all, it will be overwritten on the next restore because we selected 'Overwrite on subsequent restores' in step 2 of the wizard.
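
Conceptually, the drop step amounts to the following (hypothetical verification database name):

-- Remove the verification copy to free its space between restore jobs
IF DB_ID(N'VeryImportant_Verification') IS NOT NULL
    DROP DATABASE [VeryImportant_Verification];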

Step 17: Configure restore options for the destination database

Here we can select whether we want the recovery completion state of the restore to be 'Operational' or 'Read only'.

We can also choose to receive an email notification with a copy of the completion log, including any errors that arise during the database integrity check. By default, email notifications are sent only if errors occur during the restore process, such as a checksum failure. We can modify the email settings and choose to receive an email only when there are errors or warnings, or to be informed of any outcome.

Step 18: Create the restore schedule

This is where we can create the schedule for our restores.

We can set the restore schedule around the schedule we have created for our backups, and have complete control over the time and frequency of the restores, to suit our disaster recovery requirements.

We can schedule a restore job to verify backups even if the backups themselves do not yet exist; at the time of the restore, SQL Backup Pro will automatically identify the latest backup of the database and restore that. If we have scheduled a set of full, differential, and transaction log backups of a database, SQL Backup Pro will restore each backup in the set in the correct order.
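
The order SQL Backup Pro works out automatically corresponds to the standard native restore pattern, in which every backup except the last is restored WITH NORECOVERY; a sketch with hypothetical file names:

-- Restore the set in order, keeping the database in the restoring state until the end
RESTORE DATABASE [VeryImportant_Verification]
FROM DISK = N'D:\Backups\VeryImportant_Full.bak' WITH NORECOVERY, REPLACE;

RESTORE DATABASE [VeryImportant_Verification]
FROM DISK = N'D:\Backups\VeryImportant_Diff.bak' WITH NORECOVERY;

-- The final restore brings the database online
RESTORE LOG [VeryImportant_Verification]
FROM DISK = N'D:\Backups\VeryImportant_Log.trn' WITH RECOVERY;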

Step 19: Review a summary of the scheduled restore and complete

Before we confirm the restore schedule we can review a summary of all the options we've chosen, and view the SQL script for use with the command line utility or stored procedure. We click 'Finish' to complete the wizard and create the scheduled restore job. The reminder is removed from the Reminders tab and the job is added to the Jobs tab.

Step 20: Review the verification results

Once the restore has run, we can view the results of the restore and database integrity check in the Activity History tab, alongside the results of any backups that may have been performed. To view any of these items in more detail, we would simply click on it, and a detailed summary will pop up.

Backing up a database

This walkthrough of SQL Backup Pro will show you:

  • How to back up a Microsoft SQL Server database
  • How to compress your backup file
  • How to encrypt your backup
  • How to configure backup checks

Step 1: Launch the Back Up wizard

To back up a database we launch the Back Up wizard from the Back Up button on the toolbar.

Step 2: Choose to perform a new backup or use previously saved settings

We now select the server from which we wish to back up the database. We can also choose to use settings from a template that we can customize and configure. In this case, we're going to create a new backup.

Step 3: Choose the database and the type of backup to create

Next, we choose the type of backup we want to create and select the database we want to back up. If full backups have previously been made, we can decide to perform differential or transaction log backups. In this walkthrough, we create a full database backup.

Step 4: Configure backup file settings

In this step, we decide whether to back up to a single file, a single file mirrored to a second location, or split the backup over multiple, smaller files. Splitting the backup across multiple files can reduce the time needed to create it.

Here, we can also specify the file name of the backup or have it named automatically.

If we need to purge older backups as part of the process, we can choose to overwrite existing files or delete existing backup files by age or quantity.

If we want to copy the backup to a network location, we can also specify the destination folder and whether to purge existing backups at that location.

Step 5: Configure compression, encryption, network resilience and optimization settings

Next, we can select one of four compression levels for our backup. If we are unsure which compression level to use, the Compression Analyzer helps us by sampling data to suggest the optimal compression level for our database.

Here, we can also encrypt our data. With SQL Backup Pro, we can choose between 128-bit and 256-bit AES encryption.

SQL Backup Pro lets us optimize the performance of our backup by selecting the number of threads to use for the backup process. We can also choose a maximum transfer size and data block size for memory and disk optimization.

We can also protect our backup process against network outages, which can interrupt backup operations and cause job failure. SQL Backup Pro is designed to minimize the disruption caused by network outages by automatically continuing the data transfer once the connection is re-established. By default, SQL Backup Pro retries connecting to the network after 30 seconds, up to 10 times.

Step 6: Configure backup checks and email notification

This new step gives us the option to test backup files when they are created by including CHECKSUM and RESTORE VERIFYONLY in backup jobs.

CHECKSUM will validate any page checksums and generate a backup checksum. RESTORE VERIFYONLY will check the backup is complete and readable, and will validate the checksums. The results of the CHECKSUM and RESTORE VERIFYONLY will be recorded in the Activity History.

We can choose to be notified by email with a copy of the completion log. By default, email notifications are sent only if errors occur during the backup process, such as a checksum failure. We can modify the email settings and choose to receive an email only when there are errors or warnings, or to be informed of any outcome.

Step 7: Review a summary of the options chosen

Before we start the backup we can review a summary of all the options we've chosen, and view the SQL script for use with the command line utility or stored procedure. We then click Finish to start the backup process.

Step 8: Review the backup

We can now see from the timeline in the GUI that the database backup has been completed.

The activity log will be updated automatically.

Log shipping with SQL Backup

This walkthrough will show you:

  • How to set up log shipping in two minutes
  • How to monitor log shipping with the timeline

Step 1: Select the databases

In step one, we specify our source and destination database. We can choose between restoring to an existing database and creating a new database. If we decide to overwrite an existing database, the restore process will fail if there are existing connections to the destination database. To prevent this kind of problem, you can now check a box to kill any existing connections to the database automatically. This means the restore process can be completed successfully without further intervention.

Step 2: Configure the transaction log backup options

In step two, we choose a backup folder, and if we need to purge older backups as a part of the process, we can delete existing backup files by age or quantity.

In this step, we can also select one of four compression levels to optimize our backups for speed or size. Our in-house testing shows that SQL Backup Pro can compress database backups by up to 95%. This is particularly handy if you have a security requirement to keep a long history of backups.

To optimize the log shipping performance, you can determine the number of threads SQL Backup Pro uses for this operation. Using multiple threads can considerably speed up the backup process.

Here, we can also choose to receive an email notification if an error occurs.

Step 3: Specify a network share and network resilience settings

In step three we specify the network share. This is a storage area that both the source and destination servers can access. We can type the name of the network share to which we want to copy the backup files, or we can browse for the folder. We must then test that the source and destination servers have the correct permissions to access the network share.

We can also adjust the network resilience settings. Network outages are a common problem when transferring data. They can interrupt backup operations and cause job failure. With network resilience, SQL Backup Pro will reattempt an interrupted operation, making your backup process much more reliable. We can specify here how long SQL Backup Pro should wait and how many times it should retry transferring an individual transaction log.

An extra level of resilience makes log shipping with SQL Backup Pro self-healing. If a prolonged network outage occurs, and the standard number of retries is exceeded, SQL Backup Pro keeps a record of the files that failed to copy and defers the log file transfer for a set period (two minutes by default). The transfer becomes a pending task in the Log Copy Queue tab, and SQL Backup Pro will keep attempting to complete the transfer at regular intervals for a maximum of 24 hours before marking it as failed. Once the network outage is resolved, SQL Backup Pro will transfer all the affected transaction log files, starting with the earliest. Log shipping is then brought right up to date, without the need for intervention.

Step 4: Configure the restore options

The settings for restoring transaction log backups can be adjusted in step four.

On completion of the restore operation we have to decide whether we want our restored database to be non-operational or read-only, since the database cannot be in an operational state if we want to restore transaction log files.
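
In native T-SQL terms, these two completion states correspond to WITH NORECOVERY and WITH STANDBY; a sketch with hypothetical paths:

-- Non-operational: the database stays in a restoring state, ready for the next log
RESTORE LOG [VeryImportant]
FROM DISK = N'\\FileServer\LogShipping\VeryImportant_Log.trn'
WITH NORECOVERY;

-- Read-only (standby): readable between restores; the undo file preserves
-- uncommitted transactions so later log backups can still be applied
RESTORE LOG [VeryImportant]
FROM DISK = N'\\FileServer\LogShipping\VeryImportant_Log.trn'
WITH STANDBY = N'E:\Standby\VeryImportant_undo.dat';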

Once the backup files have been restored, SQL Backup Pro moves the files from the network share so they are not processed during the next run. Here, we specify the folder to which we want these processed backup files moved. We can choose to delete files that are over a certain age or quantity. To be notified of any problems, we can also specify an e-mail address.

Step 5: Set schedules

Next, we select the schedules for the transaction log backups. We can set the frequency, and here we choose for transaction log backups to occur every 15 minutes, every day, between 00:00 and 23:59. We base the restore schedule on the backup schedule, with an offset of 2 minutes. Alternatively, we could create a separate schedule for the restores at this stage by clicking Specify a schedule.

Step 6: Summary of options

In the final step we review a summary of our log shipping configuration. If we spot anything that needs correcting, we can go back through the wizard to make changes. If we are happy with our choices, we click Finish to create the SQL Server Agent jobs that will perform the log shipping.

If you would like to see how you can make your log shipping strategy as robust, reliable and easy to set up as demonstrated here, please try our free, fully functional and supported 14-day evaluation of SQL Backup Pro.

Object Level Recovery

This walkthrough will show you:

  • How to work with SQL Object Level Recovery Pro (OLR)
  • How to select .sqb backup files and objects
  • How to recover objects

Step 1: Start SQL Object Level Recovery Pro

Right-click a SQL Server instance in the SQL Backup Pro GUI and select Object Level Recovery to start SQL Object Level Recovery Pro.

Step 2: Specify the destination database

In the Select Destination Database dialog, specify the database you want to restore the object to.

Step 3: Select the backup source

Click Select backup source (next to the green arrow). The Select Backup Source dialog is displayed.

Next, click Add Files to browse for SQL Backup (.sqb) backup files. If you have encrypted your backup, enter the password and click Decrypt.

Now, in the Available backups pane, choose the backup you want to recover objects from. Then click OK.

Step 4: Choose and preview objects

The objects available in the backup source you selected are now listed in the Objects pane. You can preview the schema for any object (and the data for table objects) by highlighting the object in the Objects pane. You can also view the recovery script.

Select the objects you want to recover, and click Recover Objects to view a summary of the objects that will be recovered.

Step 5: View a summary and recover objects

The summary shows you a list of the objects that will be recovered and to which SQL Server and database these objects will be restored. Click Recover to start recovering objects.

Step 6: Check the status of the object recovery operation

A progress window shows you the status of the object recovery operation.

If an object cannot be restored, you can select the object in the progress window to see a more detailed error message.

Handling a Full SQL Server transaction log

The transaction log for a SQL Server database is critical to the operation of the database and the ability to minimize data loss in the event of a disaster. As such, proper management of log backups and sizing of the transaction log is crucial to managing a database in SQL Server.

In severe cases, transaction log mismanagement or sudden, rapid log growth can cause a transaction log to grow and grow and eventually eat up all available space on its drive. At this point, it can grow no more, you'll encounter Error 9002 (the "transaction log full" error), and the database will become read-only.

This walkthrough describes the correct way to respond to this error, reviews the danger of some of the courses of action suggested on various community sites and forums, and offers advice on how best to avoid the occurrence of this problem.

Diagnosing a Full Log

Despite the urgency of this problem, it's important to react calmly, and avoid the sort of "spontaneous" solutions that are covered in the forthcoming section, Mismanagement or What Not To Do. Obviously the pressing concern is to allow SQL Server to continue to write to the log, by making more space available. The first port of call is to establish whether the cause is a lack of log backups. The value of the log_reuse_wait_desc column in the sys.databases catalog view shows the current reason why log space cannot be reused, as shown in Listing 1:

DECLARE @DatabaseName VARCHAR(50);
SET @DatabaseName = 'VeryImportant';

SELECT  name,
        recovery_model_desc,
        log_reuse_wait_desc
FROM    sys.databases
WHERE   name = @DatabaseName;

Listing 1: Examining the value of the log_reuse_wait_desc column

If the value for the log_reuse_wait_desc column is Log Backup then this is the likely cause of the issue. It is possible that more than one issue is preventing log reuse; the sys.databases view will only show one of the reasons. It is therefore possible to resolve one problem, query sys.databases again, and see a different log_reuse_wait reason. We don't have room in this whitepaper to cover all possible reasons, but they are listed in Books Online and described in some detail by Gail Shaw, in her article Why is my transaction log full?. One other value commonly seen for log_reuse_wait is ACTIVE_TRANSACTION, indicating that a long-running or uncommitted transaction is preventing truncation of the log, and so preventing space reuse.

Assuming the value is Log Backup, a query against the backupset table in the msdb database, as shown in Listing 2, will confirm whether or not log backups are being taken on the database, and when the last one was taken.

USE msdb;

SELECT  backup_set_id,
        backup_start_date,
        backup_finish_date,
        backup_size,
        recovery_model,
        [type]
FROM    dbo.backupset
WHERE   database_name = 'DatabaseName';

Listing 2: Determine when the last log backup was taken

In the type column, a D represents a database backup, L a log backup and I a differential backup. If log backups aren't being taken, or are being taken very infrequently, then your best course of action is to take a log backup (assuming the database is operating in FULL or BULK LOGGED recovery model). Hopefully, this will free up substantial space within the log and you can then implement an appropriate log backup scheme, and log file growth management strategy.
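
Assuming the database is in the FULL or BULK LOGGED recovery model, the log backup itself is a one-liner in native T-SQL (hypothetical path); once it completes, the inactive portion of the log becomes available for reuse:

-- Back up the transaction log, allowing inactive VLFs to be reused
BACKUP LOG [VeryImportant]
TO DISK = N'D:\Backups\VeryImportant_Log.trn';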

If, for some reason, it is not possible to perform a log backup due to a lack of disk space, or the time it would take exceeds the acceptable time to resolve the problem, then it might, depending on the disaster recovery policy for the database in question, be acceptable to force a truncation of the log by temporarily switching the database to the SIMPLE recovery model, so that inactive Virtual Log Files (VLFs) in the transaction log can be truncated on CHECKPOINT. You can then switch the recovery model back to FULL and perform a new full database backup (or a differential backup, assuming a full backup was taken at some previous time) to restart the log chain for point-in-time recovery.

Of course, you'll still need to investigate the problem fully, in order to make sure that the space isn't simply eaten up again. One common cause of excessive logging and explosive log growth, for example, is performing bulk operations, such as index rebuilds, using the FULL recovery model. In such cases, it may be beneficial, assuming your Service Level Agreement allows for it, to switch to BULK LOGGED recovery at the start of the index rebuild, to minimize the amount of space required to rebuild the index.

If that is not possible, you might consider adapting your index maintenance so that indexes are reorganized rather than rebuilt, wherever possible. This is not meant to imply that index reorganization is a panacea for logging issues; it isn't. However, an index rebuild is accomplished in a single transaction, which can be very long-running (if the index is big) and prevent large swathes of log space from being reused while the operation proceeds. By contrast, index reorganization proceeds via a series of shorter transactions and so, with frequent log backups, it becomes easier to curtail explosive log growth during these operations.
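
As a sketch, the recovery model switch described above looks like this in T-SQL (hypothetical database name and backup path); the full backup at the end is essential to restart the log chain:

-- Force log truncation by temporarily switching to SIMPLE (this breaks the log chain)
USE [VeryImportant];
ALTER DATABASE [VeryImportant] SET RECOVERY SIMPLE;
CHECKPOINT;  -- inactive VLFs are truncated on checkpoint under SIMPLE

-- Switch back to FULL and restart the log chain with a new full backup
ALTER DATABASE [VeryImportant] SET RECOVERY FULL;
BACKUP DATABASE [VeryImportant]
TO DISK = N'D:\Backups\VeryImportant_Full.bak';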

Bear in mind that if the problem preventing space reuse is anything other than Log Backup, then the technique of temporarily switching to the SIMPLE recovery model won't work, since the records will simply remain part of the active log, preventing truncation.

If a lack of log backups isn't the problem, or taking a log backup doesn't solve the problem, then investigating the cause will require a little more time. If you can easily and quickly make extra space on the log drive, by shifting off other files, adding capacity to the current log drive, or adding an additional log file on a different disk array, this will buy you the breathing space you need to get the database out of read-only mode and perform a log backup. Next, you need to find out what else might be preventing space reuse in the log (Listing 1) and take appropriate action.

Ultimately, having resolved any space reuse issue, you may still be left with a log file that is consuming the vast majority of the space on the drive. As a one-off measure, i.e. assuming steps will be taken to ensure proper management of log growth in the future (see the Proper Log Management section, shortly), it is acceptable to use DBCC SHRINKFILE to reclaim the space used by a bloated transaction log file.

You can either specify a target_size to which to shrink the log file, or you can specify 0 (zero) as the target size and shrink the log to its smallest possible size, and then immediately resize it to a sensible size using ALTER DATABASE. The latter is the recommended way, as it minimizes fragmentation of the log file. This fragmentation issue is the main reason why you should never schedule regular DBCC SHRINKFILE tasks as a means of controlling the size of the log; this is discussed in more detail in the next section.
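
A sketch of that sequence, assuming a logical log file name of VeryImportant_log and a purely illustrative 8,000 MB target size:

USE [VeryImportant];

-- One-off measure: shrink the log to its smallest possible size...
DBCC SHRINKFILE (N'VeryImportant_log', 0);

-- ...then immediately resize it in a single step to minimize VLF fragmentation
ALTER DATABASE [VeryImportant]
MODIFY FILE (NAME = N'VeryImportant_log', SIZE = 8000MB);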

Mismanagement or What Not To Do

Unfortunately, a quick search of the Internet for "Transaction Log Full" will return a number of forum threads, blog posts, and even articles published on seemingly reputable SQL Server sites, which recommend remedial action that is, frankly, dangerous. We'll cover a few of the more popular suggestions here.

Detach database, delete log file

The idea here is that you clear all users off the database, detach the database (or shut it down), delete the log file (or rename it) and then re-attach the database, causing a new log file to be created at whatever size is dictated by the model database. This is arguably the most appalling of all the terrible ways to handle a full transaction log. It can result in the database failing to start, leaving it in the RECOVERY_PENDING state.

Depending on whether or not the database had been cleanly shut down at the time of the log deletion, the database may not be able to perform the UNDO and REDO operations that are a normal part of the database recovery process, because the transaction log is missing, and so can't return the database to a consistent state. When the log file is missing, and the database requires the transaction log to perform crash recovery, the database will fail to start up properly and the only recourse will be to restore the database from the most recent backup available, which will most likely result in data loss.

Creating, detaching, re-attaching, and fixing a suspect database

Under specific circumstances, it may be possible to hack the existing database into a configuration that allows the transaction log to be rebuilt, although it may compromise the integrity of the data contained in the database. This type of operation is, at best, a last-ditch effort that may be used when there is absolutely no other way of recovering the database data, and it is not a recommended practice of the author, or anyone else involved in the authoring of this whitepaper. For an explanation of how to attempt hacking a database back into SQL Server where the transaction log file has been deleted, see Paul Randal's blog post, Creating, detaching, re-attaching, and fixing a suspect database.

Forcing log file truncation

In SQL Server 2000, BACKUP LOG WITH TRUNCATE_ONLY was a supported way of forcing SQL Server to truncate the transaction log while the database was operating in the FULL or BULK LOGGED recovery model, without actually making a backup copy of the contents of the log; the records in the truncated VLFs are simply discarded. So, unlike with a normal log backup, you're destroying your LSN chain and will only be able to restore to a point in time covered by previous log backup files. Also, even though the database is set to FULL (or BULK LOGGED) recovery, it will actually, from that point on, operate in an auto-truncate mode, continuing to truncate inactive VLFs on checkpoint. In order to get the database operating in FULL recovery again, and restart the LSN chain, you'd need to perform a full (or differential) backup.

This command was often used without people realizing the implications it had for disaster recovery, and it was deprecated in SQL Server 2005 and removed from SQL Server 2008. Unfortunately, an even more insidious variation of this technique, which continues to be supported, has crept up to take its place, and that is BACKUP LOG TO DISK='NUL', where NUL is a "virtual file" that discards any data that is written to it. The really nasty twist to this technique is that, unlike with BACKUP LOG WITH TRUNCATE_ONLY, SQL Server is unaware that the log records have simply been discarded. As far as SQL Server is concerned, a log backup has been performed, the log records are safely stored in a backup file so the LSN chain is intact, and any inactive VLFs in the live log can safely be truncated. Any subsequent, conventional log backups will succeed but will be entirely useless from the point of view of disaster recovery since a log backup file is "missing" and so the database can only be restored to some point in time covered by the last standard log backup that was taken before BACKUP LOG TO DISK='NUL' was issued.

Do not use either of these techniques. The right way to "force" log truncation is to temporarily switch the database into the SIMPLE recovery model, as discussed earlier.

Scheduled shrinking of the transaction log

As discussed, there are rare circumstances in which using DBCC SHRINKFILE to reclaim the space used by the transaction log file is an acceptable operation: where log growth occurred due to a lack of management, and where that growth is now being actively managed.

However, the transaction log should never be shrunk using DBCC SHRINKFILE, or a database maintenance plan step to shrink the database, as part of normal, scheduled maintenance operations. The reason for this is that every time you shrink the log, it will need to grow again immediately to store the log records for subsequent transactions. If auto-growth alone is being relied upon for transaction log growth (see the next section for a fuller discussion), excessive VLFs can accumulate in the log file, and this log fragmentation will impact the performance of any process that needs to read the log file and, if fragmentation gets really bad, possibly even the performance of data modifications. Also, since the transaction log cannot take advantage of instant file initialization, all log growths incur the cost of zero-initializing the storage space being allocated.

The best practice for the transaction log file continues to be to size it appropriately up front, so it does not have to grow under normal operations, and then to monitor its usage periodically and grow it manually when necessary, allowing you to choose an appropriate growth size and control the number and size of VLFs that will be added to the log file.

Proper Log Management

In the absence of any unexpected operations or problems that have resulted in unusual log growth (uncommitted transactions, replication problems and so on), if the transaction log associated with a FULL recovery model database fills up, and is forced to grow, there are really only two causes:

  • the size of the log file was too small to support the volume of data changes that were occurring in the database.
  • the frequency of log backups was insufficient to allow rapid reuse of space within the log file.

If you can't increase the frequency of the log backups, the best thing to do is to manually grow the log file to a size that prevents it from having to auto-grow under load, and then leave the log file at that size. Having a large transaction log file that has been properly grown to minimize the number of VLFs is not a bad thing, even if the log file has free space the majority of the time.

When configuring the initial size of the transaction log for a database, it is important to take into account the size of the largest table in the database, and whether or not index maintenance operations will be performed. As a rule of thumb, the transaction log should be sized to 1.5 times the size of the largest index or table in the database, to allow for logging requirements to rebuild the index under FULL recovery.
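
As a starting point for that calculation, a query along the following lines (run in the database in question) returns the approximate reserved size of the largest table; the 1.5 multiplier can then be applied to the result:

-- Approximate reserved size, in MB, of the largest table in the current database
SELECT TOP (1)
        OBJECT_NAME(object_id) AS table_name,
        SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
FROM    sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_mb DESC;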

In addition to the initial sizing requirements, it is important to monitor and adjust the size of the transaction log periodically to fit the size of the database as it grows. There are a couple of problems with the auto-growth settings that a database will inherit from model, which by default grow the log in steps of 10% of its current size:

  • initially, when the log file is small, the incremental growth will be small, resulting in the creation of a large number of small VLFs in the log, causing the fragmentation issues discussed earlier.
  • when the log file is very large, the growth increments will be correspondingly large; since the transaction log has to be zeroed out during initialization, large growth events can take time, and if the log can't be grown fast enough this can result in 9002 (transaction log full) errors and even in the auto-growth timing out and being rolled back.

The auto-growth settings should be configured explicitly to a fixed size that allows the log file to grow quickly, if necessary, while also minimizing the number of VLFs being added to the log file for each growth event.

To minimize the chances of a timeout occurring during normal transaction log growth, you should measure how long it takes to grow the transaction log by a variety of set sizes while the database is operating under normal workload, and based on the current IO subsystem configuration. In cases where the necessary zero-initialization performance characteristics are not known for a database, I recommend, as a general rule, a fixed auto-growth size of 512 MB.
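
Applying that general rule is a single ALTER DATABASE statement (hypothetical database and logical log file names):

-- Replace percentage-based auto-growth with a fixed 512 MB increment
ALTER DATABASE [VeryImportant]
MODIFY FILE (NAME = N'VeryImportant_log', FILEGROWTH = 512MB);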

Ultimately, though, remember that auto-growth should be configured as a safety net only, allowing the system to automatically grow the log file when you are unable to respond manually. Auto-growth is not a replacement for appropriate monitoring and management of the transaction log file size.