Pop Rivett engages in Log Shipping
A lot of people come up to me and say, “Pop, you’ve been around a bit, how do you ensure that your standby SQL Server is always in synch with your production one?” Usually I sigh wearily and tell them that it takes a lifetime’s experience and a natural flair for the task, and would take far too long to explain, but this once I’ve decided to share with you a technique that I’ve always found effective.
It’s based around log shipping – a pretty standard technique for keeping two databases synchronised. In essence, one keeps a spare copy of a production database in ‘standby’ mode, so it can be switched in reasonably quickly if the primary database fails.
Figure 1: Pop attempting to ship an entire tree structure
The first thing to do is ensure that your “live” database is using the ‘full recovery’ model. Next, one creates a ‘standby’ clone by restoring a full backup and then applying all subsequent incremental logs to it, in sequence. The two databases, at that point in time, should be identical. (You can always check this with SQL Compare and SQL Data Compare.)
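Switching the recovery model is a single statement. Here is a sketch for a hypothetical database called mydb (the name used in the examples later in this article):

```sql
-- Put the primary database into the full recovery model, so that
-- transaction log backups capture every transaction.
ALTER DATABASE mydb SET RECOVERY FULL

-- Confirm the current setting (works on SQL Server 2000 and 2005)
SELECT DATABASEPROPERTYEX('mydb', 'Recovery')
```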
Logs are then restored in sequence to the ‘standby’ as they are backed up from the primary server. This means that all transactions performed by the primary server will then be performed on the standby, thereby keeping the two synchronised. If the primary server fails, the standby server can then be brought into operation, though this process is, of necessity, a manual one.
The process, as demonstrated here, comprises the following three basic steps:
- On the primary server (svr1) a backup job is scheduled for the database(s) in question, using the backup procedure, s_BackupAllDatabases, previously described in Pop Rivett Backs Up.
- On the Standby server, a procedure, s_nrSyncDir, copies across from the primary server the latest Full backup and/or any new Log backups.
- On the Standby server, the database(s) in question is restored using the stored procedure, s_RestoreDatabase, previously described in Pop Rivett Restores. It will restore the Full backup followed by the logs.
All of the scripts required to try out this log shipping process are provided in the code download for this article (see the “Code Download” link in the box to the right of the article title).
The ‘Backup’ and ‘Restore’ stored procedures use the following convention for the format of the backup file names so as to ensure that the right files are used in the correct sequence:
- Databasename_Full_yyyymmdd_hhmmss.bak for a full backup
- Databasename_Log_yyyymmdd_hhmmss.bak for an incremental backup
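As an illustration of this convention (not the procedures’ actual code), a name in this format can be built with CONVERT styles 112 (yyyymmdd) and 108 (hh:mm:ss):

```sql
-- Build a file name of the form Databasename_Full_yyyymmdd_hhmmss.bak
DECLARE @FileName VARCHAR(256)
SELECT @FileName = 'mydb' + '_Full_'
    + CONVERT(VARCHAR(8), GETDATE(), 112) + '_'               -- yyyymmdd
    + REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '')   -- hhmmss
    + '.bak'
SELECT @FileName
```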
Let’s step through this process in full detail.
On the Primary server…
1. Create a database called “Admin”
2. In the Admin database, create the DatabaseBackup table
CREATE TABLE DatabaseBackup
    (
    Name VARCHAR(128) PRIMARY KEY NONCLUSTERED ,
    BackupFlagFull VARCHAR(1) NOT NULL CHECK (BackupFlagFull IN ('Y','N')) ,
    BackupFlagLog VARCHAR(1) NOT NULL CHECK (BackupFlagLog IN ('Y','N')) ,
    RetentionPeriodFull datetime NOT NULL ,
    RetentionPeriodLog datetime NOT NULL
    )
You can use this table to control which databases should be backed up and to set the retention period of the backup files.
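For example, to register a database for log shipping you might insert a row like this. The retention values here are my own assumption, following the datetime-offset convention used later for @RetainPeriod, where '19000105' represents 5 days:

```sql
-- Back up mydb in both full and log modes; keep full backups for
-- 5 days and log backups for 2 (offsets from the 1900-01-01 base date,
-- following the @RetainPeriod convention used later in this article).
INSERT INTO DatabaseBackup
    (Name, BackupFlagFull, BackupFlagLog,
     RetentionPeriodFull, RetentionPeriodLog)
VALUES
    ('mydb', 'Y', 'Y', '19000105', '19000102')
```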
3. Create the stored procedure Admin..s_BackupAllDatabases
4. Create the folders C:\Backup\Full\ and C:\Backup\Log\
5. Create jobs to backup the database
a. Full backup scheduled daily:
EXEC s_BackupAllDatabases 'C:\Backup\Full\', 'Full'
b. Log backup scheduled every 15 mins:
EXEC s_BackupAllDatabases 'C:\Backup\Log\', 'Log'
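If you prefer to create the jobs in script rather than through the GUI, a sketch of the full-backup job using the standard msdb job procedures might look like this (the job and schedule names are my own invention):

```sql
USE msdb
EXEC sp_add_job @job_name = 'Log shipping - full backup'
EXEC sp_add_jobstep
    @job_name = 'Log shipping - full backup',
    @step_name = 'Run s_BackupAllDatabases',
    @subsystem = 'TSQL',
    @database_name = 'Admin',
    @command = 'EXEC s_BackupAllDatabases ''C:\Backup\Full\'', ''Full'''
EXEC sp_add_jobschedule
    @job_name = 'Log shipping - full backup',
    @name = 'Daily at 01:00',
    @freq_type = 4,             -- daily
    @freq_interval = 1,         -- every 1 day
    @active_start_time = 10000  -- 01:00:00
EXEC sp_add_jobserver @job_name = 'Log shipping - full backup'
```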
xp_cmdshell needs to be enabled on both the primary and standby servers. During my testing I simply enabled it using the SQL Server Surface Area Configuration tool (under Surface Area Configuration for Features), ran the examples, then disabled it again manually. Generally, however, you would grant the scheduler user the right to enable it, run the job, and then disable it again. You can specify the user that runs the task from Enterprise Manager/SSMS (Edit job step | Advanced tab | Run as user), after creating a user with the necessary rights.
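On SQL Server 2005 you can also toggle xp_cmdshell from T-SQL, which is handy inside a job step:

```sql
-- Enable xp_cmdshell (set the value back to 0 to disable it again)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
```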
On another note – look out for those trailing “\” on the paths in step 5. If you miss them out (as I did the first time) the procedure will appear to “work” but the .bak files will have the wrong names.
—End Editor’s Notes—
On the Standby server…
1. Create the folders c:\BackupCopy\Full\, c:\BackupCopy\Log\ and c:\BackupCopy\Log\Archive\
2. Synchronise the SQL Server login IDs between the primary and standby servers, either by scripting the login IDs from the primary server and then running the script on the standby server to recreate them (this is easier with login IDs that use NT authentication) or, if your standby server is being used for nothing else, by backing up the master database on the primary server and restoring it to the standby.
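On SQL Server 2005, one way to script SQL logins so that they keep the same SID and password on the standby is to generate CREATE LOGIN statements from the primary’s catalog, then run the generated output on the standby. This is only a sketch; I am assuming the sys.fn_varbintohexstr helper is available to turn varbinary values into 0x… literals:

```sql
-- Generate CREATE LOGIN statements that preserve each SQL login's
-- SID and password hash, so database users re-map correctly on the
-- standby after a failover.
SELECT 'CREATE LOGIN ' + QUOTENAME(name)
    + ' WITH PASSWORD = ' + sys.fn_varbintohexstr(password_hash)
    + ' HASHED, SID = ' + sys.fn_varbintohexstr(sid)
    + ', CHECK_POLICY = OFF'
FROM sys.sql_logins
WHERE name <> 'sa'
```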
3. Create a database called Admin
4. In the Admin database, create the stored procedures s_nrSyncDir and s_RestoreDatabase
5. Create a job to copy all backups from the source server. Schedule it to run every 5 minutes, or however often synchronisation is required. For example:
EXEC s_nrSyncDir @LocalDir1 = 'c:\BackupCopy\Full\' ,
                 @LocalDir2 = NULL ,
                 @RemoteDir = '\\svr1\c$\Backup\Full\' ,
                 @FileMask = '*.bak' ,
                 @RetainPeriod = '19000105'

EXEC s_nrSyncDir @LocalDir1 = 'c:\BackupCopy\Log\' ,
                 @LocalDir2 = 'c:\BackupCopy\Log\Archive\' ,
                 @RemoteDir = '\\svr1\c$\Backup\Log\' ,
                 @FileMask = '*.bak' ,
                 @RetainPeriod = '19000105'

/* This will copy all files more recent than @RetainPeriod (5 days)
   from @RemoteDir to @LocalDir1 that are not found in @LocalDir1
   or @LocalDir2 */
6. Copy the latest full backup of the database from c:\BackupCopy\Full\ to c:\BackupCopy\Log\
7. Schedule the following job to run, or add it as a second task to the job above. I would suggest creating a separate job for copying the full backups, and adding this restore step to the log file copy job.
EXEC s_RestoreDatabase
    @SourcePath = 'c:\BackupCopy\Log\' ,
    @archivePath = 'c:\BackupCopy\Log\Archive\' ,
    @DataPath = 'c:\sql\data\' ,
    @LogPath = 'c:\sql\log\' ,
    @recover = 'norecovery' ,
    @recipients = '' ,
    @Database = 'mydb'
This will now drop the database mydb if it exists, restore the full backup (moving the data and log files to the requested directories) and restore all the logs. Note the following:
- The database will be left “recovering” and ready to apply further logs. It will be inaccessible (“restoring” will appear after its name). If you execute a RESTORE DATABASE mydb WITH RECOVERY command, it will bring the database on-line. If you do this, however, you will have to restore another full backup (which will drop the database and recreate it) before you can restore more logs. Alternatively, you can restore to standby, which will enable you to access the database in read-only mode; you will have to specify the rollback (undo) file to restore further logs.
- Any logs timestamped before the full backup are moved to c:\BackupCopy\Log\Archive\ before the full database restore.
- After each file is restored it is moved to c:\BackupCopy\Log\Archive\. Hence the requirement for two local folders in the s_nrSyncDir call in step 5.
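For reference, the two ways of bringing the standby database on-line described above look like this (the log file name here is illustrative):

```sql
-- Option 1: full recovery - the database comes on-line read-write,
-- but no further logs can be applied afterwards.
RESTORE DATABASE mydb WITH RECOVERY

-- Option 2: restore with STANDBY - the database is readable between
-- restores, and the undo (rollback) file allows later log restores.
RESTORE LOG mydb
    FROM DISK = 'c:\BackupCopy\Log\mydb_Log_20070101_120000.bak'
    WITH STANDBY = 'c:\sql\log\mydb_undo.dat'
```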
In the @RemoteDir assignment in step 5, C$ refers to an admin share. I did not have permission to access this, so I simply created a share called Backup on the C:\Backup folder on the primary server (by right-clicking Shares in the Computer Management Windows admin tool, and selecting Create New Share to start the wizard), and pointed @RemoteDir at that share instead.
Just for testing purposes I gave access permissions on this folder to Everyone. Of course, in general you would merely grant the necessary permissions to the process running the s_nrSyncDir procedure.
The definition of the #Files temporary table, defined in s_RestoreDatabase, works for SQL Server 2000 but that definition has changed in SQL Server 2005. If you are running SQL 2005, you will need to replace the definition of that table in the s_RestoreDatabase.sql script with the one provided in Files.sql
—End Editor’s Notes—
If the log shipping breaks down, because of a corrupt log file or because someone has interfered with the source database, then it must be restarted as follows:
- Stop the restore job.
- Copy the latest full backup into c:\BackupCopy\Log\ (as in step 6).
- Recover the database (RESTORE DATABASE mydb WITH RECOVERY); this should really be included in the restore procedure.
- Restart the restore job.
Figure 2: Pop Rivett well on the way to success