I used to dread the phone call that went something like this:
“Uh, hi Grant. You know, yesterday, I accidentally dropped/edited/deleted/updated a row/table and now I need you to restore the database… What?… No, I don’t want to lose the last 17 hours of data… Yes, I know the CEO thinks we should make a profit, but what does that have to do with it?… Oh, that sale was recorded yesterday, wasn’t it? But I just want one row/table/index. Don’t you have a log or something?”
When that call came through and it was code or an index or something else that I had in source control and could easily retrieve, it really wasn’t a big deal. I’d go to the source control system, retrieve the appropriate piece of code, and reapply it; usually after I’d revoked that person’s access to the production system. If it was non-volatile data, such as look-up information that would be available in a full backup, I could use SQL Data Compare to compare the production database directly against the last backup file and retrieve the information. But when the call was about volatile data, sets of data that are constantly changing with lots of dependencies across tables, everything changed.
This one time (no, not at band camp), I had to sort out a problem caused by a person with far too much access to production, who was able to get into the server and delete a few rows from a few key tables. It wasn’t much data, but it was a record of millions of dollars’ worth of transactions. Because the idiot didn’t realize what they had done until the next day, and because the system had close to 24 hours of other committed transactions, I couldn’t simply retrieve their data from a previous full backup. Nor could I compare this information to some other data source or re-key it, because of the dependencies between the tables. I had to do a point-in-time recovery, but not one I could do against our production system without losing all our committed transactions and causing all sorts of downtime. I needed a solution that would let me recover the database to a point prior to the mistake, in a place other than production, where I could then retrieve the appropriate data in as timely a manner as possible.
The Old Days
Shortly after hanging up the phone I went into a tirade followed by a crying jag. I know that I need to restore my database to a point in time, which means I have to have full and log backups. I also know that I have to do this away from my production system which has to remain online and active. I know that I have to get my data from that point in time and move it to my production system without taking that system down or modifying all the other data within it. Once I’ve done all that, the system should be back online with the missing data intact.
Let’s walk through everything you’d need to know and do in order to fix this issue. The first question, and the biggest, since it determines how much data you can get back: is the database using the Full Recovery model? Great, it is. That means you have log backups, and you can restore this database up to the moment before our friend on the phone deleted that mission-critical information (let’s refer to it as the Event). Next question: just how big is this database anyway? Ooooh, that big? Not good. Time to look for disk space. Wasn’t the QA team done with the latest release of Project VikingThunder? Good, so you can delete the databases on the QA machine and create some space for restoring from production. Do you have the backup files? Do you have the log backup files? Do you have to go to tape or off-site storage to get them back? Gathering all the necessary full, differential and log backups is the first step.
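If you’d rather check the recovery model from a query window than take anyone’s word for it, it’s exposed through sys.databases (the database name here is just an example):

```sql
-- Confirm the database is using the FULL recovery model,
-- a prerequisite for point-in-time restores from log backups
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDB';
```

If recovery_model_desc comes back as SIMPLE, there are no log backups to apply, and a point-in-time restore is off the table.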
Finally, you can start the process. First, you have to restore the database. We’re not, for the moment, even going to worry about whether or not you’ve tested the restore of this backup. Let’s assume it works. You restore the database, but leave it in a restoring state, and then you start applying log backups and/or differential backups until you get to the moment before the Event. Finally, hours later, you have a database that looks basically the same as it did yesterday, pre-Event. Now where did you put that data-differencing script? You know the one. You pulled it off some web site somewhere, and it shows you which columns have different data between two databases. Ah, there it is, but it’s written so that you have to modify it for each table you want to compare. Another stretch of time, and a curse or two later, you have the script working. Finally, you’re able to compare the various tables and generate multiple scripts that you can combine. Now you can restore the differences from the recreated database on top of the tables in production. But first, you need to test the script. What does that mean? You need to make another copy of the production database. Let’s just replace the one we had on QA, and hopefully the scripts we’ve generated will work the first time so that we won’t have to go back to the point-in-time database again, although that’s always possible.
You run the script and now production is back online. The entire day is shot and now the phone is ringing again:
“Yeah, hi, this is the QA department. What happened to the QA server? We were testing VikingThunder… Yeah, well, you need to get that back online right now. We were getting ready to demo it for the CEO tomorrow…”
Welcome to the 21st Century
That wasn’t any fun, was it? But today we have lots of options available that will make that process much, much less painful. You still need to have backups in place. You need to have your database in full recovery and be backing up the log as well in order to restore your database to a point in time. You should also be testing your backups to ensure they will work when needed. But, with a good set of tested backups in place, the process of recovering data from a pre-Event database should be quite simple.
We’re going to make use of Red Gate SQL Virtual Restore. Using SQL Virtual Restore immediately solves several of the problems we ran into during the old way of recovering the data. First, we don’t have to sweat disk space. We’ve already allocated space for the backup file, and SQL Virtual Restore only needs to define a very small set of place-holder files for the database, frequently no more than a few megabytes in size. SQL Virtual Restore, as the name implies, doesn’t actually restore the database. Instead, a file filter process runs between SQL Server and the file system, which makes the SQL Server process think that it’s accessing standard database files, when in fact it’s accessing the backup. In order to make this work, data and log files are created for the databases recovered using SQL Virtual Restore. These place-holder files are very small because they don’t store all the data in the database, just enough information to define the data files. From there, any changes to the data stored in the backup file are stored in these place-holder files. In our case, with a point-in-time recovery, the place-holder files also have to store the changes from the logs covering the period between the previous full backup and the Event. On most OLTP systems, this is not much space at all, especially when compared to the full database. Because of that, our second problem, where to store the data, largely goes away. We could just restore to a copy of the database right back on the production system; it’s that small. That said, you’re still better off going to a different server to avoid resource contention on production. Either way, you won’t need to worry about interfering with Project VikingThunder.
Next, instead of trying to roll our own comparison script, we’ll make use of SQL Data Compare. Using this tool we can quickly and simply identify the pertinent differences between the two databases, and a single synchronization script can be generated very quickly.
SQL Virtual Restore is faster than a real restore operation. It’s not a lot faster, but shaving 25-35% off the restore time (the approximate savings I’ve seen during testing) means you’re getting your production system fixed that much faster.
By using SQL Virtual Restore we can also create a copy of the production database from a full backup that will be very small and quickly recoverable so that we can test the script generated from SQL Data Compare. Regardless of how good SQL Data Compare may be at generating scripts, you’ll still want to test these prior to running them in production. This second virtual copy of the production system is ideal for this requirement.
For the purposes of our test, I’ve created a little database called MyDB. It has a pair of tables, ParentTable and ChildTable. I have a full backup and a couple of log backups available. In my case, the Event occurred when our friend on the phone deleted several rows from ChildTable. We’ll now work on restoring that table to a point in time in order to recover those rows.
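For reference, a minimal sketch of that test setup might look like the following; the column definitions and backup paths are my assumptions for illustration, not a script from the article’s environment:

```sql
-- Sketch of the MyDB test database: two related tables,
-- full recovery, one full backup and one log backup
CREATE DATABASE MyDB;
GO
ALTER DATABASE MyDB SET RECOVERY FULL;
GO
USE MyDB;
GO
CREATE TABLE ParentTable (
    ParentId   INT IDENTITY(1,1) PRIMARY KEY,
    ParentName VARCHAR(50) NOT NULL);

CREATE TABLE ChildTable (
    ChildId   INT IDENTITY(1,1) PRIMARY KEY,
    ParentId  INT NOT NULL REFERENCES ParentTable (ParentId),
    ChildName VARCHAR(50) NOT NULL);
GO
-- Placeholder backup locations
BACKUP DATABASE MyDB TO DISK = N'C:\Backups\MyDB_Full.bak';
BACKUP LOG MyDB TO DISK = N'C:\Backups\MyDB_Log1.trn';
```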
Setting up the Virtual Database
SQL Virtual Restore works just fine with native backups. It also works very well with Red Gate SQL Backup compressed files and SQL Hyperbac compressed files. Regardless of the source of the backup, the process for restoring it will be basically the same. I’ll walk through using the GUI first and then I’ll show the same thing using TSQL commands, which I prefer because you can automate your restores that way. Also, the GUI will only allow you to do complete restores from data and log files. In order to do a point in time recovery, you must use TSQL.
SQL Virtual Restore GUI
You can launch the GUI for SQL Virtual Restore from your Start menu. Better still, if you’re like me and one of the first things you open in the morning, and the last thing you close before you shut down, is SQL Server Management Studio, you can access it from the Tools menu there. Opening it up you’ll see the following screen:
As you can see, this is a fairly standard introduction and connection window and the start of the wizard process for creating a virtual database. You can connect to the local instance that has the Hyperbac Service installed on it. The Hyperbac Service runs as a process that enables us to create the virtual database. You can connect through standard means, either Active Directory style Windows authentication, or using a SQL Server login. Whichever you use, select the instance that you’re going to connect to and click the Next button. This will open up the backup file selection window:
This is a simple window to deal with. Basically, you need to add valid backup files to it: native SQL Server backups, SQL Backup files, or Hyperbac files. Clicking the “Add Files…” button opens a standard file selection window. Selecting the full backup and log backup files is very simple. Once you’re done, your window will look something like this:
With the files you need selected, there’s nothing else to do at this step. As you can see, you can remove all the files by clicking the “Remove All” button. You can also select individual files and click the “Remove” button to get them off the list. Once you are satisfied that you have everything, click the Next button. This will open the next step in the wizard, specifying options about the database:
There are a number of choices you can make to complete the restore. First, you can choose to overwrite an existing database with this virtual restore, or create a new database. For our purposes, recovering a lost table in a production environment, overwriting our production database would be a pretty foolish thing to do. You can also determine where you’re placing the data files for the virtual database. The choices are fairly clear: you can use the default directories as defined within SQL Server; you can use the directories defined by the files within the backup itself; or, as I have done, you can choose to put those files in some other location. There is no right, or better, choice here, just the one that best suits your needs.

If you’re unsure of the consistency of the backup, you can elect to include a DBCC check as part of the restore. When that option is selected, it will give you the opportunity to immediately drop the database after the consistency check. Again, since we want to access this database, that would be a poor choice in our situation.

Finally, you choose what state you want to leave the database in after the restore operation is done. For our first example with the GUI, I’m going to go ahead and restore the database and leave it in an operational state using WITH RECOVERY. But you could, if you wanted, use the GUI to get the database and some set of logs restored, but leave them in a recovering state. Then you could exit the GUI, complete the restore of logs to a point in time, and recover the database. Alternatively, if you’re very unsure of when you want to end the recovery, you can specify WITH STANDBY so that you can take a look at the database to determine whether more log files should be restored, or whether you already have the data you need in the proper state. Once that determination is made, you can recover the database manually. The right choice here again depends on your situation.
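As a sketch of that last option (the file names here are placeholders), a log restore using WITH STANDBY leaves the database readable between restores:

```sql
-- Restore a log backup but leave the database in read-only STANDBY,
-- so it can be inspected before deciding whether to restore more logs
RESTORE LOG [MyDB_Virtual1]
FROM DISK = N'C:\Backups\MyDB_Log1.trn'
WITH STANDBY = N'D:\Data\MyDB_Virtual1_undo.dat';

-- Once satisfied, bring the database fully online
RESTORE DATABASE [MyDB_Virtual1] WITH RECOVERY;
```

The undo file named in the STANDBY clause holds the uncommitted transactions that were rolled back to make the database readable, so further log restores can still be applied.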
Clicking the “Next” button brings up a summary window:
This window simply shows you all the choices you’ve made in the previous windows. If you’ve made a bad choice, you can use the “Back” button to go back and modify your selections. You can also click the “View Scripts…” button to see how SQL Virtual Restore is going to execute the database restore process. We’ll go over how to perform this with TSQL in the next section. If you click the “Restore” button, your database will be restored and you will see a summary of the actions performed and the space used by this process:
It’s that easy to restore the database using the GUI. Now let’s get into the guts and see how to do the same thing using TSQL statements so that we can restore to a point in time.
SQL Virtual Restore to a Point In Time
If you are already familiar with using TSQL to restore databases, you may not learn much new in this section. You’ll need only the slightest variation in your scripts in order to create a virtual database rather than a real one from TSQL statements. Here is my first TSQL statement for this restore process:
RESTORE DATABASE [MyDB_Virtual1] FROM
DISK = N'C:\Backups\MyDB_Full.bak' -- placeholder path to the full backup file
WITH MOVE N'MyDB' TO N'D:\Data\MyDB_MyDB_Virtual1.vmdf',
MOVE N'MyDB_log' TO N'D:\Data\MyDB_log_MyDB_Virtual1.vldf',
NORECOVERY;
The one difference between this script and a perfectly normal restore script is the modification of the file extensions on the virtual database, adding the letter “v” to them. This is a signal to the Hyperbac Engine that these files are special. This will cause the Hyperbac Engine to create a virtual database. The TSQL script represents exactly the same process as the one that we kicked off from the GUI.
From there, the rest of the restore operations with the logs will look exactly as you’d expect:
RESTORE LOG [MyDB_Virtual1] FROM
DISK = N'C:\Backups\MyDB_Log1.trn' -- placeholder path to the first log backup
WITH NORECOVERY;
RESTORE LOG [MyDB_Virtual1] FROM
DISK = N'C:\Backups\MyDB_Log2.trn' -- placeholder path to the second log backup
WITH NORECOVERY, STOPAT = '3/28/2011 17:28:00';
RESTORE DATABASE [MyDB_Virtual1] WITH RECOVERY;
That’s a perfectly normal set of restore scripts for getting to a point in time. No surprises. This means that any automation you’ve created around your scripts will need only very minor modifications to create a virtual database and recover it to a point in time.
Now that we have a database restored to just prior to the Event, let’s do a comparison of the data using Red Gate SQL Data Compare.
Moving Data from the Virtual Database
You can connect up to the virtual database using SQL Data Compare exactly as you would any other database. You specify the server and select the database, then do the same for the production database and you should see an opening window similar to this one:
We’re assuming that the structures of the two databases are the same, so we should be able to simply use the defaults in SQL Data Compare. Running the comparison shows the differences in the data:
You can again use the Synchronization Wizard to move this data. SQL Data Compare can also be automated through the command line. In a real production environment, if a single row or a few rows were modified by the Event and you want to move only those rows from the virtual database, you’ll probably want to go through the GUI so that you can pick and choose which rows are moved or modified, rather than letting it all happen automatically. You’ll also want to test the script prior to running it in production. Since you’ve got a second virtual database standing by, this testing is quick and easy.
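As a rough sketch of that command-line automation, generating a synchronization script from the virtual database against production might look like this; the server names and script path are placeholders, and the switch names are as I recall them from the version I used, so verify them against your own SQL Data Compare documentation:

```shell
# Compare the point-in-time virtual database (source) against production
# (target) and write a sync script for review, rather than syncing directly
sqldatacompare /Server1:RecoveryServer /Database1:MyDB_Virtual1 ^
  /Server2:ProductionServer /Database2:MyDB ^
  /ScriptFile:"C:\Scripts\SyncChildTable.sql"
```

Writing to a script file instead of synchronizing directly gives you the chance to test the script against the second virtual copy before touching production.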
Yes, losing data in a production system is a major problem, especially when it’s only a sub-set of the data. But using SQL Virtual Restore in combination with the other database tools from Red Gate should make recovery much easier and faster than it used to be. Just remember, you must have good backups and good log backups; otherwise this entire approach is not possible. That’s one less reason to fear the phone ringing.
If you’ve found this article useful, you can download a free 14-day trial of Red Gate SQL Virtual Restore. Restore, verify, and optionally drop your backups in one automatic process. Get it here.