SQL Server’s RESTORE statement is insanely powerful – not just because it’s the thing that saves you in a disaster (assuming you have regular backups in place), but because it’s also incredibly versatile. So versatile, in fact, that I’d wager most DBAs only use a fraction of the options and capabilities it provides.
And, to that end, I wanted to take a multi-part look at how just one or two rarely used facets of the RESTORE command can be used to great benefit when it comes to increasing overall disaster preparedness.
Taking RESTORE VERIFYONLY For a Spin
I’m guessing that many DBAs have bumped into the VERIFYONLY argument of the RESTORE statement – as it’s something that many DBAs will explicitly implement themselves as part of a scripted, regular backup of their databases, or it’s (hopefully) something they’re already getting with their backups through whatever GUI they use to schedule regular backups.
But, just to make sure we’re on the same page, here’s what Books Online says about it:
Verifies the backup but does not restore it, and checks to see that the backup set is complete and the entire backup is readable. However, RESTORE VERIFYONLY does not attempt to verify the structure of the data contained in the backup volumes. In Microsoft SQL Server, RESTORE VERIFYONLY has been enhanced to do additional checking on the data to increase the probability of detecting errors. The goal is to be as close to an actual restore operation as practical. For more information, see the Remarks.
From within SQL Server Management Studio, the option to run this command is ‘baked in’ to ad-hoc backups – as shown in Figure 1.
Figure 1: The ‘Verify backup when finished’ checkbox executes RESTORE VERIFYONLY under the covers.
There’s also a similar GUI option called ‘Verify backup integrity’ in the ‘Backup Database Task’ that can be defined as part of a SQL Server Maintenance Plan. Likewise, there is typically a similar option in just about every 3rd party backup tool that I’ve used.
To that end, it’s safe to say that most DBAs have, therefore, in some way or another interacted with the RESTORE VERIFYONLY command and argument.
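For anyone who has only ever checked the GUI checkbox, here’s a minimal sketch of the same operation in T-SQL – where the file path and database name are hypothetical placeholders you’d swap for your own:

```sql
-- Verify that a backup file is complete and readable WITHOUT restoring it.
-- The path below is a hypothetical placeholder.
RESTORE VERIFYONLY
FROM DISK = N'D:\SQLBackups\WidgetCo_FULL.bak'
WITH CHECKSUM;  -- CHECKSUM adds page-level validation IF the backup was
                -- originally taken WITH CHECKSUM
```

If the media is readable, SQL Server reports that the backup set is valid; otherwise the statement raises an error – which makes it easy to wire into a scripted backup routine.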
But, there’s a very powerful, additional argument that can be used with RESTORE VERIFYONLY – one that I’d wager hardly anyone uses: the option to specify WITH LOADHISTORY.
RESTORE VERIFYONLY WITH LOADHISTORY and Off-Box Backups
I’m a huge proponent of off-box backups – meaning that I advocate keeping off-box COPIES of mission-critical data as a best practice. Stated more succinctly, if you want to OPTIMIZE your protection from a disaster, you should do (at least) the following:
A) Regular, on-box backups (FULL, DIFF, T-LOG) that are held for as long as feasible or possible.
B) Copies of those same backups stored in one or more secondary locations – both as a redundancy effort and as a means of keeping backups that extend LONGER into the past.
And here’s how that typically transpires. Usually your production system is running on what I call ‘premium’ disk – in the sense that the disks on your production systems are (hopefully) MORE expensive than the other kinds of disk you have ‘laying around’ within your organization. Consequently, that disk usually has a SMALLER capacity than, say, a bunch of RAID1/RAID5/RAID10 SATA drives that might be sitting around for backing up file-shares, backups, and so on.
To that end, it would seem to make sense to put all of your backups on those ‘cheaper’ storage devices. Except that if you do run into a disaster, the last thing you want to do is ‘slurp’ large backups over the wire from a set of slow disks. Not only will those slow disks potentially be a bottleneck, but network transfer will almost always seriously limit your ability to recover in a timely manner if your databases are more than a few GBs in size.
Consequently, I typically advocate keeping one to two full sets of backups on-box – where they can be EASILY and QUICKLY accessed in the case of a disaster. Then, to increase redundancy (keeping backups and data on the same server is a recipe for disaster if something ugly happens to that server) and to increase the longevity of backups, I advocate copying backups to a secondary box or server – where backups can typically be kept LONGER because there should, hopefully, be much more space available. In this fashion, immediate/typical disasters – where the DB crashes and you need to recover right away – are best addressed by on-box backups. Other kinds of ‘data purity’ problems that take a long time to figure out or surface can still be addressed – but will require the additional effort of slurping data back over the wire to the primary server. And, of course, cases where your server catches fire, or a drive explodes and takes out all the drives around it, can also still be addressed – by grabbing those off-box backups or pulling them to yet another location.
And that, frankly, is where you can easily start working with the idea of transforming off-box backups into a sort of luke-warm standby. Because the cool thing about RESTORE VERIFYONLY WITH LOADHISTORY is that not only does it ‘verify’ that the media can be read, but it can also be used as a means to IMPORT backups into the msdb database on a server OTHER than the server where the backup was made.
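In practice, that looks something like the following sketch – run on the SECONDARY server after a backup file has been copied over (the path is, again, a hypothetical placeholder):

```sql
-- Run on the STANDBY/SECONDARY server, not the server that took the backup.
-- Verifies the copied media AND records the backup's details (database,
-- backup type, LSNs, finish time, etc.) into the local msdb history tables.
RESTORE VERIFYONLY
FROM DISK = N'E:\OffBoxBackups\WidgetCo_FULL.bak'
WITH LOADHISTORY;
```

The net effect is that the standby server’s msdb ‘knows about’ backups it never actually took – which is exactly what makes GUI-driven restores possible there later.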
So, in this sense, one option for off-box backups would be to push them to a secondary SQL Server that just sits by, idly, with the latest copies of all backups and which could be used in a worst-case scenario in which a primary server caught fire, blew a back-plane, or whatever.
Remote Off-Box Backups
Such an approach (i.e., the creation of a ‘luke-warm’ backup server) is NOT as optimal as a ‘warmer’ redundancy strategy such as Mirroring (or even Log Shipping), but it can serve as an additional spare or backup option in cases where other HA solutions aren’t affordable or merited – but where a contingency still needs to exist for some sort of ‘failover’ scenario. In this regard, ONE option for creating a luke-warm failover server would be to send copies of backups to an idle server, where a regular process could check for newly copied/added backups and then run RESTORE VERIFYONLY WITH LOADHISTORY to pull all backup details into the local msdb – making those backups easier to restore from the GUI in the case of a disaster.
Personally, I’m not sure I’d really advocate this approach (i.e., the one I just described) – simply because setting it up would be a bit of effort, and the last thing you want to be doing when recovering from a disaster is messing around with the GUI. But I did want to point it out as an option – because it might make sense in some environments under the right circumstances. That, and it serves as a semi-decent example of just how versatile the RESTORE statement can be in enabling some interesting scenarios that many DBAs probably don’t think of.
As such, in subsequent posts (i.e., Parts II and III) we’ll look at some BETTER options for ‘luke-warm’ failover servers, address the licensing considerations behind them (i.e., talk about WHY they can make more sense, under the right circumstances, than more expensive, more highly-available cousins such as Mirroring, Log Shipping, Clustering, and Replication), and talk about ways to minimize recovery time with them – AS WELL as ways in which they can be used as off-site options that provide additional coverage in ‘smoke and rubble’ scenarios.