Ensuring Up-to-the-Minute Database Recovery

A workaround for SQL Server 7.0

A little-known aspect of SQL Server 7.0 can prevent you from getting up-to-the-minute database recovery. But you can make some simple system-configuration changes to ensure that your organization's business data is protected.

The problem is that in SQL Server 7.0, if you can't access the primary data file for a database and you try to use backup log prod with NO_TRUNCATE, you'll see a series of error messages:

Server: Msg 3446, Level 16, State 1, Line 1
Primary file not available for database 'prod'.
Server: Msg 3013, Level 16, State 1, Line 1
Backup or restore operation terminating abnormally.

When you try to restore a broken database, a bug in SQL Server 7.0 prevents you from accessing or restoring any log records that SQL Server has written since the previous log backup. But don't panic. You can make some simple changes that will let you recover your data.

Follow the Rules

First, you need to ensure that you're following good database configuration practices. I follow three basic rules when I set up the physical file locations for a database.

  1. Keep the log and the data on separate physical drives.
  2. Keep the data and the master database on separate physical drives.
  3. Mirror the log.

These simple rules will help improve performance and form the foundation for no-data-loss recoveries.

Backing Up the Database

Suppose you back up your database weekly, on Sunday at midnight. Transaction log backups occur every six hours, at noon, 6:00 p.m., midnight, and 6:00 a.m. On Wednesday at 5:00 p.m., you discover that the data drive has failed, corrupting the database.

You need to recover the production database from the log and database backups. Because the last transaction log backup executed at noon and it's now 5:00 p.m., the transactions that have completed since noon are in the transaction log, but not on a tape. So, you can recover only transactions that committed before noon Wednesday, which is unacceptable. Wouldn't you love to have a transaction log backup from the point after the database broke?

An ordinary backup log scans the database's system tables to discover the location of the log. But in this example, because the database is inaccessible, an ordinary backup log won't work. However, SQL Server lets you do a special log backup when you use the NO_TRUNCATE option of the backup command—even after the database is inaccessible.

How can SQL Server find the transaction log when the database is unavailable? In SQL Server 6.x and earlier, SQL Server can locate the transaction logs through the master database's sysdevices and sysusages tables, which hold the log location information. When you use

backup log with NO_TRUNCATE 

SQL Server retrieves the log location from the master database instead of the prod database system tables. The SQL syntax for SQL Server 6.5 and earlier is

dump log with NO_TRUNCATE 

In SQL Server 7.0, Microsoft changed the syntax to

backup log with NO_TRUNCATE 

This command bypasses the database and talks to the master database, which directs the log backup in the absence of the data files.

When a drive corruption invalidates the data files, you need to use

Backup log prod to <backup_device> with NO_TRUNCATE 

Then, you can load the database backup, each of the previously backed-up transaction logs, and finally, the last special log backup that you made after the disk crash. If you follow this procedure, you won't lose any committed transactions.
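
To make the procedure concrete, here is a sketch of the full sequence, assuming one full backup and one routine log backup on disk; the backup file names and paths are hypothetical, and your device names will differ:

-- Capture the tail of the log even though the data file is gone
Backup log prod to disk = 'e:\backup\prod_tail.bak' with NO_TRUNCATE

-- Restore the Sunday full backup, leaving the database unrecovered
Restore database prod from disk = 'e:\backup\prod_full.bak' with NORECOVERY

-- Restore each routine log backup in sequence (repeat for every log backup)
Restore log prod from disk = 'e:\backup\prod_log1.bak' with NORECOVERY

-- Apply the special tail backup last, then recover the database
Restore log prod from disk = 'e:\backup\prod_tail.bak' with RECOVERY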

Recovery in SQL Server 6.5

In SQL Server 6.5, the conditions that let this backup plan work are

  1. You need an intact copy of the log. If you mirror the log, as I recommended in rule 3, you will have an intact copy.
  2. The SQL Server machine must be running, which means the master database must be available. To ensure that it is, keep the data away from the master database, as I recommended in rule 2. The scenario in this example is designed to keep the master database and transaction logs undamaged even if the data becomes corrupt. But if you keep the data and the master database together and both become corrupt, you can't use backup log with NO_TRUNCATE to restore your database.

The three rules I recommended help preserve the capability to restore the database. Figure 1 illustrates a scenario in which a malfunction of the data drive (the D drive) also damages the transaction logs, so you lose any transactions that haven't been copied to tape or disk.

Figure 2 illustrates a scenario in which the master database and all user databases reside on the C drive, and all transaction logs are on the D drive. In this scenario, when the C drive goes down, you still lose transactions because the master database is unavailable for the backup.

If you use SQL Server 6.5 and earlier versions, you can ensure the ability to recover your database with no loss of data if you put the master database and transaction logs together on one drive and all data on another drive, as Figure 3 shows. However, if you use SQL Server 7.0, following this configuration isn't sufficient to protect your data.

Recovery in SQL Server 7.0

If you use SQL Server 7.0, you need to follow a different practice to ensure up-to-the-minute database recovery. In SQL Server 7.0, the master database no longer contains information about the location of each transaction log, so you can't recover all transactions. In SQL Server 6.5 and earlier, the sysdevices and sysusages tables contain this information. In SQL Server 7.0, the sysusages table doesn't exist, and the sysdevices table, which contains backup devices, is only for backward compatibility. SQL Server 7.0 keeps information about the transaction log location in the sysfiles table in each database.

To get up-to-the-minute recovery with no data loss in SQL Server 7.0, you need to protect the database system tables in the same way that you protect the transaction log. You need to place the system tables with the log and put the data in a separate location. You can also keep the system tables in a location separate from the transaction logs, as long as the system tables are separate from the user tables and indexes.

The primary filegroup contains the system tables and, in a default configuration, the data. If you mirror the system tables to protect them, you also mirror the entire database, which is an expensive proposition. I propose a scheme that separates the system tables from the data, so that you can mirror the system tables with the transaction log and store the data separately. You still need to keep the master database separate from the data. To set up a typical configuration in SQL Server 7.0, you can use the CREATE DATABASE command as in Listing 1. This configuration does not include the changes necessary to recover all your data.
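
Listing 1 isn't reproduced here, but a default-style CREATE DATABASE along these lines (one Primary file, PRODSYS, on the data drive and the log file, PRODLOG, on another drive; the file sizes below are assumptions) might look like this:

-- Default-style layout: system tables and user data share the one Primary file
Create database prod
on primary
  (name = PRODSYS,
   filename = 'd:\mssql7\data\prodsys.mdf',
   size = 100MB)
log on
  (name = PRODLOG,
   filename = 'e:\mssql7\data\prodlog.ldf',
   size = 50MB)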

The results of this command are similar to what you'd get from the Create Database wizard or the Create Database dialog box. The command creates a Primary filegroup containing one file, PRODSYS, and a separate transaction log file, PRODLOG, on another drive. Figure 4 shows this configuration. In this layout, the Primary filegroup consists of the Primary file PRODSYS and holds both the system tables and the user-defined tables.

For backup log with NO_TRUNCATE to work successfully, you need to place the data files in a different filegroup from the system tables, then protect the system tables as you would the transaction log. Only then can you get up-to-the-minute recovery with no data loss.

To separate the system tables from the user tables and indexes, you can modify the CREATE DATABASE command, as Listing 2 shows, to add a second filegroup, called datagroup, and place one file in this group. Listing 2 shows the proposed changes to your initial CREATE DATABASE command. These changes will let backup log with NO_TRUNCATE work correctly. Figure 5 shows the configuration.
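
Listing 2 isn't reproduced here either, but as a sketch, the modified command might resemble the following, with PRODSYS placed on the mirrored log drive and the new file PRODATA in datagroup on the data drive (the drive letters match the attach example later in the article; the file sizes are assumptions):

-- Primary holds only the system tables; user data goes to datagroup
Create database prod
on primary
  (name = PRODSYS,
   filename = 'e:\mssql7\data\prodsys.mdf',
   size = 20MB),
filegroup datagroup
  (name = PRODATA,
   filename = 'd:\mssql7\data\prodata.ndf',
   size = 100MB)
log on
  (name = PRODLOG,
   filename = 'e:\mssql7\data\prodlog.ldf',
   size = 50MB)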

Then, to ensure that all new tables and indexes will go to datagroup by default instead of to the Primary group, mark datagroup as the default filegroup.

Remember that the default filegroup is Primary unless you change it. You can use the following statement to change the default:

Alter database prod modify filegroup datagroup default

Now, even if the database file PRODATA fails, you still have the transaction log and the system tables. The backup log with NO_TRUNCATE command will now work.

In a minimum configuration, you can put the Primary data file and the transaction logs on the same drive with the master database and put the data on a second drive.

In any case, you need at least three hard drives to get complete recovery. Figure 5 shows a proposed minimum configuration for a production database.

When you create a database in this manner (changing the default filegroup to datagroup), you probably won't have to change any CREATE TABLE or CREATE INDEX scripts. Check your scripts for the ON [PRIMARY] clause:

Create table mytable
(id integer not null,
name char(10) not null)
ON [PRIMARY]

The ON syntax causes SQL Server to create the table (or index) on the filegroup named in the ON clause. In the example above, SQL Server will create the table in the Primary filegroup. But for the purposes of this article, you need to avoid this configuration. Commenting out (or deleting) the ON clause lets SQL Server create the table or index in the default filegroup, which, in this case, is the datagroup filegroup. The correct syntax is

Create table mytable
(id integer not null,
name char(10) not null)

Protecting Existing Databases

If you already have a production database with data, you can protect the data without having to perform a backup and restore or do a logical rebuild. You need to create the datagroup, as in the previous example, and move all user tables and indexes to the datagroup. The goal in this example is to move the system tables from drive D to the mirrored drive E, while leaving the data on D. So, add another filegroup, called datagroup, to the D drive, and mark it as the default filegroup. Then, move the user tables and indexes from the Primary filegroup to the datagroup, leaving only the system tables on the Primary filegroup. Last, move the Primary filegroup to the mirrored log drive E. Listing 3 shows the syntax to create the datagroup with one data file and mark it as the default filegroup. Be sure to back up your databases before you begin. Also, make sure you have enough disk space on the appropriate drives.
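
Listing 3 isn't reproduced here; a sketch of the statements it would need, using the prodata.ndf path from the attach example later in this article (the file size is an assumption), looks something like this:

-- Add a new filegroup for the user data
Alter database prod add filegroup datagroup

-- Add one data file on the D drive to the new filegroup
Alter database prod add file
  (name = PRODATA,
   filename = 'd:\mssql7\data\prodata.ndf',
   size = 100MB)
to filegroup datagroup

-- Make datagroup the default so new tables and indexes land there
Alter database prod modify filegroup datagroup default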

No backup-and-restore routine can move the tables and indexes away from the system tables. To move them, follow this procedure (a sketch of steps 2 and 3 appears after the list):

  1. Drop all foreign-key constraints.
  2. Move the tables with clustered indexes. To do so, you need to drop and recreate every index on tables that have a clustered index. Always drop non-clustered indexes first, then clustered indexes. Add the clustered index for a table first, then the non-clustered indexes. When you recreate these indexes, they will move to the default filegroup (datagroup). Because the clustered index includes the data, the table also moves.
    • Drop the non-clustered indexes first, regardless of whether you used Create index or Alter table add constraint to create them.
    • Drop the clustered index.
    • Recreate the clustered index.
    • Recreate the non-clustered indexes.
    • Add the foreign-key constraints.
  3. Move the heap tables (a heap table is one that doesn't have a clustered index).
    • Drop all the non-clustered indexes, regardless of whether you used Create index or Alter table add constraint to create them.
    • Add a clustered index on every heap table, which moves the heap table to the new location.
    • Drop the clustered index, which leaves the table in datagroup.
    • Recreate all the non-clustered indexes.
    • Add the foreign-key constraints.
  4. Use the attach/detach method to move the data files to their final location. Make sure no one's using the database, then use the following syntax in Query Analyzer:
    exec sp_detach_db prod
    
    
  5. Copy the Primary file to its final destination, the E drive. Use this syntax at the command line:
    Copy d:\mssql7\data\prodsys.mdf e:\mssql7\data\prodsys.mdf
    
    
    Then, in Query Analyzer, reattach the database:
    Exec sp_attach_db prod,
    'e:\mssql7\data\prodsys.mdf',
    'd:\mssql7\data\prodata.ndf',
    'e:\mssql7\data\prodlog.ldf'
    

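As a sketch of steps 2 and 3, suppose the database has a table named orders with a clustered primary key, a nonclustered index, and a foreign key to a customers table, plus a heap table named audit_log with one nonclustered index; all of these table, index, and column names are invented for illustration:

-- Steps 1 and 2: drop the foreign key, then nonclustered, then clustered objects
Alter table orders drop constraint fk_orders_customers
Drop index orders.idx_orders_custid
Alter table orders drop constraint pk_orders

-- Recreating the clustered primary key in the default filegroup moves the table
Alter table orders add constraint pk_orders primary key clustered (order_id)
Create index idx_orders_custid on orders (customer_id)
Alter table orders add constraint fk_orders_customers
  foreign key (customer_id) references customers (customer_id)

-- Step 3: a heap table moves when you add (then drop) a temporary clustered index
Drop index audit_log.idx_audit_date
Create clustered index tmp_move on audit_log (audit_id)
Drop index audit_log.tmp_move
Create index idx_audit_date on audit_log (audit_date)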
When you create the indexes, you can use the ON DATAGROUP syntax to explicitly place the objects on datagroup, but this syntax isn't necessary because datagroup is the default filegroup. Don't use the ON PRIMARY syntax when you recreate your indexes, because that syntax will put the indexes back on the Primary filegroup.

In the steps I outlined, you must drop and recreate all indexes. Remember that primary-key and unique constraints also create indexes. To drop and recreate the indexes added by primary key/unique constraints, you use the Alter table drop constraint/Alter table add constraint command.

What Doesn't Work

In testing, I first used DBCC DBREINDEX to drop and rebuild the indexes. This method didn't move the indexes or tables, even after I made datagroup the default filegroup. I also tried the SQL Server Enterprise Manager Generate SQL Scripts option, but this method doesn't drop foreign-key, primary-key, or unique constraints individually because its scripts drop the entire table. The Data Transformation Services (DTS) Object Transfer option also generates scripts, but it doesn't generate drops for foreign-key, primary-key, or unique constraints, either.

Last Notes

Microsoft's article on this subject (Q218739) recommends that you "ensure redundancy for any drive used to store the primary data file." Until this problem is resolved in a future release of SQL Server, you need to put data files in a different filegroup from the system tables. Then, protect the system tables just as you would the transaction log. Only then can you ensure up-to-the-minute database recovery with no data loss.

The method I propose in this article is simple to execute for new databases, but it involves extensive planning and testing for existing databases. Perhaps you can afford to lose some data. If not, get to work!
