
Keep SQL Server Up and Running

Availability enhancements in SQL Server 2000 can keep your business going 24 x 7

Many SQL Server 2000 marketing documents refer to this new release's enhanced "SAR-abilities," where SAR stands for scalability, availability, and reliability. Constant (or near constant) availability is essential for anyone running a round-the-clock business or a server for worldwide use. Availability features not only keep SQL Server up and running during hardware failures but also make it fully available in a 24 x 7 shop, in which you don't have a 6- to 8-hour window of low activity for performing maintenance or backup tasks. Some of SQL Server 2000's availability enhancements, such as failover clustering, deserve a whole article (or more) to themselves. Others are internal changes, such as improvements in the Database Consistency Checker (DBCC) maintenance routines that let them run faster and with less impact on your applications. Two specific areas in which Microsoft has improved SQL Server's availability support are database maintenance and database backup.


Database maintenance includes running consistency checking operations and removing fragmentation. SQL Server 2000 provides a new utility to let you remove one type of table fragmentation while keeping the table available for querying. Database backup has always been dynamic, meaning that the database is still available during the backup process, but you always have some degree of performance degradation during the backup operation. SQL Server 2000 increases the functionality available with the BACKUP command to allow some types of backup to run with much less noticeable impact on the system. Therefore, you can run your backups any time of the day or night, no matter how much user activity is also taking place.

Database Maintenance

Historically, the primary database-maintenance activity has been rebuilding indexes. You might need to rebuild indexes to remove fragmentation or to reestablish a fillfactor to leave space on your pages so that you can insert new rows. As with many database maintenance tasks, rebuilding indexes can severely degrade availability because your indexes (and thus your tables) are unavailable during the process. Let's look at the subject of removing fragmentation in more detail. In maintaining your databases, you'll probably need to deal with two types of index fragmentation: internal and external.

Internal fragmentation occurs when your index pages have space available—that is, when the index is taking up more space than it needs to. In such cases, when SQL Server has to scan the entire table, it has to perform more read operations than if the pages had no free space. However, internal fragmentation is sometimes desirable; to make sure an index's pages have room to expand, you can request internal fragmentation by specifying a low fillfactor value when you create or rebuild an index. Having room on a page means that you have space to insert more rows without having to split a page. The leaf level of any index keeps the index rows stored in order of the index key. For a clustered index, the leaf level is the data, so the logical order of the rows and pages is the index key's order. Logical order is the order in which you'd access the pages (and rows on those pages) if you followed the pointers from page to page. If you need to insert a new row on a full page, SQL Server allocates a new page and splits the full page so that half the rows are on the original page and half are on the new page. Then, it adjusts the page pointers to link the new page into the chain. Splitting is a relatively expensive operation, so avoid it if possible. Also, splitting leads to external fragmentation because, most often, the new page isn't contiguous to the page you're splitting.
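
For illustration, here's a minimal sketch that requests internal fragmentation up front, using the pubs sample database that ships with SQL Server (the index name idx_au_lname is just a placeholder):

-- Build a nonclustered index on the authors table, leaving roughly
-- 30 percent of each leaf page free so that SQL Server can insert
-- new rows without splitting pages.
CREATE NONCLUSTERED INDEX idx_au_lname
ON authors (au_lname)
WITH FILLFACTOR = 70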

External fragmentation occurs when the logical order of pages doesn't match the physical order or when pages that are logically sequential aren't located on the same extent. (An extent is eight contiguous pages in a database file.) For example, suppose the leaf level of an index contains pages 100, 101, and 102. If page 101 becomes full and you need to insert a new row onto that page, you need to find a new page. If the closest available page is page 200, that page will be linked into the page chain, so the logical order is now 100, 101, 200, 102. The table that contains these pages now has external fragmentation because the logical order of the pages isn't the same as their physical order.

External fragmentation is really bad only when SQL Server is doing an ordered scan of all or part of a table or index. (SQL Server performs an ordered scan when it needs to retrieve all of a table's rows in order, as when the query contains an ORDER BY clause.) To retrieve all the pages of a fragmented table in logical order, SQL Server has to jump around on the physical disk instead of doing only contiguous-read operations. If you're searching for individual rows through an index, those rows' physical location is unimportant because SQL Server can easily find them. If SQL Server is doing an unordered scan, it can use the Index Allocation Map (IAM) pages to determine which extents it needs to fetch. IAM pages are bitmaps that identify which extents in a file belong to a certain table or index; adjacent bits represent contiguous extents, and a bit value of 1 means that the particular extent belongs to the table or index the IAM refers to. Because the IAM pages list the extents in disk order, the fetching can be extremely efficient. (For more information about extents and IAMs, see Inside SQL Server, "The New Space Management," April 1999.) You must follow the page chain only if the pages need to be fetched in logical order according to their index key values. If the pages are heavily fragmented, following the page chain during an ordered scan is more expensive than if no fragmentation existed.
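
Before you decide whether fragmentation is worth removing, you can measure it with DBCC SHOWCONTIG. Here's a minimal sketch against the pubs sample database; in the output, the Logical Scan Fragmentation and Scan Density values show how far the logical page order has drifted from the physical order:

USE pubs
-- Report fragmentation statistics for the authors table.
DBCC SHOWCONTIG ('authors')
-- Or examine a single index, such as the sample's aunmind index.
DBCC SHOWCONTIG ('authors', 'aunmind')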

Removing Fragmentation

You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you. To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you can't use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the DROP_EXISTING clause along with CREATE INDEX. The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you're rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users can't make modifications, but other processes can SELECT from the table. Of course, those SELECT queries can't take advantage of the index you're rebuilding, so they might not perform as well as they would otherwise. If you're rebuilding a clustered index, SQL Server takes an exclusive lock and doesn't allow access to the table, so your data is temporarily unavailable.
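
Here's a sketch of the rebuild options, again using the pubs sample database (aunmind is the sample's nonclustered index on the authors table):

-- Rebuild all the indexes on the authors table in one operation,
-- applying a fillfactor of 90 to each index.
DBCC DBREINDEX ('pubs..authors', '', 90)

-- Or rebuild a single index in one step by re-creating it with
-- the DROP_EXISTING clause.
CREATE NONCLUSTERED INDEX aunmind
ON authors (au_lname, au_fname)
WITH DROP_EXISTING, FILLFACTOR = 90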

SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders an index's leaf-level pages so that their physical order matches their logical order, using only the pages that are already allocated to the leaf level. The command does an in-place ordering, similar to the sorting technique called bubble sort. In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster. DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages won't always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server can't obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and doesn't return to it. Any empty pages created as a result of compaction are removed.
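
DBCC INDEXDEFRAG takes a database, a table, and an index, in that order. Here's a minimal sketch, again against the pubs sample database:

-- Defragment the leaf level of the aunmind index on the authors
-- table; the table stays available to other users while the
-- command runs.
DBCC INDEXDEFRAG (pubs, authors, aunmind)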

The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running.

Let's look at an example in which an index's leaf level consists of the following pages in the following logical order:

47 22 83 32 12 90 64

The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this:

12 22 83 32 47 90 64

The next physical page is 22, which is also the next logical page, so no work would be necessary. DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83:

12 22 32 83 47 90 64

After the next swap of 47 with 83, the leaf level would look like this:

12 22 32 47 83 90 64

Then, the defragmentation process would swap 64 with 83:

12 22 32 47 64 90 83

and 83 with 90:

12 22 32 47 64 83 90

At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index aren't contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you'll get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process.

If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation isn't enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel—that is, using multiple processors—which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation.
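
You don't request a parallel index build explicitly; SQL Server decides how many processors to use based on the server's max degree of parallelism setting and the current workload. As a hedged sketch, you could cap parallel operations, index builds included, at four processors like this:

-- max degree of parallelism is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Limit parallel operations to four processors; note that this
-- setting affects parallel queries as well as index builds.
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE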

Database Backup

Database backup is another area that can affect the availability of a 24 x 7 operation. From the beginning, Microsoft widely publicized that SQL Server's backup operations were dynamic and could take place while the database was in use. However, this ability doesn't mean that the backup operation doesn't affect users. In many cases, system throughput and response time can drop dramatically, and data that an application needs can be inaccessible until SQL Server has backed up the necessary pages.

SQL Server 7.0 made many improvements in database backup. First, the backup algorithms back up the database in its current state, no matter which pages are in use. Second, SQL Server 7.0 can efficiently locate all the pages a database's objects use: it uses bitmap pages called Global Allocation Maps (GAMs) and Secondary GAMs (SGAMs) to keep track of all pages allocated to database objects. By accessing these allocation pages, SQL Server can quickly determine which extents hold data and back up only those pages. Also, SQL Server 7.0 added a type of backup called differential backup, which lets you back up only the pages that have changed since the last full database backup. This ability greatly speeds up the backup process because less data needs to be written from the database to the backup media. It also speeds up the process of restoring a database because you need to restore only the changed pages. However, the differential backup process takes extra time to determine which extents have changed: SQL Server 7.0 has to examine every page in the database to find the pages to back up.

SQL Server 2000 improved upon the differential backup mechanism by introducing a new type of bitmap page called a Differential Changed Map (DCM) page, on which each bit represents one extent in a file. Backing up a database clears all bits, and any time a page is modified, the DCM bit for the extent containing that page is set. When you perform a differential backup, SQL Server needs to look only at the DCM page to determine which extents need to be backed up; it doesn't need to look at every page in the database. Your backups take less time and have less performance impact on the system, which increases the availability of your SQL Server database.
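
Here's a minimal sketch of the pattern, using the pubs sample database and hypothetical backup file paths: take a full database backup first, then take differential backups that capture only the extents whose DCM bits are set.

-- Full backup: copies all allocated extents and clears the DCM bits.
BACKUP DATABASE pubs
TO DISK = 'C:\Backup\pubs_full.bak'

-- Later: copy only the extents that have changed since the full backup.
BACKUP DATABASE pubs
TO DISK = 'C:\Backup\pubs_diff.bak'
WITH DIFFERENTIAL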

Other improvements in SQL Server 2000's backup methodologies allow greater availability because they have less impact on the system. In SQL Server 7.0, differential backups are available only for the whole database. But SQL Server 7.0 also lets you back up individual files or filegroups, which is like making a full backup in that the backup process copies every extent to the backup device. The difference between a full database backup and a file or filegroup backup is that the extents in only the file or filegroup—not the entire database—are backed up. In SQL Server 2000, you can combine these two techniques and perform differential backups on individual files or filegroups. You must start with a full backup of the file or filegroup, which will set all the DCM bits to 0 for that file or filegroup. You can then make differential backups of just the changed extents. Be careful, though—you can't efficiently combine differential backups on files or filegroups with differential backups on the whole database. If you back up an individual file or filegroup in preparation for making differential backups, all the DCM bits will clear. If you then try to make a database differential backup, SQL Server won't be able to tell which extents have changed, so it will revert to examining every page to determine which ones have changed. You'll lose most of the benefits of differential backup. If you want to include differential backups in your backup strategy, I recommend that you decide whether to work with the whole database or with individual files or filegroups.
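
Here's a sketch of the file-level variant, with a hypothetical database named BigDB, a hypothetical filegroup named SALES, and hypothetical file paths:

-- Full backup of one filegroup; this sets the DCM bits to 0 for
-- the extents in that filegroup.
BACKUP DATABASE BigDB
FILEGROUP = 'SALES'
TO DISK = 'C:\Backup\bigdb_sales_full.bak'

-- Differential backup of only the extents in that filegroup that
-- have changed since the filegroup backup.
BACKUP DATABASE BigDB
FILEGROUP = 'SALES'
TO DISK = 'C:\Backup\bigdb_sales_diff.bak'
WITH DIFFERENTIAL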

SQL Server 2000 also introduces support for snapshot backup-and-restore technologies in conjunction with independent hardware vendors (IHVs) such as Compaq and independent software vendors (ISVs) such as EMC. Snapshot backups minimize or eliminate the use of server resources for backup; this is especially beneficial for midsize to very large databases (VLDBs) in which availability is essential. For snapshot backup, you typically use a mirrored set of disks during everyday activity, then split the mirrored set, preserving the original as the backup. SQL Server 2000 Books Online (BOL) lists several benefits of this technology. You can typically create a backup in seconds, with little or no impact on the server. You can restore from a disk backup just as quickly. Also, another server can accomplish the backup to tape, with no impact on the production system. And you can instantly create a copy of a production database for reporting or testing.

No Time Off

For performing database maintenance and backup tasks, many database administrators still think in terms of needing a window during which users have little or no access to the database. Although DBAs want fast backups to minimize system disruptions, companies increasingly demand applications that perform well 24 hours a day, 7 days a week; any window when the database is unavailable is unreasonable. Instead of needing maintenance and backup activities to be fast, you need them to be unobtrusive. In a 24 x 7 shop, you need to be able to perform your administrative tasks while other work goes on. SQL Server 2000's online index defragmentation and many backup improvements (as well as other enhancements that I'll write about in upcoming issues) make SQL Server 2000 a truly available database system.
