Filegroup Usage for VLDBs

Deciding what type of user data should go into your filegroups can take some planning. To start, I recommend that the primary filegroup contain only one file (an .mdf) that contains only the system tables (sysobjects, sysindexes, sysusers, and so on). For all user-defined objects, I recommend user-defined filegroups. Also, I recommend that you have only one transaction log file (an .ldf). Only one transaction log file is necessary because frequent log backups minimize the space the changes require. If you back up your log every minute, the log needs to hold only 1 minute's worth of log entries.
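
Here's a minimal sketch of that layout, assuming hypothetical database, file, and path names (Sales, SalesSystem, SalesData, and the E:, F:, and G: drives are all placeholders): one .mdf in the primary filegroup, one user-defined filegroup for user objects, and a single transaction log file. Making the user-defined filegroup the default keeps new user objects out of the primary filegroup.

-- Hypothetical names, paths, and sizes; adjust for your environment.
CREATE DATABASE Sales
ON PRIMARY
   (NAME = SalesSystem,
    FILENAME = 'E:\SQLData\SalesSystem.mdf',
    SIZE = 100MB),
FILEGROUP SalesData
   (NAME = SalesData1,
    FILENAME = 'F:\SQLData\SalesData1.ndf',
    SIZE = 10GB)
LOG ON
   (NAME = SalesLog,
    FILENAME = 'G:\SQLLog\SalesLog.ldf',
    SIZE = 2GB)
GO

-- Route new user objects to the user-defined filegroup by default.
ALTER DATABASE Sales MODIFY FILEGROUP SalesData DEFAULT
GO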

However, long-running transactions might require a larger log. Make sure to test the sizing of your transaction log against all types of activity that will occur in your database. Although you might need a large transaction log, you get no performance benefits from having multiple logs except in rare cases. Creating two transaction logs won't cause the log to be striped (as it would in disk striping or in RAID 0 stripe sets) across the files. Log data fills the first file, then moves to the second file. The following three hints can help give you the best performance for your transaction log:

  1. Place the transaction log on a drive that's isolated from all other activity.
  2. Make sure the log's initial size is large enough to handle the bulk of the database activity that occurs between backups without autogrowth.
  3. If autogrowth is still necessary, make sure the autogrowth rate is set reasonably; the example after this list shows one way to pre-size the log and set a fixed growth increment. If your database goes through many small autogrowths, the transaction log becomes fragmented into many small virtual log files (VLFs). VLFs aren't visible (they're part of the transaction log and are for SQL Server's use only), but a log that contains many of them slows some log-related operations, such as backup. For more information about transaction logs, see the SQL Server Books Online (BOL) topic "Checkpoints and the Active Portion of the Log."
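
The following sketch shows one way to apply hints 2 and 3; the database and logical file names (Sales, SalesLog) are placeholders, and DBCC LOGINFO is an undocumented command that simply reports one row per VLF.

-- Pre-size the log and use a fixed growth increment so that growth
-- events (and therefore VLFs) stay few and large.
ALTER DATABASE Sales
MODIFY FILE
   (NAME = SalesLog,
    SIZE = 4GB,           -- enough for the activity between log backups
    FILEGROWTH = 512MB)   -- fixed increment; avoid small percentage growth
GO

-- One row per VLF; a very high row count suggests the log grew
-- through many small autogrowths.
DBCC LOGINFO('Sales')
GO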

After you've determined the primary and log files, you need to place user-defined data in non-primary files (.ndf files) and create user-defined filegroups. The number of files and filegroups should be based on the size of your database and the type of data you have. I don't recommend splitting tables and indexes into separate filegroups. Instead, partition some of your larger tables (for example, a sales table can be partitioned into 12 monthly sales tables per year), and place tables that you need to back up and restore independently of the rest of the database in their own filegroups.
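
As a sketch of that approach, the following T-SQL adds a filegroup and an .ndf file for one month of a partitioned sales table and creates that month's table on it; all database, filegroup, file, path, and column names here are hypothetical.

-- Add a filegroup and a file for January's slice of the sales data.
ALTER DATABASE Sales ADD FILEGROUP SalesJan2003FG
GO
ALTER DATABASE Sales
ADD FILE
   (NAME = SalesJan2003_1,
    FILENAME = 'H:\SQLData\SalesJan2003_1.ndf',
    SIZE = 5GB)
TO FILEGROUP SalesJan2003FG
GO

-- The monthly table lives on its own filegroup; the CHECK constraint
-- limits the table to January 2003 rows.
CREATE TABLE SalesJan2003
   (SaleID   int      NOT NULL,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL,
    CONSTRAINT PK_SalesJan2003 PRIMARY KEY (SaleID),
    CONSTRAINT CK_SalesJan2003_Date
      CHECK (SaleDate >= '20030101' AND SaleDate < '20030201'))
ON SalesJan2003FG
GO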

Your main guides for your file and filegroup configuration will be the size of the database and the type and limitations of hardware you're using. For example, each logical drive has a maximum size, based on the type of RAID configuration you choose and the maximum number of drives that can be logically grouped together. You'll need to use added caution when your database is on a SAN. If you have a very large database (VLDB) and you're setting up on a SAN, create the logical drives manually to make sure that no drive partitions end up competing for drive access. Why would drive partitions compete for access? On most SANs, the automatic configuration for creating logical drives builds them by taking "slices" of different physical drives and combining the slices into logical drives that span multiple physical drives. Although slicing disks can greatly improve performance by spreading even a small logical drive over more physical drives than its size requires, it can cause problems when multiple slices of the same physical drive are very active. For example, if part of a physical drive went to the log and another part went to an active portion of the data or to tempdb, drive contention could cause performance problems. So, spend time learning how your hardware configures your logical disks.

After creating the logical disks, you'll need to add the .ndf files (i.e., the data files that will be used for user-defined data). For each logical drive, you should create only one .ndf file. You can create multiple files on one logical drive, but doing so often leads to performance problems, not performance gains. You'll then group as many files as necessary to create a filegroup large enough to hold the data destined for that filegroup. Here are a few recommendations for filegroup assignments (a T-SQL sketch follows the list):

  1. One filegroup to hold a large table that might need backup and restore granularity at the table level
  2. One filegroup that holds read-only tables
  3. One filegroup for read/write tables
  4. One filegroup for text or image data (text, ntext, and image data types)
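
A hedged sketch of how you might wire up those assignments follows; the filegroup names (SalesData, SalesText, SalesLookup, SalesJan2003FG), object names, and paths are placeholders, and the filegroups are assumed to already exist in the Sales database used in the earlier examples.

-- Put a table on a read/write filegroup and its text data on a
-- separate filegroup.
CREATE TABLE ProductDocs
   (ProductID int  NOT NULL,
    DocText   text NULL)
ON SalesData             -- read/write user data
TEXTIMAGE_ON SalesText   -- text, ntext, and image pages go here
GO

-- Mark a filegroup that holds only static reference tables read-only.
ALTER DATABASE Sales MODIFY FILEGROUP SalesLookup READONLY
GO

-- Filegroup backup gives table-level backup granularity when a large
-- table is the only object in its filegroup.
BACKUP DATABASE Sales
   FILEGROUP = 'SalesJan2003FG'
   TO DISK = 'I:\SQLBackup\SalesJan2003FG.bak'
GO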