
Fighting OS-Level Fragmentation

I've read about how fragmentation at the SQL Server level can hurt my queries' performance. I understand how to look for fragmentation within my SQL Server tables by using the DBCC SHOWCONTIG command. However, should I also worry about fragmentation at the OS level?

Fragmentation exists at both the SQL Server level and the file level within the OS. It sounds like you already use DBCC SHOWCONTIG to detect SQL Server-level fragmentation (for a DBCC SHOWCONTIG primer, see SQL Server Books Online, or BOL). So let's look at how and when OS-level defragmentation can speed up your SQL Server.
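
For reference, here's a minimal example of checking a table's fragmentation with DBCC SHOWCONTIG; the table name Orders is just a placeholder for one of your own tables:

  -- Report fragmentation statistics for the Orders table
  DBCC SHOWCONTIG ('Orders')

In the output, Scan Density and Logical Scan Fragmentation tell you how badly the table's pages and extents are fragmented inside the database files.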

Remember that SQL Server can report 0 percent fragmentation even when the on-disk files are horribly fragmented. SQL Server doesn't know, and doesn't need to know, how the OS physically lays out the bits on disk; managing the physical layout is the OS's job. But because SQL Server can't see that layout, it has no direct way to report on file fragmentation. Imagine you're performing a table scan: SQL Server reports 0 percent fragmentation, yet the file that contains the table is scattered all over your disk. In that case, an OS-level defragmentation could improve performance by making the file more contiguous on disk.

However, defragmenting at the OS level doesn't always have the effect you might expect. SQL Server preallocates space when you create a file. For example, if you create a 100MB file on a disk that's been defragmented recently, SQL Server creates the file in contiguous space. SQL Server can then read and write to that file indefinitely, and the file won't fragment at the OS level (unless you stop SQL Server and perform a file-copy operation that puts the file into noncontiguous space). But if you create that 100MB file on a heavily fragmented disk, the file will be fragmented from the start.
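
As a rough sketch of that preallocation, assuming hypothetical database, file, and path names, you could create the data file at its full 100MB size so that SQL Server claims the space in one allocation:

  -- Create the database with a preallocated 100MB data file
  CREATE DATABASE Sales
  ON PRIMARY
    ( NAME = Sales_dat,
      FILENAME = 'D:\SQLData\Sales_dat.mdf',
      SIZE = 100MB )
  LOG ON
    ( NAME = Sales_log,
      FILENAME = 'E:\SQLLogs\Sales_log.ldf',
      SIZE = 25MB )

If the disk is already defragmented when you run this, the 100MB allocation should land in contiguous space.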

What happens if SQL Server creates the 100MB file in contiguous space but auto-grow operations later increase the file to 200MB? In this case, the new space added to the file might be fragmented. An OS-level defragmentation will improve performance if the files become fragmented and your SQL Server workload includes table scans that read ranges of data, so I suggest scheduling a regular OS-level defragmentation. The only downside is that you'll need to stop SQL Server during the defragmentation operation because defraggers can't work on open files. Stopping SQL Server might not be a problem for small shops, but larger enterprise shops trying to maintain 4 or 5 nines (99.99 percent to 99.999 percent) of availability will be hard-pressed to find enough time for regular OS-level defragmentation. The best practice for highly available SQL Servers is to create your files on contiguous space that's already been defragmented, planning ahead for data growth and building that growth into the size of the initial files.
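
As a sketch of that best practice, again with hypothetical names: if you do outgrow the initial allocation, grow the file in one deliberate, planned step (ideally on a freshly defragmented disk) rather than relying on many small auto-grow increments:

  -- Grow the Sales data file to 200MB in a single planned operation
  ALTER DATABASE Sales
  MODIFY FILE ( NAME = Sales_dat, SIZE = 200MB )

One large growth operation gives the OS a better chance of allocating the new space contiguously than dozens of small auto-grow extensions scattered over time.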
