
Is Tempdb Affecting Your Day-to-Day SQL Server Performance?

Some simple investigation is all it takes

The tempdb database is essential to normal SQL Server operation and is used extensively by both internal system-level SQL Server processes and by user-generated requests. In fact, tempdb is often the most active database in a SQL Server instance in terms of sheer numbers of transactions per second. I'm not just talking about those times when you create a temporary table using the # or ## prefix. Many other activities utilize tempdb, too—for example, sorting or hashing operations and intermediate results from joins.

These types of operations differ from the activity in a regular user database in several key ways. First, objects such as temporary tables or the internal work tables that support the aforementioned operations are typically created at a much higher rate than objects in user databases. Second, populating these temporary or work tables often requires allocating an enormous number of pages in a very short period of time. Together, these two activities can cause performance problems if not properly accounted for, and every DBA should be aware of them.

Don’t Contend With Me

One of the issues that arises from heavy tempdb use is contention on a handful of pages in the data file that track page allocations and usage. The two most common are the Page Free Space (PFS) and Shared Global Allocation Map (SGAM) pages, which reside in the first and third pages after the header page of every data file; the PFS page is repeated roughly every 64MB and the SGAM page roughly every 4GB as the file grows. There's a lot of good information available on this subject, so I won't dwell too much on the details, but in a nutshell, when many users attempt to create new tables or indexes at the same time, a bottleneck can occur on these pages because each new object creation requires an update to them.

By default, tempdb has only one data file—and hence a single set of these pages for a given size of the file. If this is a bottleneck in your system, you can usually spot it fairly easily by looking at the wait stats and, in particular, the PAGELATCH_UP wait type. The higher the wait time, the more contention you have.
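A quick way to get a feel for it is to query the sys.dm_os_wait_stats dynamic management view for page latch waits. This is only a sketch, and what counts as "high" depends entirely on your workload, so compare the numbers against your other top waits:

-- List page latch waits, worst offenders first. High wait_time_ms on
-- PAGELATCH_UP (and PAGELATCH_EX) is the usual sign of allocation-page contention.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;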

You can confirm that the contention is in tempdb by looking at the waitresource column in master..sysprocesses for resources of either 2:1:1 (PFS page) or 2:1:3 (SGAM page). The first number is the database ID (tempdb is always 2), the second is the data file ID, and the third is the page number in the file. See the "Managing Extent Allocations and Free Space" section of SQL Server 2008 Books Online for more details (msdn.microsoft.com/en-us/library/ms175195.aspx).
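For example, a query along these lines (a sketch, not a finished monitoring script) shows any sessions currently waiting on tempdb pages:

-- Sessions waiting on tempdb resources; a waitresource of 2:1:1 is the PFS page
-- and 2:1:3 is the SGAM page in tempdb's first data file.
SELECT spid,
       lastwaittype,
       waitresource
FROM master..sysprocesses
WHERE waitresource LIKE '2:%';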

The most common way to reduce or alleviate this type of contention is to add more data files of equal size to the database, and thus spread the load across each of the files. Just don’t add too many files at once! Add a few, then check the waits again to see whether you need more. Too many files can cause other problems, so play it smart. The two white papers listed at the end of this article can help you determine how many files to use. 
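The syntax itself is simple. Here's a sketch that adds two more data files; the file names, paths, and sizes are placeholders to replace with values appropriate for your server, and the sizes should match your existing tempdb data file:

-- Add equally sized data files to tempdb (names, paths, and sizes are examples only).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'D:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'D:\tempdb\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);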

Cache Me Out

The second problem I want to address involves the way new temporary table creation can be affected by the caching techniques SQL Server 2005 introduced to help tempdb performance, along with the cleanup of those temporary tables. Itzik Ben-Gan's "Caching of Temporary Objects" goes into great detail about how the caching mechanism works and what can prevent cache reuse, so I won't repeat the details here. The bottom line is that SQL Server can cache some of the metadata and page allocations from temporary objects for easier and faster reuse with less contention, and when used properly, this caching can make a significant performance difference on high-volume systems.

But there's another aspect not covered in Itzik's article: the explicit truncating and dropping of temporary tables in user code. Although it doesn't appear to be documented as such, I've confirmed that explicitly dropping a temporary table essentially defeats the caching mechanism. SQL Server implicitly cleans up temporary objects when they go out of scope, such as when the stored procedure finishes executing, so there's little need to do it yourself anyway. Truncating the table when you're done with it can also hurt performance and should be avoided.

To give you an example of the kind of impact this might have, I came across a situation in which someone was explicitly truncating and dropping the temporary table at the end of a stored procedure. I asked this person to try removing the TRUNCATE and DROP commands, and the execution times decreased by over 50ms per execution. Because the user was executing these stored procedures hundreds of times per second, the removal resulted in a noticeable improvement in performance. There might be times when an explicit drop or truncate is necessary, but they should be the exception rather than the rule.
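To make the pattern concrete, here's a simplified sketch (the procedure and table names are made up for illustration, not the actual code from that system) of a procedure that simply lets its temporary table go out of scope:

-- Hypothetical procedure; dbo.Orders is an assumed table.
CREATE PROCEDURE dbo.usp_GetRecentOrders
AS
BEGIN
    CREATE TABLE #RecentOrders (OrderID INT, OrderDate DATETIME);

    INSERT INTO #RecentOrders (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());

    SELECT OrderID, OrderDate
    FROM #RecentOrders;

    -- No TRUNCATE TABLE or DROP TABLE here: SQL Server cleans up #RecentOrders
    -- when the procedure goes out of scope, and leaving it alone keeps the cached
    -- temporary object eligible for reuse on the next execution.
END;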

Spend a Few Minutes

Checking for these common performance problems with tempdb is easy and can lead to better overall performance in your SQL Server instance. So, spend a few minutes optimizing tempdb by using this article's tips, along with other terrific information in two white papers about tempdb: "Optimizing tempdb Performance" (msdn.microsoft.com/en-us/library/ms175527.aspx) and "Working with tempdb in SQL Server 2005" (technet.microsoft.com/en-us/library/cc966545.aspx).
