
Tuning Your Database with SQL Trace

A NIFTY TOOL FOR ANALYZING DATABASE PERFORMANCE

If you believe Microsoft's claims, Microsoft SQL Server is the fastest relational database management system for Windows NT. But you can't assume that your applications will run fast just because SQL Server cranks out high numbers on the Transaction Processing Performance Council's TPC-C benchmark. Here's some common-sense advice for getting your SQL Server applications to run faster.

Many factors--hardware, the network, SQL Server configuration options, and application design--affect the performance of a database system. Most knowledgeable database professionals realize that their data-access application is key; you must tune how the application interacts with SQL Server to achieve the best performance. Fortunately, Microsoft simplifies database-level application profiling with SQL Trace. I'll explain how to run SQL Trace and give you pointers for analyzing its output. In future articles, I'll address common application problems. The fixes are surprisingly simple and can offer huge performance improvements.

What Is SQL Trace?
SQL Trace is SQL Server's built-in utility that monitors and records SQL Server 6.5 database activity. This utility can display server activity; create filters that focus on the actions of particular users, applications, or workstations; and filter at the SQL command level. SQL Trace operates by capturing SQL statements or remote procedure calls (RPCs) sent to any SQL Server 6.5 system. You can save trace activity to disk in an ISQL/w-compatible script that you can replay against a SQL Server, or you can set up an activity log file that captures the SQL command and a variety of performance information.

SQL Trace is the most powerful tool in my tuning bag of tricks. It lets me see the big picture of application-level interaction with SQL Server without knowing anything about the application or looking at the source code. Modern data-access mechanisms such as Open Database Connectivity (ODBC) and Object Linking and Embedding Database (OLE DB) and their high-level Distributed Component Object Model (DCOM) abstractions such as Remote Data Objects (RDO), ActiveX Data Objects (ADO), and Data Access Objects (DAO) are great programming tools. But these tools hide what's going on between the client and the server. SQL Trace can show programmers what their application is doing.

Figure 1 demonstrates how ODBC applications often do surprising things. This simple SQL Trace output shows all the Transact-SQL (T-SQL) commands that Microsoft Access issued when I opened an attached link to the pubs..authors table in Datasheet View. You might be thinking, "Hey! I thought we were doing only a SELECT * FROM authors. Where did all these statements come from?" Here's what happens.

First, Access runs a SELECT statement that returns all the values for the au_id primary key column. Then, it creates a stored procedure that accepts 10 parameters and runs a SELECT statement similar to Listing 1. Because the authors table has 24 rows, Access runs its new procedure three times to return all 24 rows. The first two calls ask for 10 rows each, and the last call asks for the final 4 rows.
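
To make this concrete, here's a hedged reconstruction of the kind of 10-parameter procedure Access creates (the shape Listing 1 shows). The procedure name, parameter sizes, and column list are illustrative--I'm sketching the pattern, not reproducing captured trace text:

CREATE PROCEDURE #AccessFetch
    @P1 varchar(11), @P2 varchar(11), @P3 varchar(11), @P4 varchar(11),
    @P5 varchar(11), @P6 varchar(11), @P7 varchar(11), @P8 varchar(11),
    @P9 varchar(11), @P10 varchar(11)
AS
-- Fetch up to 10 rows per call, keyed on the au_id values Access
-- collected with its initial SELECT of the primary key column.
SELECT au_id, au_lname, au_fname, phone, address, city, state, zip, contract
FROM authors
WHERE au_id = @P1 OR au_id = @P2 OR au_id = @P3 OR au_id = @P4
   OR au_id = @P5 OR au_id = @P6 OR au_id = @P7 OR au_id = @P8
   OR au_id = @P9 OR au_id = @P10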

This example demonstrates an important point: Applications that use data-access middleware such as ODBC rarely do exactly what you think they're doing. The final outcome the user sees might be the same as the result obtained without the middleware, but ODBC doesn't always get the data efficiently. To be fair, the inefficiency is not ODBC's fault. ODBC can be efficient and flexible, and it usually provides several ways to accomplish the same result so that developers can tune their systems. This flexibility is a blessing and a curse: It's great if you know what your options are but miserable if the application developer accepts defaults that aren't best for your needs. SQL Trace lets you see the choices ODBC made and whether those choices were appropriate for your environment.

Running SQL Trace
You can run SQL Trace as a GUI utility or through the xp_sqltrace interface. The GUI version--a visual wrapper around xp_sqltrace--is easy to use and helpful for ad hoc troubleshooting. You start SQL Trace from Programs, SQL Server, SQL Trace. Screens 1 and 2 show SQL Trace's GUI.

In Screen 1, I'm editing the filter properties of the trace Show Me The SQL!!!. I've selected options to display the trace onscreen, using a separate window for each connection. I've also selected the options to log the trace to an .sql file for replay and a .log file for future analysis. I'm capturing connections from all users, workstations, and applications, but I could filter on those attributes. The Filter dialog box on the Events tab that Screen 2 shows lets me filter at the statement level. I want to see only T-SQL statements that reference the TopSecretTable object.

The GUI version is handy, but I've found the extended stored procedure version more useful when I'm doing a full application profile. Xp_sqltrace offers more control over options, and the audit mode lets me run a long trace without being logged into a machine overnight. I can also stop and start a trace from a timed SQLExec task. Xp_sqltrace is well documented in Books Online (BOL), and I strongly encourage you to read the information. (However, BOL isn't perfect. The sidebar "Bugs in BOL's SQL Trace Table," page 135, describes two errors that you need to be aware of.)

One important xp_sqltrace option is @FullText, which controls how much of each T-SQL batch the trace captures. The default value of 0 grabs only the first 255 characters; my experience has shown that you want to set this option to 1, which captures the entire batch. Nothing is more frustrating than trying to tune or troubleshoot a query that's been chopped in half.
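
For example, a hedged invocation that starts an audit-mode trace with full-text capture might look like the following. The 'audit' function name and the @FullFilePath parameter follow my reading of the xp_sqltrace entry in BOL; verify the exact names against your copy:

EXEC master..xp_sqltrace 'audit',
    @FullText = 1,        -- capture whole batches, not just 255 characters
    @FullFilePath = 'c:\activity.log'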

Running xp_sqltrace is straightforward, but repeatedly typing all those option settings is tedious. I like to create stored procedure wrappers around xp_sqltrace and set the options to my favorite defaults. That way, I don't have to type as much to stop and start a trace. Listing 2 shows two stored procedures you can use to get started.
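
As a rough sketch of what such wrappers can look like (the procedure names are mine, and the xp_sqltrace parameters carry the same caveat as above):

CREATE PROCEDURE usp_StartTrace
    @LogFile varchar(255) = 'c:\activity.log'
AS
-- Start an audit-mode trace that captures full command text.
EXEC master..xp_sqltrace 'audit',
    @FullText = 1,
    @FullFilePath = @LogFile
GO

CREATE PROCEDURE usp_StopTrace
    @TraceId int
AS
-- Stop a running trace; find the id with xp_sqltrace 'status'.
EXEC master..xp_sqltrace 'stop',
    @TraceId = @TraceId
GO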

What effect does running SQL Trace have on security and performance? Loading the SQL Server client utilities installs SQL Trace; therefore, many people have access to the tool. But you can easily control who can use it by setting appropriate permissions on xp_sqltrace within the master database; by default, only sa can run xp_sqltrace.
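
Granting another user the right to run traces is a one-line change in master (perf_analyst here is a hypothetical user name):

USE master
GO
-- By default, only sa can execute xp_sqltrace.
GRANT EXECUTE ON xp_sqltrace TO perf_analyst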

Monitoring database performance can degrade the performance you're trying to monitor. The trick is keeping overhead low enough to avoid significantly skewing or misrepresenting the results. Fortunately, the performance impact of running xp_sqltrace is negligible. I haven't run a lot of simultaneous active traces or complex filters, but I've found that running one active trace in audit mode, logging results to a local text file on the server, imposes almost no performance penalty. I don't like to log results directly to a local SQL Server table because doing so can easily skew results for some tuning problems. But writing to a text file seems to have little effect if the server isn't completely I/O bound. If the server is I/O bound, write the log file to a remote network share and hope that the network isn't bottlenecked, too.

Loading SQL Trace Output for Analysis
SQL Trace output is almost useless in its raw form because it includes an overwhelming amount of information. To make my life easier, I load trace files back into a SQL Server table and run queries against that table to help me understand what activity the server was performing. I can run a whole range of queries that quickly pinpoint inefficient parts of the application. SQL Trace's value is directly related to the cleverness of the queries you write to analyze the data. I'll show you one powerful analysis query here and describe other queries in future articles.

Table 1 lists the information that a SQL Trace log file captures. The log file is a standard ASCII file with tab-delimited columns.

I recommend using the bulk copy program (bcp) to load your data into SQL Server for analysis. Bcp isn't the most user-friendly utility, but bcp'ing trace data is easy because the log file is tab delimited and you load it into a SQL Server table that exactly matches the file layout. My table looks like the one in Listing 3. If you're new to bcp, the following command will get you started:

bcp pubs..SQLTrace in c:\activity.log
-Usa -Psecret -S -c

As I mentioned earlier, you need to run xp_sqltrace with @FullText = 1 to capture the entire T-SQL batch. Unfortunately, SQL Server doesn't deal with character strings longer than 255 bytes very efficiently. To hold SQL commands longer than 255 bytes, you need to define the column with the text datatype, and each row's text value will consume at least one 2KB database page. That's almost 200MB (100,000 rows x 2KB) if you're working with a 100,000-command trace file. To deal with this situation, I often create two SQL Trace tables.

Listing 3 shows a table in which I keep only the first 255 bytes of the SQL command. When I'm working with large files, this approach saves a tremendous amount of space. Notice that the definition of the Data column is varchar(255). If I want to load the entire SQL command, I can create a second table called SQLTraceText that's identical to the first except that the Data column's definition is text. This definition lets me load the entire SQL command when I have plenty of space or I'm dealing with a small trace file. Otherwise, I can always open the trace log directly to grab the entire command. Make sure you have a good text editor if you plan to open the trace log directly, because opening a 50MB file with Notepad takes time.
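
As a rough sketch, a table like the one in Listing 3 might be defined as follows. The column names and datatypes are my assumptions based on the log-file layout in Table 1, so adjust them to match your trace output:

CREATE TABLE SQLTrace
(
    Event       varchar(12)  NULL,
    UserName    varchar(30)  NULL,
    ID          int          NULL,
    SPID        int          NULL,
    StartTime   datetime     NULL,
    EndTime     datetime     NULL,
    Application varchar(30)  NULL,
    Data        varchar(255) NULL,  -- first 255 bytes of the SQL command
    Duration    int          NULL,  -- milliseconds
    CPU         int          NULL,
    Reads       int          NULL,
    Writes      int          NULL,
    NT_Domain   varchar(30)  NULL,
    NT_User     varchar(30)  NULL,
    HostName    varchar(30)  NULL,
    HostProcess int          NULL
)

For SQLTraceText, change the Data column's datatype to text.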

If you load SQL commands exceeding 255 bytes into your SQLTrace table, you'll see bcp warning messages such as "DB-LIBRARY error: Attempt to bulk-copy an oversized row to the SQL Server." You can ignore these messages; they're telling you that bcp is truncating the Data field because the target column isn't big enough.

Analyzing SQL Trace Output
You've loaded your trace log into a SQL Server table. How do you identify performance problems? Should you spend more time tuning a query that takes 2 minutes to run but executes only once, or a query that takes 1 second to run but executes 100,000 times over the life of the application? The second choice is correct. With this priority in mind, start your trace analysis by looking for two types of queries: slow queries and frequently executed queries (even if each individual execution is very fast).

I usually start this process with a query that looks like Listing 4. This query tells me the average, maximum, and total execution time of each type of query listed, starting with the most time-consuming queries. At this stage, focus on the big picture of what the application is doing, and don't get bogged down in the low-level query-tuning details. Table 2 shows output from an example SQL Trace.
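
A minimal sketch of the kind of summary query Listing 4 describes appears below. It groups identical commands and ranks them by total time consumed; the table and column names follow the SQLTrace sketch above, so treat them as assumptions:

SELECT Data,
    COUNT(*)      AS Executions,
    AVG(Duration) AS AvgMs,
    MAX(Duration) AS MaxMs,
    SUM(Duration) AS TotalMs
FROM SQLTrace
GROUP BY Data
ORDER BY 5 DESC    -- TotalMs: biggest time consumers first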

The three most important rules of SQL Server tuning are

  1. Make the slow queries run faster.
  2. Spend the most time tuning queries that consume the most resources.
  3. Reduce the number of transactions SQL Server has to perform.

The first two rules are self-explanatory; the third rule is less obvious. If you need to return 1000 rows from a table called MyData, executing one query that returns 1000 rows is much faster than executing 1000 queries that return one row each. Surprisingly, many ODBC settings cause your applications to execute one query at a time.
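
The difference is easy to see side by side. Both forms return the same 1000 rows, but the first needs one round-trip and the second needs 1000 (the WHERE clause is illustrative of the single-row statements some cursor settings generate):

-- One statement, one round-trip: the server streams back all 1000 rows.
SELECT * FROM MyData

-- What some ODBC cursor defaults produce instead--one round-trip per row:
-- SELECT * FROM MyData WHERE KeyCol = @P1    -- executed 1000 times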

With these rules in mind, examine the results of the example trace. First, don't worry about what the SQL commands are actually doing. Instead, pay attention to the three rules of SQL Server tuning: Which queries take the most time to run? Which queries are run most frequently? Can you reduce the total number of queries executed? These questions are easy to answer using the summarized SQL Trace data and almost impossible to answer without the data. What does the trace data tell you?

The first query in the table consumes the most time (as the Summary Duration column shows), and at 12,553 milliseconds (ms) it has one of the highest average execution times. SQL Server executes this query only 80 times, but the query is slow, so you need to tune it. Perhaps you can reindex the table or eliminate a join.

The second query in the table is the second-largest time hog, but its problem is different from the first query's: The second query eats time because it runs so frequently. Common sense tells you that the solution is to run the query less often. That doesn't mean locking users out of the procedure; it means rewriting the procedure so that it requires fewer invocations. Sometimes rewriting the procedure is easy, and sometimes it's impossible without changing the entire application. In either case, the first step to solving the problem is understanding it.

The third and fourth queries go hand in hand. The third query opens a server-side ODBC cursor, and the fourth query fetches rows from it. In the fourth query

sp_cursorfetch 201788208, 1, 0, 1

the last parameter (1) tells you how many rows each invocation of the fetch returns. Remember, the third rule is to reduce the number of transactions SQL Server must process. In this case, you're fetching 3775 rows from the cursor opened in the third query, and you're using 3775 SQL calls to do it. ODBC lets you set the fetch size used for returning rows. Using a fetch size of 100 (a reasonable value for client-side caching) reduces the number of times you must call the fourth query from 3775 to 38. That adjustment alone can cut this query's total execution time in half or better.
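
With the trace's own fetch call as the model, the only change is the last parameter--each invocation now returns 100 rows instead of one:

sp_cursorfetch 201788208, 1, 0, 100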

Just the Beginning
Most people know that how efficiently a data-access application interacts with SQL Server has the greatest effect on application performance. But many people ignore this component because they don't know how to measure the efficiency of this interaction.

In this article, I've shown you SQL Trace's usefulness and provided some hints to help you start profiling your applications. Future articles will delve deeper into this topic and explore many ODBC settings, explain how to tailor DCOM data layers such as ADO, highlight the importance of using query batches to reduce the number of roundtrips to the server, and suggest queries you can use to see what your application is doing.
