
The 90:10 Rule for SQL Server Performance

10 tuning tips for 90% of your performance gains

Tuning Microsoft SQL Server 2000 and SQL Server 7.0 to boost performance can be hard work, but in most cases, you can get major benefits from expending just a little effort. It's the old 90:10 rule: You can get 90 percent improvement with only 10 percent effort, but realizing that final 10 percent performance gain will take 90 percent of your tuning efforts.

This 90:10 rule doesn't apply to all database products or even to earlier versions of SQL Server. To achieve reasonable performance, some products require you to set dozens—or even hundreds—of server-level parameters and numerous other options in your SQL code. By contrast, Microsoft made SQL Server 2000 and SQL Server 7.0 self-tuning, and these products give you reasonable performance right out of the box.

But to attain a higher level of performance than what you get on average out of the box, you only need to give SQL Server a little attention. Applying the following 10 tips will help you achieve that initial 90 percent performance gain. If you find that you need more information as you put these tips into practice, check out the resources in the sidebar "Knowing Is 9/10 of the Battle."

Tip 1: Don't skimp on hardware
I don't usually recommend hardware improvements first on a list of tuning tips because I don't want to mislead you into thinking you can simply throw hardware at performance problems. However, in the case of SQL Server performance, consider hardware first.

If you already use a decent system, upgrading your hardware will rarely bring more than a 10 percent overall performance improvement. However, if you're running a SQL Server-based application on a server to which several hundred users simultaneously connect, and the server has only one physical hard disk and the absolute minimum of 64MB of RAM, then simply increasing RAM to the recommended minimum of 128MB will bring a more drastic performance improvement.

Beyond 128MB of RAM, you ideally need another 10MB for each 10 simultaneous user connections, and you need enough additional RAM to store all user data, system data, and indexes. I recommend choosing a disk configuration with which you store user data (.mdf and .ndf) files and log (.ldf) files on separate physical disks that have separate controllers. Store the user data files on the best RAID system your budget allows. For processing power, purchase two of the fastest processors you can afford. These configurations are the very skimpiest you should consider.

Tip 2: Don't overconfigure
Microsoft designed both SQL Server 2000 and SQL Server 7.0 to self-tune. For example, the SQL Server engine can determine optimal values for memory utilization, number of locks to allow, and checkpoint frequency.

Only consider changing the out-of-the-box configuration options that don't affect performance. These nonperformance configurations include user options and the two-digit year cutoff option (the user options bitmap indicates which options you want to enable for each user connection; the two-digit year cutoff option controls how SQL Server interprets a two-digit year value).

Microsoft made the performance-related options available for configuration in the rare case in which you need to make an adjustment to them. However, your best bet is to let SQL Server's configuration options continue self-tuning.

If you use SQL Server 7.0, the max async I/O parameter might be an exception to this recommendation. You configure max async I/O for the level of sophistication and number of controllers in your I/O system. The max async I/O value determines the maximum number of outstanding asynchronous I/O requests that the server can issue to any file. If a database spans multiple files, the value applies to each file.

The max async I/O default setting of 32—only 32 reads and 32 writes can be outstanding per file—is an optimum value for many systems. Read SQL Server Books Online (BOL) to determine whether you need to change the default value for your system. SQL Server 2000 doesn't include the max async I/O parameter but can determine the optimum value internally.
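If BOL convinces you that your I/O subsystem warrants a change, the setting is a server-level option that you adjust through sp_configure. The following is only a sketch of the general pattern on SQL Server 7.0; the value 64 is purely illustrative, not a recommendation.

-- SQL Server 7.0 only (the option doesn't exist in SQL Server 2000)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Illustrative value; base the real number on BOL's guidance for your controllers
EXEC sp_configure 'max async IO', 64
RECONFIGURE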

Tip 3: Take time for design
In this age of rapid application development (RAD), the popular goal of quick initial project implementation might tempt you to sacrifice high-quality relational database design. If you yield, performance suffers. Poor design is a difficult problem to fix because fixing design frequently requires changing much of the written and tested code.

For a well-designed database, start by creating a normalized model in at least the third normal form. This practice minimizes redundancy and reduces overall data volumes. You can then systematically denormalize the design, documenting each break from the normalized form.

A denormalized design is different from an unnormalized design. A denormalized design introduces redundancy for specific performance reasons. For example, if you consistently need to look up a customer's outstanding orders by the customer's name, and one table stores the customer ID number and the customer's name whereas another table stores the customer ID number and the order status, SQL Server needs to perform a join of these tables to retrieve names of customers with outstanding orders. In some cases, joins can be expensive. A denormalized model might add the customer name to the table that contains the order status and customer ID number and eliminate the need for a join.
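To make the trade-off concrete, here's a minimal sketch of that customer example. The table and column names (Customers, Orders, CustName, and so on) are hypothetical, not part of any sample database.

-- Normalized: finding names of customers with outstanding orders requires a join
SELECT c.CustName
FROM Customers c
JOIN Orders o ON o.CustID = c.CustID
WHERE o.OrderStatus = 'OUTSTANDING'

-- Denormalized: CustName is copied into Orders, so no join is needed
SELECT CustName
FROM Orders
WHERE OrderStatus = 'OUTSTANDING'

The cost of the shortcut is that a change to a customer's name now has to be applied in two places, which is why you should document every such break from normal form.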

Tip 4: Create useful indexes
Without the help of useful indexes, SQL Server must search every row in a table to find the data you need. If you're grouping or joining data from multiple tables, SQL Server might even need to make multiple passes through the tables to get your results. Because reading from disk is one of the most expensive actions SQL Server performs, reducing the number of required disk reads is one of the most effective tuning techniques you can employ.

If you create useful indexes, you can see performance improve by orders of magnitude, rather than by a few percentage points. For example, without an index, SQL Server might need to read all 10,000 pages in a table. If an index reduces the number of necessary page reads to 10, that's roughly a 100,000 percent improvement in the number of pages SQL Server has to read.
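Creating an index is a one-statement operation. For example, if your queries against the pubs database's titles table frequently filter on the type column, a nonclustered index along these lines would let SQL Server read only the relevant pages instead of scanning the whole table (the index name is arbitrary):

CREATE NONCLUSTERED INDEX idx_titles_type
ON titles (type)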

A thorough knowledge of index architecture and the SQL Server query optimizer will help you create the best possible indexes, but until you reach that knowledge level, you can use SQL Server's Index Tuning Wizard to create useful indexes. Open this tool from SQL Server Enterprise Manager by clicking the Wizards button in the toolbar and looking under Management Wizards.

Before the wizard makes a set of index recommendations, it needs to know how you'll be accessing data. The best way to collect this information is with SQL Server Profiler. During a few hours of peak usage, capture the SQL command batches that your client applications send to your SQL server. You can use this information to tell the wizard how client applications access a table.

If your application isn't in production yet, you can provide the Index Tuning Wizard with a set of representative SQL statements that access the tables you want to tune. You can use Query Analyzer to create this set. Simply input the names of the stored procedures you'll be running, and make your best guesses about other ad hoc SQL statements that users will run.

Tip 5: Use SQL effectively
SQL is a set processing language, not a row-at-a-time processing language. T-SQL, Microsoft's dialect of the SQL language, can use server cursors to access one row at a time; however, most solutions that use server cursors will be orders of magnitude slower than solutions that use SELECT statements and UPDATE statements to perform the equivalent task. The more SQL programming experience you have, the more comfortable you'll be using the SQL language most effectively. Taking advantage of features such as subqueries, derived tables, and CASE expressions to manipulate sets of rows will speed your solutions and help you maximize SQL Server performance.

For example, suppose a table contains a row for each product in your inventory, and another table contains a row for the quantity of each sale of that product. You want to denormalize the database and store the sum of each product's sales in the product inventory table. To generate these sums, you could use a cursor and step through the product table one row at a time. For each row, you could then find all matching rows in the sales table, add up the quantity values, and use that sum to update the product inventory table. In this example, using server cursors to collect figures for total sales is possible but unbelievably inefficient.
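A rough sketch of that cursor-based approach, using the pubs sample tables, shows how much machinery the row-at-a-time solution needs:

DECLARE @title_id varchar(6), @total int
DECLARE title_cur CURSOR FOR
	SELECT title_id FROM titles
OPEN title_cur
FETCH NEXT FROM title_cur INTO @title_id
WHILE @@FETCH_STATUS = 0
BEGIN
	-- One query and one update per product row
	SELECT @total = SUM(qty) FROM sales WHERE title_id = @title_id
	UPDATE titles SET ytd_sales = @total WHERE title_id = @title_id
	FETCH NEXT FROM title_cur INTO @title_id
END
CLOSE title_cur
DEALLOCATE title_cur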

You can use the following UPDATE statement and correlated subquery to perform the same task. This statement uses the titles table in the pubs database as the products table, and for each title, the statement adds the values in the sales table's qty field.

UPDATE titles
SET ytd_sales =
	(SELECT SUM(qty) FROM sales
	WHERE sales.title_id = titles.title_id)

Tip 6: Learn T-SQL tricks
Microsoft's T-SQL is an enhanced version of standard ANSI-SQL. Taking advantage of these enhancements will help you improve performance.

For example, suppose you want to put all products on sale and base each product's sale price on the quantity sold in the past year. You want the sale price to be 25 percent off the current price for products that sold fewer than 3000 units; you want the reduction to be 20 percent for products that sold between 3000 and 10,000 units; and you want a 10 percent discount for products that sold more than 10,000 units. You might think you need to issue an UPDATE statement with the appropriate discount values after you use a cursor to look at the products' rows individually for quantity-sold information. However, the T-SQL CASE expression lets you use one statement to calculate appropriate discounts.

The following sample UPDATE statement uses the pubs database's titles table, which has a price field that the statement will update and a ytd_sales field that stores the year-to-date sales quantity. (If you've already run Tip 5's sample statement, this query won't produce the results you'd expect from the original data; that sample will have updated ytd_sales to a different set of values.)

UPDATE titles
SET price = CASE
	WHEN ytd_sales < 3000 THEN price * 0.75
	WHEN ytd_sales BETWEEN 3000 AND 10000 THEN price * 0.80
	WHEN ytd_sales > 10000 THEN price * 0.90
END
WHERE price IS NOT NULL

Other T-SQL features that improve query performance are the TOP expression, when you use it with ORDER BY; indexed views (SQL Server 2000 only); and partitioned views.
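For instance, TOP combined with ORDER BY returns only the rows you ask for rather than the entire sorted result. A small example against the pubs titles table:

SELECT TOP 5 title, ytd_sales
FROM titles
ORDER BY ytd_sales DESC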

Tip 7: Understand locking
Locking and blocking problems often cause performance degradation in multiuser systems. I advise against forcing SQL Server to lock data in the ways you might think SQL Server should. Instead, I recommend that you increase your understanding of how SQL Server typically locks data, how much data it usually locks, and how long it holds the locks. After you understand how SQL Server locking and blocking work, you can write your applications to work with SQL Server, rather than against it.

For example, after your application modifies data in a transaction, SQL Server locks that data and makes it unavailable to any other processes until you either commit or roll back your transaction. If you take a long time to issue a commit or rollback command, the data will be locked a long time. Therefore, I advise you to keep your transactions as short as possible. I also advise against allowing user interaction in the middle of your transaction.
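In practice, that means a transaction should contain just the data modifications, along the lines of this minimal sketch against the pubs titles table:

BEGIN TRAN
UPDATE titles
SET price = price * 1.10
WHERE type = 'business'
COMMIT TRAN
-- Prompt the user, format output, and so on only after the COMMIT or ROLLBACK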

By default, SQL Server holds exclusive locks—acquired when you insert, update, and delete data—until the end of a transaction. SQL Server holds shared locks—acquired when you select data—only until you finish reading the data that you selected.

You can change your transaction isolation level to cause SQL Server to hold shared locks until the end of a transaction, which means that after you retrieve and read data, no one else can modify it. Changing transaction isolation levels might sound like a good idea for keeping data for only your use. However, these changes aren't a good idea on multiuser systems in which many users need to access the same data. I recommend that you keep your transaction isolation level at Read Committed (the default) and change the level only when you can't accomplish your performance goals any other way.
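If you do determine that you need to hold shared locks to the end of a transaction, the change is a single SET statement per connection, for example:

-- Default: shared locks are released as soon as the data is read
SET TRANSACTION ISOLATION LEVEL READ COMMITTED

-- Holds shared locks until the transaction ends; use only when you must
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ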

Tip 8: Minimize recompilations
The first version of SQL Server could store and reuse the execution plan of a stored procedure that your application executes multiple times. However, until SQL Server 7.0, this feature didn't provide big savings. For many queries, the cost of generating the query plan through compilation (compilation also includes query optimization) was only a fraction of the cost of executing the query; you hardly noticed the millisecond or two you saved by not creating a new plan.

Microsoft rewrote the product's query optimizer for SQL Server 7.0 to include dozens of new query processing techniques. As a result of the new features, the query optimizer typically spends more time than earlier versions producing an execution plan. This extra time makes the query plan reuse feature more valuable for saving time.

SQL Server 2000 and SQL Server 7.0 also offer a mechanism for saving the execution plans of ad hoc queries. This feature can be a help when a stored procedure isn't available. This capability is automatic, but not guaranteed. The mechanism follows a strict set of rules and is less predictable than the reuse of stored procedure plans. Therefore, I recommend that you write stored procedures for all your SQL code wherever you possibly can.

Using precompiled plans can save you time, but occasionally you'll want to force a recompilation, and sometimes SQL Server will decide independently to recompile the plan for a procedure. Profiler can tell you when recompilations occur, and the System Monitor tool can tell you how often these recompilations have occurred.
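When you do want to force a recompilation, T-SQL gives you explicit options. In this sketch, usp_GetTitleSales is a hypothetical procedure name:

-- Build a fresh plan for this execution only
EXEC usp_GetTitleSales WITH RECOMPILE

-- Mark the procedure so SQL Server generates a new plan at its next execution
EXEC sp_recompile 'usp_GetTitleSales'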

Tip 9: Program applications intelligently
The more you, as a client programmer, know about how SQL Server works, the better the code you can write. For example, you'll know not to allow user interaction in the middle of a transaction, as Tip 7 also warns against.

Another poor programming practice is writing your client application to begin a transaction, send an update statement to SQL Server, then bring up a message box asking users whether they want to continue. In this case, SQL Server would hold any acquired locks until the user, who might have stepped out to a long lunch or gone home for the day, returns and clicks OK in the message box.

Tip 5 warned against using server cursors. However, client cursors are a different topic. It's acceptable to program your client application to process, one row at a time, a result set that SQL Server generated with a set-based operation. Still, you need to read your API's documentation to get the best performance from the many variations of client cursors.

One variation of a client cursor is the Fast Forward-Only cursor, which is useful for fetching data sequentially for one-time, read-only purposes. This cursor saves two round-trips to the server: SQL Server returns the first row when the cursor is opened and closes the cursor automatically when the last row is fetched. Even if you're fetching only a few rows, if the section of code that uses the Fast Forward-Only cursor runs frequently, those two saved round-trips add up.

Tip 10: Stay in touch
If these tips don't address your particular tuning concerns, familiarize yourself with the myriad sources of free public support, in which many longtime SQL Server experts read and respond to specific questions from SQL Server users. I recommend Microsoft's public newsgroups. You can use any newsreader software (e.g., Microsoft Outlook Express) to search in the msnews.microsoft.com server for the newsgroups that have sqlserver in their names.

Tips of Icebergs
These tips are just that—tips—like the tips of icebergs, which show only 10 percent of the total berg. At the very least, the tips show you what's available for tuning SQL Server. Now that you know what you can tune, you can look for more information about how to tune. Before long, you might even be up to exerting that 90 percent effort to tune SQL Server to reach that last 10 percent for maximum performance.
