
Six Easy Steps to Getting Its Best

Microsoft SQL Server version 6.0 has a more powerful database engine and supports more Windows users than its predecessors. However, figuring out how to tune it for the best possible performance is a bit of a mystery. Here are some tips to help you solve that puzzle.

  • Spend some money on RAM.
  • Set the data and procedure caches.
  • Use stored procedures.
  • Look at the application's requirements for SQL Server's temporary database.
  • Select an appropriate data-protection scheme.
  • Analyze the application to determine its primary type of processing.

You should keep in mind that no amount of tuning makes up for poor application and database design. The best way to avoid problems is to build a logical data model first and then create your applications. If you're careful, you can do these concurrently as long as the database-design team is actively involved in the early stages of application development. Once the application and logical data models are complete, you can generate the physical database from the logical data model. You can do it manually, but we recommend using one of the available tools that help automate this process. Make sure you pick a tool that provides strong support for physical database designs, such as ERwin from Logic Works.

1) Buy More RAM
Adding more RAM is one of the best ways to improve SQL Server performance. More memory enables SQL Server to cache more data and reduces the amount of physical disk I/O necessary. Disk access is the slowest part of your server and should be minimized. A minimal installation should have 64MB of RAM or more, although the DBMS can function quite well with as little as 32MB. Remember, the operating system needs part of that RAM for itself.

Physical Memory (MB)    SQL Server Memory Allocation (MB)
16                      4
24                      8
32                      16
64                      40
128                     100
256                     216
512                     464

The system-overhead requirement for a Windows NT-based system can grow as you enlarge the SQL Server configuration. As the parameters are raised, NT needs more RAM for such things as additional threads, page tables, and so on. The table above contains a guideline to help you determine how much memory to allocate to SQL Server, depending on how much physical memory is on the system.
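
As a minimal sketch of how you might apply the table's 64MB row from isql: SQL Server 6.0 expresses the memory option in 2KB units, so 20,480 units is 40MB, and the new value takes effect only after you restart the server.

  sp_configure 'memory', 20480    -- 20,480 2KB units = 40MB
  go
  reconfigure
  go
  -- stop and restart SQL Server so the new setting takes effect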

Due to the expense of additional memory, you should carefully analyze your system requirements and perform user- and data-volume simulations. The results of these tests will determine how much RAM you need for SQL Server to optimize both the application speed and the server hardware cost.

Hardware limitations may restrict your ability to expand memory. For example, consider a system with four memory-expansion slots, each of which accepts, at most, a 16MB single inline memory module (SIMM). This system can be expanded to a maximum of 64MB of RAM.

While such limitations are normally considered during system design before the hardware is purchased, they may sneak up on you if, for example, your customer base suddenly increases.

2) Data and Procedure Cache
There are two types of cache memory for use with SQL Server: data and procedure. The procedure cache holds the optimized query plans for the most recently executed stored procedures, including triggers and system stored procedures. The data cache contains pages (2KB blocks of data) from the most recently accessed objects--mostly tables and indexes.

As the size of the data cache increases, the speed of data retrieval also increases, especially on subsequent retrievals. Cache is allocated to data and procedures depending on the application. Applications that use triggers need more procedure cache--triggers are treated as stored procedures for cache purposes. The default for SQL Server is 30% of cache for procedures and 70% for data. However, this may not be sufficient for your application.

To estimate the size of the procedure cache, you should run the system command dbcc memusage to determine the size of your largest plan. Multiply this size by the maximum number of concurrent users you have and then add 25% to determine the optimal size for your procedure cache. For example, if your largest plan uses 300KB and the application has a maximum of 30 concurrent users, then the procedure cache should be 11.25MB in size:
30 x 300KB x 125% = 11.25MB.
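
A sketch of the procedure from isql: dbcc memusage reports the largest plans, and the procedure cache is then set with sp_configure as a percentage of available cache memory. The 30% figure below is illustrative only; derive your own percentage from the formula above and your server's total cache size.

  dbcc memusage     -- reports buffer and procedure cache usage, largest plans
  go
  -- largest plan 300KB x 30 users x 1.25 = 11.25MB of procedure cache needed;
  -- against roughly 37MB of total cache, that works out to about 30%
  sp_configure 'procedure cache', 30
  go
  reconfigure
  go
  -- restart the server for the change to take effect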

To illustrate the advantages of caching, let's look at an example. A mortgage-loan origination application can potentially retrieve several hundred rows of loan-pricing data using a complex SQL statement with multiple table joins. The initial retrieval takes 17 seconds. Subsequent retrievals for other customers take less than 3 seconds each since the application data is already in the cache. The key is that the loan-pricing data is basically fixed, even though multiple customers are being served.

The default installation size for SQL Server memory, including all cache plus kernel and server structures, is 8MB for a system with 32MB of RAM or less (16MB for systems with more than 32MB of RAM). If you have 64MB available, you should increase the cache to at least 40MB. Larger is better in this case because after the cache fills up, pages are discarded on a least recently used (LRU) basis. Therefore, you want to have enough cache to handle multiple SQL statements. If the cache is too small, it has to empty every time a new SQL statement executes, and the benefit is lost.

While the system is under a typical load, use the SQL Performance Monitor to check the SQL Server cache-hit ratio. If it's over 90%, adding more memory won't help. Additional memory is used mainly for SQL Server data cache to increase the hit ratio. In this case, the hit ratio is already high, and the maximum improvement possible is quite small. If the hit ratio is less than 90%, adding more memory may improve it and, thus, performance.
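
If you prefer to check from isql rather than Performance Monitor, dbcc sqlperf can report the same statistic. A sketch; the exact output format varies by 6.x release, so verify it against your documentation.

  dbcc sqlperf (lrustats)
  go
  -- look for the Cache Hit Ratio row in the output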

When you're searching for memory bottlenecks, you should also look at SQL Server I/O Lazy Writes per second. This counter measures how often the Lazy Writer process flushes dirty pages to disk to make room in the cache. The process ensures that the number of free data buffers stays higher than a predefined threshold. More memory may improve performance by reducing unnecessary stress on your I/O subsystem.

The kernel and server structures consist of configurable options: user connections (37KB per user connection), open databases (1KB per open database), open objects (70 bytes per object), and locks (32 bytes per lock). As these options increase in number, the caches suffer. If 10 user connections are added at 37KB each, the memory available for the caches is reduced by 370KB.
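
Because of this trade-off, set these options no higher than your application actually needs. A minimal sketch, assuming your application never exceeds 50 simultaneous connections:

  sp_configure 'user connections', 50   -- 50 x 37KB = ~1.85MB reserved
  go
  reconfigure
  go
  -- restart the server to apply the change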

3) Use Stored Procedures
There are many advantages to using stored procedures, especially for frequently used SQL queries. The query in a stored procedure is stored in its parsed form, so it doesn't have to be re-parsed and syntax-checked on every execution the way an interactive SQL statement does. Therefore, after it's been loaded from disk, it runs faster.

Stored procedures are reusable but not reentrant. If two users execute the same procedure simultaneously, two copies of the plan are loaded into memory. Since procedure query plans are built and optimized the first time they are read from disk, there's no guarantee that the plan is optimal every time the procedure is invoked. If your stored procedures accept parameter lists that may require different forms of optimization, consider using exec with recompile.
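
A sketch of both forms, using a hypothetical get_orders procedure and orders table: creating the procedure with recompile forces a new plan on every execution, while exec ... with recompile rebuilds the plan for one invocation only.

  -- hypothetical procedure whose best plan depends on the parameter
  create procedure get_orders @region varchar(30) with recompile
  as
      select * from orders where region = @region
  go

  -- or recompile just this one invocation:
  exec get_orders 'Northwest' with recompile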

Since stored procedures reside on the server, using them reduces network traffic; only the requests must be sent across the network. For example, suppose you need to retrieve and examine 50 rows in order to modify three of them. Without a stored procedure, a select must be executed to retrieve the 50 rows; the result set must be sent from the server to the client; the rows must be examined; three update statements must be sent to the server; and the results of the updates must be returned to the client. With a stored procedure, all this takes place on the server, and only the result is sent to the client.
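
As a sketch of the same idea, the examine-and-modify logic can live in a hypothetical adjust_rates procedure against a hypothetical loan_pricing table, so only the exec request and a status value cross the network:

  -- hypothetical: the scan and update happen entirely on the server
  create procedure adjust_rates @cutoff float, @delta float
  as
      update loan_pricing
      set rate = rate + @delta
      where rate < @cutoff
      return @@rowcount    -- only this status travels back to the client
  go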

You can also use stored procedures to guarantee referential integrity--with triggers--by ensuring that no child rows are added without a parent row and no parent row is deleted without first deleting its child rows. Thus, you don't need additional coding on the client to ensure database integrity.
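
A sketch of the delete side, using hypothetical customer and orders tables joined on cust_id: the trigger blocks the delete if any child rows remain.

  create trigger tr_customer_del on customer for delete
  as
  if exists (select * from orders, deleted
             where orders.cust_id = deleted.cust_id)
  begin
      raiserror ('Delete this customer''s orders first.', 16, 1)
      rollback transaction
  end
  go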

In terms of performance, the impact of triggers is low. The time involved to run a trigger is spent mostly to reference other tables--either in memory or on the database device. Tables that the user is issuing inserts or deletes against are always in memory and, thus, don't significantly add to processing time. The locations of any other tables referenced in the trigger determine how long the operation takes.

Stored procedures also simplify security administration. Users are granted privileges to execute specific stored procedures, rather than to perform specific operations, such as delete, insert, update, and select, on individual tables or views. In addition, the database is more secure since users can only execute stored procedures, not manipulate data in tables or views.
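
For example, reusing the hypothetical names from the earlier sketch, a loan_officers group might be allowed to run adjust_rates without holding any rights on the underlying table:

  grant execute on adjust_rates to loan_officers
  go
  revoke all on loan_pricing from loan_officers
  go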

Stored procedures can also help to maintain code modularity. For example, there may be several places in an application where you need to update a customer list. If you create a stored procedure to perform the update, you just execute it each time you need to update the list. You don't have to code separate but equivalent queries in various locations.

4) Temporary Database Size
SQL Server uses a temporary database, tempdb, to store temporary tables and provide working storage. (In its showplan output, SQL Server refers to this working storage as a worktable.) The default size of tempdb is 2MB, and it resides on disk, not in RAM. Chances are that's not enough for your application.

The most significant factors in determining the space required for tempdb are the size of the temporary tables; the activity on them (which fills up the tempdb log); the size of any sorts, subqueries, and aggregates that use the group by clause; and the number of open cursors. In addition, any query that uses the distinct clause automatically uses tempdb. By default, tempdb is created on the master device.

You can add more space on the master device or on any other properly initialized device. You can also use the showplan option to determine whether a query uses tempdb. But the easiest way to determine tempdb's proper size is to observe its usage during normal system operations.
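
When observation shows tempdb is too small, you can initialize a device and expand the database onto it. A sketch with hypothetical device names, paths, and sizes; note that disk init takes its size in 2KB pages, while alter database takes megabytes.

  disk init
      name = 'tempdev',
      physname = 'c:\sql\data\tempdev.dat',
      vdevno = 5,            -- any unused virtual device number
      size = 25600           -- 25,600 2KB pages = 50MB
  go
  alter database tempdb on tempdev = 50    -- expand tempdb by 50MB
  go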

Putting tempdb in RAM can significantly improve overall server performance. Queries that perform large sorts often benefit because keeping tempdb in RAM eliminates many disk accesses. Don't consider this option, however, unless you have at least 64MB of RAM. If you have less, SQL Server can probably manage the data cache better than you can.
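
If you do have the memory, the option is set through sp_configure. A sketch; we believe the option name is 'tempdb in ram (MB)' in the 6.x releases, but verify it against sp_configure's output on your server.

  sp_configure 'tempdb in ram (MB)', 10   -- place a 10MB tempdb in RAM
  go
  reconfigure
  go
  -- restart the server; set the option back to 0 to return tempdb to disk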

5) Data Protection
The level and frequency of data protection you choose for your application also affect performance. The mechanisms and procedures you use to prevent losing data can exist in either hardware or software. We recommend hardware protection since it provides better performance.

6) Application Analysis
An application that has lots of users adding to a database (e.g., on-line transaction processing, or OLTP) should be tuned in a manner different from one where managers query the database to analyze production and status (e.g., a decision-support system, or DSS). A DSS-type application doesn't need frequent backups since data isn't continuously being added to it. An OLTP-type application, on the other hand, requires frequent backups to avoid the possibility of losing large quantities of data.

One factor that affects the size of a database is the number of indexes it uses. OLTP applications don't need as many indexes because data is inserted and modified more often than it is read. You may want to create a clustered index for those OLTP tables subject to a high volume of insert and update activity. If no clustered index exists, new rows are inserted on the last page of the table. (Remember, an update is really a delete followed by an insert.) Contention for the last page can bring your server to its knees!

A clustered index forces SQL Server to insert all data in index-key order, eliminating the "last-page hot-spot" problem. Distributing inserts across the whole table has a positive impact on the transaction volume your application can support. DSS applications, however, can require many indexes per table due to intensive database queries. Without indexes, you can spend a lot of time waiting to display the result set.
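
A sketch, again with hypothetical names: clustering the orders table on a key that incoming rows don't share spreads inserts across the table instead of piling them onto the last page.

  create clustered index ix_orders_cust
      on orders (cust_id, order_date)
  go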

Another difference between OLTP and DSS applications is in how the data is stored on an index page. For OLTP, you should set the fill factor to 50 to keep 50% of each index page empty, leaving room for inserts. For read-only DSS, you should set the fill factor to 100 so that the application uses 100% of each 2KB index page; this lets each retrieval process fewer pages. You should also set the fill factor to 100 for any index on a table containing read-only data, such as a code table, regardless of the application. The fill factor takes effect only when the index is built, so you should rebuild indexes frequently for an OLTP application.
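
A sketch of both settings, using hypothetical table and index names:

  -- OLTP: leave half of each index page free to absorb inserts
  create index ix_orders_status
      on orders (status)
      with fillfactor = 50
  go

  -- read-only code table: pack each page full
  create index ix_codes_code
      on code_table (code)
      with fillfactor = 100
  go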

Another thing that improves an OLTP application's performance is to update its statistics frequently. Regardless of when you run your backups, you should run the Update Statistics function on all user tables at least once a week. You should run it even more frequently on tables that regularly have a lot of activity. Tables with mass insert, delete, update, and bulk-copy (BCP) activity are prime candidates. If a query suddenly begins to run slowly, this is the first step to take.

You could write a stored procedure to run Update Statistics on all user-defined tables, updating the statistics on the distribution of key values in the indexes. This would give the SQL Server optimizer better information about which indexes to use when it processes a query, and it would make queries on frequently modified tables run more efficiently.
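
A minimal sketch of such a procedure, with a hypothetical name: it walks the user tables listed in sysobjects with a cursor and rebuilds each table's statistics.

  create procedure upd_all_stats
  as
  declare @tbl varchar(30)
  declare tbls cursor for
      select name from sysobjects where type = 'U'
  open tbls
  fetch next from tbls into @tbl
  while @@fetch_status = 0
  begin
      exec ('update statistics ' + @tbl)   -- refresh index-key distribution
      fetch next from tbls into @tbl
  end
  close tbls
  deallocate tbls
  go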

Most Significant Factors
Analyzing your application and running SQL Server utilities can greatly improve your application's performance. The most significant factors are the amount of RAM on your server, your choice of operating system, how memory is allocated between the caches, your use of stored procedures, the size of your temporary database, your data-protection scheme, and setting your application up appropriately for OLTP or DSS.

Contact Info
Logic Works * 609-252-1177
Microsoft * 206-882-8080