Reusing Query Plans

Learn how the optimizer works--and when plan reuse doesn’t work

I've worked with SQL Server, in its Sybase and Microsoft incarnations, for almost 18 years. From the first, I've heard that one of the most important reasons for using stored procedures is that they're precompiled. But, as I mentioned last month in "Tracking Recompiles," before SQL Server 7.0, the time savings you gained were often insignificant.

The performance benefits of plan reuse grew in SQL Server 7.0 and again in SQL Server 2000: as Microsoft made the optimizer much more complex, compilation became more expensive, so skipping it by reusing a plan can often save a lot of time. And using stored procedures has other benefits, such as modularity and reusability of code, as well as reduced network traffic.

Since you might well save compilation time when using stored procedures and gain other benefits besides, you might think you should always put your code into stored procedures. What could it hurt? The answer is that it could hurt your performance severely if your code is in a procedure that reuses its execution plan inappropriately. The more you understand about the process of query tuning and optimization, the better you'll be able to determine whether reusing a stored procedure plan will benefit your performance.

The most important fact about reusing the plan for a stored procedure is that SQL Server develops (optimizes and compiles) the plan the first time the procedure is called, when it has no other suitable plans in cache. This compilation can happen the first time the procedure is called after you create or alter it, after SQL Server restarts and the cache is empty, or after you remove all plans from cache. As I demonstrated last month, you can remove all plans from cache by using DBCC FREEPROCCACHE, or you can remove all plans from one database with DBCC FLUSHPROCINDB. Later, I tell you about some other ways that existing plans can be removed from cache.
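For reference, here's what both commands look like in use. DBCC FLUSHPROCINDB is undocumented and takes a database ID rather than a name, so you look up the ID first:

-- Remove all plans from cache
DBCC FREEPROCCACHE

-- Remove all plans for the current database
DECLARE @dbid int
SET @dbid = DB_ID()
DBCC FLUSHPROCINDB(@dbid)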

My main example uses the same table that I used last month, a copy of the Orders table from the Northwind database. Run the code that Listing 1 shows to create a new database, copy the Orders table into that database, and create a stored procedure that accesses the Orders table. The script also builds a nonclustered index on the CustomerID column of the Orders table.
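In case you don't have the downloadable code handy, a minimal sketch of what Listing 1 does might look like the following. The database name PlanTest and the parameter name @custID are placeholders of my own; the getorders and CustID_index names come from the examples later in this article:

CREATE DATABASE PlanTest
GO
USE PlanTest
GO
-- Copy the Orders table from Northwind
SELECT * INTO orders FROM Northwind.dbo.Orders
GO
-- Build a nonclustered index on the CustomerID column
CREATE NONCLUSTERED INDEX CustID_index ON orders(customerID)
GO
-- The procedure contains a single SELECT statement
CREATE PROC getorders @custID nchar(5)
AS
SELECT * FROM orders WHERE customerID = @custID
GO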

Remember that a plan for a procedure includes a plan for every data-manipulation language (DML) statement in the procedure. Most of your stored procedures probably include many SQL statements, and SQL Server must optimize and compile them every time it compiles the procedure. However, for my example, I use a simple stored procedure that has only one SELECT statement, which Listing 1 creates.

Rather than looking at the query plan for the procedure, let's look at the procedure's performance on execution. Before running any statements, you need to set STATISTICS IO ON. This option reports how many page reads SQL Server performed while executing the query. The output includes values for both logical and physical reads, but the primary measure of how well a query is tuned is the number of logical reads. That value indicates how many pages SQL Server has to access. Physical reads tell how many of those pages SQL Server had to fetch from disk, so you'll never see more physical reads than logical reads. Reading from disk (a physical read) is much more expensive than reading a page that's already in cache (a logical read), but you don't always have control over the physical reads. The number of reads from disk is affected by SQL Server's available memory and by whether other queries are accessing the same data. Your goal in tuning is to minimize the logical reads because doing so guarantees a small upper limit on physical reads.
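To enable the option for your session, run

SET STATISTICS IO ON

After that, each query returns a message resembling the following (the numbers, of course, depend on the query):

Table 'orders'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0.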

Before you run the procedure, let's look at the performance for the statement inside the procedure. First, execute the statement with a highly selective value in the WHERE clause:

SELECT * FROM orders
WHERE customerID = N'BOLID'

The CustomerID BOLID occurs only three times, and the STATISTICS IO report tells you that SQL Server needed five logical reads to access these three rows. SQL Server uses the nonclustered index, reading two index pages and then one data page for each of the three rows.

Next, execute the statement with a much less selective value in the WHERE clause:

SELECT * FROM orders
WHERE customerID = N'QUICK'

The CustomerID QUICK occurs 28 times, and the STATISTICS IO report tells you that SQL Server needed 21 logical reads to access these rows. It performs a table scan and reads every page of the table.

If you think that SQL Server should have used the nonclustered index for the preceding query, you can check whether you're right by using index hints. Rerun the query and force SQL Server to use that index:

SELECT * FROM orders WITH (index = CustID_index )
WHERE customerID = N'QUICK'

STATISTICS IO shows that SQL Server needed to perform 30 logical reads when it used the nonclustered index, as opposed to only 21 if it scanned the table. Forcing the index gave worse performance than a table scan did.

Note that the number of logical reads can be greater than the number of pages in the table if SQL Server must reread a page. If your system has sufficient memory, chances are good that a subsequent read of the same page will be from cache, but there's no guarantee of that. Each logical read could potentially correspond to a read from the disk.

Using the same connection to SQL Server in which STATISTICS IO is on, run the getorders procedure. First, clear out the cache to make sure no plans for this procedure are in cache, then pass in a selective parameter, then one that's not so selective:

DBCC FREEPROCCACHE
EXEC getorders N'BOLID'
EXEC getorders N'QUICK'

The first execution with the selective parameter ran as expected, with five logical reads. SQL Server created a plan that uses the nonclustered index and saved that plan. The second execution used the existing plan and the nonclustered index, resulting in suboptimal performance. It needed 30 logical reads, which is more than a table scan would take.
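If you want to see the reuse for yourself, you can peek at the plan cache through syscacheobjects (which I'll come back to shortly); after the second execution, the usecounts column for the procedure's plan reflects both calls:

SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%getorders%'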

Now let's run the same test again, but pass the parameters in the opposite order:

DBCC FREEPROCCACHE
EXEC getorders N'QUICK'
EXEC getorders N'BOLID'

This time, the non-selective first parameter caused SQL Server to create a plan that used a table scan, so the second time we called the procedure, it again used a table scan. Which is better? Having SQL Server use a nonclustered index when a table scan would be more efficient, or using a table scan when a useful nonclustered index is available?

Before we answer that question, let's look at an extreme situation. First, update the Orders table so that many more rows have a certain value, then update the table's statistics:

UPDATE orders
SET customerID = N'RANCH'
WHERE customerID > N'RANCH'
UPDATE STATISTICS orders

Now let's run the stored procedure twice, first with a selective argument, and then with the extremely non-selective argument:

DBCC FREEPROCCACHE
EXEC getorders N'BOLID'
EXEC getorders N'RANCH'

Again, the first execution of the procedure resulted in a plan that uses the nonclustered index and needs five logical reads. Using that plan for the second execution was disastrous, requiring 262 logical reads although the table has only 21 pages. Obviously, SQL Server reads many of the pages more than once and might still have them in cache on subsequent reads, but you can't guarantee it will happen on a busy server. All 262 logical reads might become physical reads.

The problem with reusing a precompiled stored procedure plan is that sometimes the stored plan isn't the best plan for all subsequent executions. This is frequently the case when different parameter values require different optimal query plans. So, what are your options? What can you do to get good performance from this procedure no matter what parameter values you pass in?

One possibility is to change the index on CustomerID from nonclustered to clustered. But don't define a clustered index for a table on the basis of a single query. You get only one clustered index per table, so you need to carefully consider which column should be the key. In this case, there might be a better choice than clustering on CustomerID.
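If you did conclude that CustomerID is the right clustering key, the change itself is simple: drop the nonclustered index that Listing 1 built and re-create it as clustered:

DROP INDEX orders.CustID_index
GO
CREATE CLUSTERED INDEX CustID_index ON orders(customerID)
GO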

Alternatively, you could recreate the procedure, using the WITH RECOMPILE option as Listing 2 shows. Creating the procedure WITH RECOMPILE means that every time the procedure runs, SQL Server will create a new plan based on the actual parameter. Because SQL Server knows it's going to generate a new plan for every execution, it won't save any plans. If you examine the procedure cache by looking in syscacheobjects, you'll see that as soon as SQL Server recreates the procedure, it removes all existing plans for the procedure.
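If you don't have Listing 2 handy, the change amounts to adding one clause to the procedure definition; here's a sketch, assuming the same getorders definition as in my earlier sketch of Listing 1:

DROP PROC getorders
GO
CREATE PROC getorders @custID nchar(5)
WITH RECOMPILE
AS
SELECT * FROM orders WHERE customerID = @custID
GO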

Getting a new plan every time you run a procedure improves performance in some cases, but it means that you'll never be able to reuse a plan, even when it might have been good for performance. In addition, WITH RECOMPILE applies to the entire procedure. In a stored procedure with dozens or even hundreds of statements, you might have only one statement that gets a bad plan because of inconsistent parameter distribution (i.e., some parameter values are very selective, and others aren't). But with this option, SQL Server must recompile every statement in the procedure every time, which is a lot of work.

Reusing a precompiled plan often benefits your system's performance, but not always. Test your procedures' performance with a wide variety of parameter values to determine whether using the same plan will always give optimal performance. If you determine that reusing the plan will give worse performance the next time, recreating the procedure WITH RECOMPILE can sometimes help.
