Seek and You Shall Scan Part II: Ascending Keys


Itzik Ben-Gan

September 14, 2015


In another column I covered cases in which the optimizer uses table or index scans versus ones in which it uses index seeks. I explained when the optimizer’s choices were efficient by default and when they weren’t (and provided solutions for when they weren’t). In this column I continue the discussion by covering a problem known as the ascending key problem.

I’d like to thank my friend Miloš Radivojević; some of the discoveries that I will discuss in this article are a result of interesting discussions that we had.

Note that some of the formulas that I present in this article also are a result of my attempts to reverse engineer cardinality estimates, so there’s certainly a possibility for errors. Also, I ran my tests in SQL Server 2014 RTM, and there’s a possibility that later builds will change cardinality estimation formulas.

Ascending Key Problem

Another classic case of a scan versus a seek is what’s known as the ascending key problem. To demonstrate the problem I’ll use a table called Orders2, which you create and populate with 900,000 rows by running the following code:

SET NOCOUNT ON;
USE PerformanceV3; -- http://tsql.solidq.com/books/source_code/PerformanceV3.zip

IF OBJECT_ID(N'dbo.Orders2', N'U') IS NOT NULL DROP TABLE dbo.Orders2;

SELECT * INTO dbo.Orders2 FROM dbo.Orders WHERE orderid <= 900000;

ALTER TABLE dbo.Orders2 ADD CONSTRAINT PK_Orders2 PRIMARY KEY NONCLUSTERED(orderid);

To enforce the primary key constraint, SQL Server creates a nonclustered index called PK_Orders2 with the orderid column as the key. As part of the index creation, SQL Server creates a histogram on the orderid column. Run the following code to examine the histogram:

DBCC SHOW_STATISTICS('dbo.Orders2', 'PK_Orders2') WITH HISTOGRAM;

The output of this code is shown in Table 1.

Table 1 Histogram:

RANGE_HI_KEY RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------ ----------- -------- -------------------- --------------
1            0           1        0                    1
900000       899998      1        899998               1

Currently all orderid values from the table are modeled by the histogram, with the minimum being 1 and the maximum being 900,000. If you issue queries that filter a range of orderid values, you should get accurate cardinality estimates, and in turn optimal plans. For example, consider the following two queries:

-- Query with high selectivity
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 899900
GROUP BY empid;

-- Query with low selectivity
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 700000
GROUP BY empid;

The first query filters a small number of rows and the second a large number. The plans for these queries are shown in Figure 1 and Figure 2, respectively.

Figure 1: Plan with high selectivity

Figure 2: Plan with low selectivity

Observe how in both cases the cardinality estimates are accurate. For the query with the high selectivity (small number of filtered rows) the plan:

  • Applies a seek and a few lookups

  • Is serial

  • Uses a sort followed by a stream aggregate

  • Has a sufficient memory grant for the sort activity (no spill warning)

For the query with the low selectivity (large number of filtered rows) the plan:

  • Applies a scan

  • Is parallel

  • Uses a local hash aggregate and a global stream aggregate preceded by a sort operation

  • Has a sufficient memory grant for the hash and sort operations (no spill warnings)

The ascending key problem typically arises when querying the recent range in a column that keeps getting ascending values. For example, in our Orders2 table, new order IDs keep increasing. The statistics normally get refreshed only after 500 plus 20 percent of the table’s rows have been modified in the column. In our table, the next refresh will therefore happen after 500 + 0.2 * 900,000 = 180,500 modifications. What this means is that, usually, the recently added range is not modeled by the histogram. For example, run the following code to add 100,000 rows, query the table, and examine the histogram again:

-- Add 100,000 more rows
INSERT INTO dbo.Orders2
  SELECT *
  FROM dbo.Orders
  WHERE orderid > 900000 AND orderid <= 1000000;

-- Query
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid;

-- Show statistics again (same result as shown earlier in Table 1)
DBCC SHOW_STATISTICS('dbo.Orders2', 'PK_Orders2') WITH HISTOGRAM;

You get the same histogram shown earlier in Table 1, indicating that the maximum recorded value is still 900,000. The 100,000 new values are not modeled by the histogram. What happens now when you query a range that exceeds the maximum value in the histogram is very interesting and in some cases surprising. The method that SQL Server uses to estimate the filter’s cardinality in such a case depends on a number of factors: the cardinality estimator (CE) version (pre-2014 legacy CE or new CE), the type of the column (integer/numeric with a scale of zero or other) and whether the column is unique or not.
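Incidentally, if you want to verify that the new rows aren’t reflected in the statistics yet, you can inspect the modification counter for the table’s statistics. Here’s a minimal sketch that assumes a build exposing the sys.dm_db_stats_properties function (available in recent builds, including SQL Server 2014):

-- Inspect last update time, row counts and modification counter (modctr)
-- for all statistics on dbo.Orders2
SELECT S.name AS stats_name, SP.last_updated, SP.rows, SP.modification_counter
FROM sys.stats AS S
  CROSS APPLY sys.dm_db_stats_properties(S.object_id, S.stats_id) AS SP
WHERE S.object_id = OBJECT_ID(N'dbo.Orders2');

After adding the 100,000 rows, the modification counter should report roughly 100,000 changes that the histogram doesn’t yet model.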

When the Column is Unique, and is of an Integer or Numeric Type with a Scale of Zero

Prior to SQL Server 2014, if you query a range of order IDs, the cardinality estimator relies on the values that are modeled by the histogram. This means that if the range you filter exceeds the maximum that is recorded by the histogram, you will typically get a grossly underestimated row count. This can lead to a slew of suboptimal choices by the optimizer. The following query demonstrates this problem. (Since I’m running the query in SQL Server 2014, I’m forcing the legacy CE with trace flag 9481.):

SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid
OPTION(QUERYTRACEON 9481);

The plan for this query is shown in Figure 3.

Figure 3: Plan with underestimated cardinality

The estimated number of rows based on the histogram is 1. It’s actually supposed to be 0, but the minimum that the cardinality estimator is allowed to estimate is 1. The actual number is 100,000. Consequently, the optimizer makes a number of suboptimal choices:

  • Applies a seek and many lookups instead of a scan

  • Uses a serial plan instead of a parallel one

  • Uses a sort followed by a stream aggregate instead of using a local hash aggregate followed by a global stream aggregate

  • Doesn’t have sufficient memory for the sort and therefore has two rounds of spills to tempdb

There are a number of ways to address this problem:

1. You can add a job that manually updates the statistics for this column more frequently. (A sketch of the first three options appears after this list.)

2. You can enable trace flag 2371, causing SQL Server to reduce the percentage threshold that triggers a statistics refresh as the number of rows in the table increases. See details here.

3. You can enable trace flag 2389 to cause SQL Server to identify ascending key columns and, for those, create a mini histogram step in memory to model recent changes when a plan is recompiled. See details here.

4. Use SQL Server 2014. The new cardinality estimator identifies that the query filter exceeds the maximum value in the histogram. It has access to the column modification counter (modctr, for short), so it knows how many changes took place since the last refresh. When the column is unique and of an integer type or a numeric type with a scale of zero, SQL Server assumes it’s ascending. It therefore interpolates the estimate based on the distribution of the values within the existing histogram and the number of changes that took place since the last refresh. The result is that the estimate tends to be more accurate.
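Here’s a minimal sketch of the first three options. The object and statistics names are the ones from the examples above, and the trace flags are instance-wide, so treat this as an illustration rather than a recommendation:

-- Option 1: refresh the statistics manually, e.g., from a scheduled job
UPDATE STATISTICS dbo.Orders2 PK_Orders2 WITH FULLSCAN;

-- Option 2: dynamic statistics-refresh threshold (lower percentage for large tables)
DBCC TRACEON(2371, -1);

-- Option 3: identify ascending key columns and model recent changes on recompile
DBCC TRACEON(2389, -1);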

Run the following query in SQL Server 2014 without the trace flag that forces the use of the legacy cardinality estimator:

SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid;

The plan for this query is shown in Figure 4.

Figure 4: Plan with accurate estimate

The above examples use the predicate orderid > 900000, which doesn’t intersect with the maximum value in the histogram (900000). When the predicate does intersect, the old CE takes the original estimate based on the existing histogram, and scales it based on modctr.

As an example, for the predicate orderid >= 900000 it calculates: 1 * (1 + 100000 / 900000) = 1.11111.

For the predicate orderid >= 899995 it calculates: 6 * (1 + 100000 / 900000) = 6.66666.

The new cardinality estimator uses the same interpolation method as when the predicate doesn’t intersect.

As examples, for the predicate orderid >= 900000 it accurately estimates 100001, and for the predicate orderid >= 899995 it accurately estimates 100006.
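If you want to see the two estimates side by side, run the intersecting predicate under both estimators; this is simply the earlier query with and without the trace flag that forces the legacy CE:

-- Legacy CE: estimate scaled from the histogram, 1 * (1 + 100000 / 900000) = 1.11111
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid >= 900000
GROUP BY empid
OPTION(QUERYTRACEON 9481);

-- New CE (the default in SQL Server 2014): interpolated estimate of 100001
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid >= 900000
GROUP BY empid;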

When the Column is not Unique, and is of an Integer or Numeric Type with a Scale of Zero

As I mentioned, the column’s uniqueness and type can affect the cardinality estimation method that SQL Server uses. This section describes the methods SQL Server uses when you don’t enforce uniqueness on the column and the type is integer or numeric with a scale of zero. Run the following code to re-create the sample data, this time with a nonunique index:

-- Populate the table with 900,000 rows and create a nonunique index
IF OBJECT_ID(N'dbo.Orders2', N'U') IS NOT NULL DROP TABLE dbo.Orders2;

SELECT * INTO dbo.Orders2 FROM dbo.Orders WHERE orderid <= 900000;

CREATE INDEX idx_orderid ON dbo.Orders2(orderid);
GO

-- Show statistics (same result as shown earlier in Table 1)
DBCC SHOW_STATISTICS('dbo.Orders2', 'idx_orderid') WITH HISTOGRAM;
GO

-- Add 100,000 more rows
INSERT INTO dbo.Orders2
  SELECT *
  FROM dbo.Orders
  WHERE orderid > 900000 AND orderid <= 1000000;

-- Query
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid;
GO

-- Show statistics again (same result as shown earlier in Table 1)
DBCC SHOW_STATISTICS('dbo.Orders2', 'idx_orderid') WITH HISTOGRAM;

Both calls to DBCC SHOW_STATISTICS in this code return the same result shown earlier in Table 1.

The following query uses a filter that doesn’t intersect with the maximum value in the histogram and forces the use of the old CE:

SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid
OPTION(QUERYTRACEON 9481);

As when the index was unique, you get the same gross cardinality underestimation of 1.

The new CE does something very curious that is quite different from what it does when the index is unique. It probably no longer assumes that the column is ascending. Instead, it applies to modctr a hard-coded percentage based on the operator you use. The percentages are the same as the ones used to estimate predicates with unknown inputs, but, again, applied to modctr. For example, when you use the operator >, it applies 30 percent; when you use BETWEEN, it applies 9 percent. In our case, modctr is 100000, so a query with the predicate orderid > 900000 gets the estimate 0.30 * 100000 = 30000. Here’s such a query:

SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid;

The plan for this query is shown in Figure 5, confirming that this cardinality estimation method was used.

Figure 5: Plan in 2014 for nonunique integer column with no intersection

If the column is ascending, you will very likely get an underestimation; but, still, it will be a higher estimate than what you get with the legacy CE. If the column isn’t ascending, you will likely get a more accurate estimate.
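To see the percentage for a different operator, you can try a range predicate that falls entirely beyond the histogram’s maximum. Based on the 9 percent figure mentioned above, the following sketch should get an estimate of roughly 0.09 * 100000 = 9000; this is my own extrapolation from the stated percentages, so verify the estimate in your own build:

-- Range predicate beyond the histogram maximum; with the new CE the estimate
-- should be about 9 percent of modctr (0.09 * 100000 = 9000)
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid BETWEEN 900001 AND 1000000
GROUP BY empid;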

Surprisingly, when the filter predicate does intersect with the maximum value in the histogram, both the legacy CE and the new CE take the estimate based on the existing histogram and scale it based on modctr. The reason this is surprising is that with the new CE, as you just saw, the estimate for the filter orderid > 900000 is 30000, but for orderid >= 900000 it’s 1.11111! The computation used is 1 * (1 + 100000 / 900000). Use the following query to see this for yourself:

SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid >= 900000
GROUP BY empid;

The plan for this query is shown in Figure 6.

Figure 6: Plan in 2014 for nonunique integer column with intersection

It’s one thing when the different CEs use different methods, but another when the same CE estimates a lower number for >= than for > with the same input.

When the Column is Unique, and is not of an Integer or Numeric Type with a Scale of Zero

This section covers cases where the column is unique and the type is neither integer nor numeric with a scale of zero--for example, when the type is NUMERIC(12, 2), FLOAT, DATETIME, and so on. To test this, in the script that generates the sample data, simply cast the type of the orderid column to NUMERIC(12, 2) and make sure you create a unique index. I’ll start with filter predicates that don’t intersect with the maximum value in the histogram.
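Here’s one possible way to set up this variation (my own sketch; altering the column in place after the SELECT INTO is just one approach, and any equivalent change will do):

-- Re-create the sample data with orderid as NUMERIC(12, 2) and a unique index
IF OBJECT_ID(N'dbo.Orders2', N'U') IS NOT NULL DROP TABLE dbo.Orders2;

SELECT * INTO dbo.Orders2 FROM dbo.Orders WHERE orderid <= 900000;

ALTER TABLE dbo.Orders2 ALTER COLUMN orderid NUMERIC(12, 2) NOT NULL;

CREATE UNIQUE INDEX idx_orderid ON dbo.Orders2(orderid);
GO

-- Add the 100,000 more recent rows, as before
INSERT INTO dbo.Orders2
  SELECT *
  FROM dbo.Orders
  WHERE orderid > 900000 AND orderid <= 1000000;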

The legacy CE consistently produces an estimate of 1. The new CE seems to be using the formula (reverse engineered):

(percent for operator with unknown input * modctr) / (1 + modctr / new input cardinality).

For example, for the predicate orderid > 900000 you get: (0.30 * 100000) / (1 + 100000 / 1000000) = 27272.7

This seems to be a slightly scaled down version of the formula used when the type is integer or numeric with a scale of zero, which I described in the previous section. The same predicate in such a case gets an estimate of 30000.
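Assuming you re-created the table with the NUMERIC(12, 2) column and the unique index as in the sketch above, the following query should show the 27272.7 estimate:

-- Nonintersecting predicate against the NUMERIC(12, 2) unique column;
-- expected new-CE estimate: (0.30 * 100000) / (1 + 100000 / 1000000) = 27272.7
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid > 900000
GROUP BY empid;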

When the filter predicate does intersect with the maximum value in the histogram, the legacy CE scales the histogram estimate based on modctr, just like when the type is integer or numeric with a scale of zero.

The new CE does something close but not exactly the same. When the filter predicate includes just the last high boundary point (for example, orderid >= 900000), the estimate is 1. When the predicate interval starts before the maximum value (for example, orderid >= 899995), the formula seems to be (again, reverse engineered):

(histogram estimate - 1) * (1 + modctr / old input cardinality) + 2

So, for the predicate orderid >= 899995 you get: (6 - 1) * (1 + 100000 / 900000) + 2 = 7.55555.
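Again assuming the NUMERIC(12, 2) setup with the unique index, you can check both intersecting cases with queries like these:

-- Predicate that includes just the last high boundary point; expected new-CE estimate: 1
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid >= 900000
GROUP BY empid;

-- Predicate that starts before the maximum value in the histogram;
-- expected new-CE estimate: (6 - 1) * (1 + 100000 / 900000) + 2 = 7.55555
SELECT empid, COUNT(*) AS numorders
FROM dbo.Orders2
WHERE orderid >= 899995
GROUP BY empid;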

When the Column is not Unique, and is not of an Integer or Numeric Type with a Scale of Zero

This section covers cases where the column is not unique and the type is neither integer nor numeric with a scale of zero. To test this, in the script that generates the sample data simply cast the type of the orderid column to NUMERIC(12, 2) and make sure you create a nonunique index.

As I did before, I’ll start with filter predicates that don’t intersect with the maximum value in the histogram.

The legacy CE, as in all other cases, generates an estimate of 1 row. The new CE uses the same method as it does when the column is not unique and is of an integer or numeric type with a scale of zero--namely, it applies the percentage for the operator with an unknown input to modctr. So, with modctr being 100000 and the operator being >, you get an estimate of 30000.

When the filter predicate intersects with the maximum value in the histogram, the legacy CE consistently scales the histogram estimate based on modctr.

With the new CE, when the filter predicate includes just the last high boundary point, the estimate seems to be the histogram estimate scaled based on modctr. When the predicate interval starts before the maximum value in the histogram, the estimate seems to be (histogram estimate + 1) scaled based on modctr.

For example, for the predicate orderid >= 900000 the estimate is 1.11111. For the predicate orderid >= 899995 the estimate is: (6 + 1) * (1 + 100000 / 900000) = 7.77777.

Table Summarizing Cardinality Estimation Methods

Table 2 summarizes the different methods that the different CEs use in the different circumstances--again, based on my attempts to reverse engineer the formulas.

Conclusion

I’m having a lovely pint of beer while writing this, but I suspect that I’m a bit dizzy due to the variety of methods used by the new CE in the different circumstances. Hopefully, this article will help you get a better sense of what’s going on behind the scenes, even if sometimes things don’t seem so logical. And, if all else fails, have a pint.

 
