When I use different SQL Server tools to return statistics information about an event, I sometimes receive widely varying results. For example, when I run Listing 1 in Northwind with Query Analyzer's Show Server Trace option turned on, the server trace shows that the SQL:StmtCompleted event performed 185 reads. However, Figure 1 shows the event's statistics when I execute the query with STATISTICS IO enabled—1711 total reads. And if I use SQL Server Profiler to trace the query's execution, I get 1721 reads for SQL:StmtCompleted. Which result is correct, and why do the tools return different results?
I don't know which result is correct. But when you review I/O and other information from various SQL Server tools, you need to realize that each tool reports the information in a different way. The Microsoft article "INF: Differences in STATISTICS IO, SQL Profiler and Sysprocesses IO Counters" (http://support.microsoft.com/?kbid=314648) explains what the different tools measure and how they report that information. For example, Profiler's values for reads are often different from what STATISTICS IO reports. Profiler charges to the query the reads associated with parsing and compiling it, whereas STATISTICS IO doesn't. Also keep in mind that the Heisenberg Uncertainty Principle applies to performance tuning as well as to quantum physics: You can't watch something without affecting its behavior. So even the act of monitoring the queries produces varying results.
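To see the difference for yourself, you can turn on STATISTICS IO for your connection while a Profiler trace captures the SQL:StmtCompleted event, then compare the two read counts for the same statement. The query below is a hypothetical stand-in for Listing 1, which isn't reproduced here; any Northwind query works.

```sql
-- Enable per-statement I/O reporting for this connection.
SET STATISTICS IO ON
GO

-- Hypothetical stand-in for Listing 1; substitute the query you want to measure.
SELECT O.OrderID, SUM(OD.Quantity * OD.UnitPrice) AS OrderTotal
FROM dbo.Orders AS O
JOIN dbo.[Order Details] AS OD
  ON OD.OrderID = O.OrderID
GROUP BY O.OrderID
GO

-- Turn the option back off when you're done.
SET STATISTICS IO OFF
GO
```

STATISTICS IO prints per-table logical, physical, and read-ahead counts on the Messages tab, but only for the statement's own data access; Profiler's Reads column for the same statement can also include reads incurred while parsing and compiling the query, so expect the totals to differ.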
It can be frustrating when different tools report the same data differently, especially if you want your numbers to match. But in performance tuning, it rarely makes a big difference whether a query does 1672 logical reads or 1721. I find the statistics that Profiler and other sources report to be accurate enough when I'm troubleshooting performance problems. I work with these tools a lot, so I'm always interested in articles that explain why the numbers I get from them don't always match up. But I don't lose sleep over different I/O results. With the exception of discrepancies that user-defined functions (UDFs) can cause, the differences in performance metrics from the various tools have never been a significant factor when I investigate a performance-tuning problem. However, I describe one potential problem with statistics for UDF reads in "Tip: Query Analyzer Doesn't Accurately Report I/O for UDFs," InstantDoc ID 43898.