"This Is How We Have Always Done It" Does Not Apply
In a previous article I outlined how, in CTPs 2 and 3 of Microsoft SQL Server 2016, the de-facto wait stats query so widely circulated in blog posts, presentations at PASS SQLSaturdays, PASS Summit, chapter meetings, and a myriad of other events no longer provides valid results when compared against earlier versions of Microsoft SQL Server. In fact, it appears that this has been the case in SQL Server 2014 as well.
This discovery led me to seek out a second set of eyes to confirm my testing was accurate. There were no better eyes I could think of than my friend Joe Sack, Principal Software Architect for Microsoft SQL Server within the Customer Advisory Team (CAT) at Microsoft. He was able to confirm my suspicions that "something" was amiss with the standard "wait stats" query we in the SQL Server community had been using for ten years. What we were seeing were duplicate results being presented using that query in the latest versions of SQL Server as well as odd behavior with how the running percent of waits was presenting itself. (Details in those associated articles above.)
The Original Wait Stats Query
Below is the code that no longer produces accurate results across all versions of SQL Server that expose Dynamic Management Objects (those since SQL Server 2005):
WITH Waits AS
(
    SELECT wait_type,
           wait_time_ms / 1000. AS wait_time_s,
           100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
           ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN
    (
        /* Insert list of benign wait types here to filter out irrelevant waits */
    )
)
SELECT W1.wait_type,
       CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
       CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
       CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
     INNER JOIN Waits AS W2
         ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold
The calculation issues appear to stem from what happens under the covers in the self-join back to the CTE in the body of the query. When testing the population of the CTE in isolation, the code that comprises the source of the CTE behaves as expected. We therefore set about revising the query while leaving its structure and output fairly intact.
The New Universal Wait Stats Query
The new query had to take the undiagnosed issues with the self-join against the CTE out of the mix. In order to do that we broke the query down into three parts:
- Creation of a temp table to act as a store for the wait stats from the sys.dm_os_wait_stats DMV.
- A simplified CTE based upon the temp table - rather than direct population from the DMV.
- The body of the query, which is quite similar to the previous query but with a few modifications on my part to remove the irrelevancies of signal wait time (the time between when the wait resource is freed/granted (memory, locking/latching, parallel threads completing, and so forth) and when the waiting thread begins work).
The spirit of the query is to identify resource waits, and signal waits blur that information and the decisions that follow from it. Look at signal wait time separately, as a percentage of total waits, when diagnosing possible CPU pressure, but I advocate subtracting it out of this query and out of the decisions based upon it. You'll see that manifest in the omission of signal wait time from the results in both the wait columns and the percentage column. There are other flavors of the original wait stats query that isolate signal wait (but still present it as a separate column); however, I usually see signal waits still commingled into the percentage calculation.
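If you do want to examine signal waits on their own, a minimal sketch of that separate check might look like the following (this query is my own illustration and is not part of the revised query discussed in this article):

```sql
-- Signal wait time as a percentage of total wait time, instance-wide.
-- A high percentage is commonly read as a possible sign of CPU pressure.
SELECT CAST(100. * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
            AS DECIMAL(12, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats;
```

Because this is an instance-wide ratio over cumulative counters, it is subject to the same caveats about accumulated history discussed later in this article.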
Additionally, I bookend those stages with cleanup of the temp table. Also, if this were a query I'd execute frequently, or one of high cost on an instance with tempdb issues, I'd create the temp table first, along with a clustered index, and populate it with an INSERT INTO...SELECT statement from the DMV. Since that is not the case, the combined creation/population construct available via SELECT...INTO is acceptable.
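Were the explicit-create approach warranted, it might look something like the following sketch (the column types and index name here are my own assumptions for illustration; the SELECT list and wait-type filter mirror the query below):

```sql
CREATE TABLE #dm_os_wait_stats
(
    wait_type           NVARCHAR(60)   NOT NULL,
    wait_time_s         DECIMAL(18, 4) NOT NULL,
    waiting_tasks_count BIGINT         NOT NULL,
    avg_wait_ms         BIGINT         NOT NULL,
    pct                 DECIMAL(18, 4) NOT NULL,
    rn                  BIGINT         NOT NULL
);

-- Clustered on rn since the body of the query joins on rn ranges.
CREATE CLUSTERED INDEX CIX_dm_os_wait_stats_rn
    ON #dm_os_wait_stats (rn);

INSERT INTO #dm_os_wait_stats
    (wait_type, wait_time_s, waiting_tasks_count, avg_wait_ms, pct, rn)
SELECT wait_type,
       (wait_time_ms - signal_wait_time_ms) / 1000.,
       waiting_tasks_count,
       CASE waiting_tasks_count
           WHEN 0 THEN 0
           ELSE (wait_time_ms - signal_wait_time_ms) / waiting_tasks_count
       END,
       100. * (wait_time_ms - signal_wait_time_ms)
           / SUM(wait_time_ms - signal_wait_time_ms) OVER (),
       ROW_NUMBER() OVER (ORDER BY wait_time_ms DESC)
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ( /* same benign wait-type list as the main query */ 'MISCELLANEOUS' );
```

The trade-off is one more round trip of DDL in exchange for control over the table's structure before population, which matters most when tempdb is already under pressure.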
That all being said, the new query is provided below:
SET NOCOUNT ON;

IF OBJECT_ID('tempdb..#dm_os_wait_stats', 'U') IS NOT NULL
    DROP TABLE #dm_os_wait_stats;
GO

SELECT wait_type
     , (wait_time_ms - signal_wait_time_ms) / 1000. AS wait_time_s
     , waiting_tasks_count
     , CASE waiting_tasks_count
           WHEN 0 THEN 0
           ELSE (wait_time_ms - signal_wait_time_ms) / waiting_tasks_count
       END AS avg_wait_ms
     , 100. * (wait_time_ms - signal_wait_time_ms)
         / SUM(wait_time_ms - signal_wait_time_ms) OVER ( ) AS pct
     , ROW_NUMBER() OVER ( ORDER BY wait_time_ms DESC ) AS rn
INTO #dm_os_wait_stats
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN
(
    /*** filter out additional irrelevant waits ***/
    'BROKER_TASK_STOP', 'BROKER_RECEIVE_WAITFOR',
    'BROKER_TO_FLUSH', 'BROKER_TRANSMITTER',
    'CHECKPOINT_QUEUE', 'CHKPT', 'DISPATCHER_QUEUE_SEMAPHORE',
    'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
    'FT_IFTS_SCHEDULER_IDLE_WAIT', 'KSOURCE_WAKEUP',
    'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'MISCELLANEOUS',
    'ONDEMAND_TASK_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH',
    'SLEEP_TASK', 'TRACEWRITE', 'SQLTRACE_BUFFER_FLUSH',
    'XE_DISPATCHER_WAIT', 'XE_TIMER_EVENT', 'DIRTY_PAGE_POLL',
    'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', 'BROKER_EVENTHANDLER',
    'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
    'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP', 'SP_SERVER_DIAGNOSTICS_SLEEP',
    'HADR_FILESTREAM_IOMGR_IOCOMPLETION', 'HADR_WORK_QUEUE',
    'HADR_NOTIFICATION_DEQUEUE', 'HADR_LOGCAPTURE_WAIT',
    'HADR_CLUSAPI_CALL', 'HADR_TIMER_TASK', 'HADR_SYNC_COMMIT',
    'PREEMPTIVE_SP_SERVER_DIAGNOSTICS', 'PREEMPTIVE_HADR_LEASE_MECHANISM',
    'PREEMPTIVE_OS_GETFILEATTRIBUTES', 'PREEMPTIVE_OS_CREATEFILE',
    'PREEMPTIVE_OS_FILEOPS', 'PREEMPTIVE_XE_DISPATCHER', 'XE_LIVE_TARGET_TVF'
);

WITH Waits AS
(
    SELECT wait_type
         , wait_time_s
         , waiting_tasks_count
         , avg_wait_ms
         , pct
         , rn
    FROM #dm_os_wait_stats
)
SELECT W1.wait_type
     , CAST(W1.wait_time_s AS DECIMAL(12, 1)) AS wait_time_s
     , W1.waiting_tasks_count
     , CAST(W1.avg_wait_ms AS DECIMAL(12, 1)) AS avg_wait_ms
     , CAST(W1.pct AS DECIMAL(12, 1)) AS pct
     , CAST(SUM(W2.pct) AS DECIMAL(12, 1)) AS running_pct
FROM Waits AS W1
     INNER JOIN Waits AS W2
         ON W2.rn <= W1.rn
GROUP BY W1.rn
       , W1.wait_type
       , W1.waiting_tasks_count
       , W1.avg_wait_ms
       , W1.wait_time_s
       , W1.pct
HAVING SUM(W2.pct) - W1.pct < 95 /* percentage threshold */
ORDER BY W1.pct DESC;

IF OBJECT_ID('tempdb..#dm_os_wait_stats', 'U') IS NOT NULL
    DROP TABLE #dm_os_wait_stats;
GO

SET NOCOUNT OFF;
This results in the following output, which provides a listing of the most impactful resource waits since the last time the wait statistics were flushed, whether manually or by a SQL Server service restart or server reboot:
The results accumulate until they hit a threshold of 95% because, beyond that point, the results would otherwise descend into a listing of non-impactful waits, each taking up (usually) 1% or less of total wait time incurred. What I focus on most is the top two or three waits; that picture of resource constraints usually paints the performance tuning picture for me. In this case, using the results above, I'd look at issues focused around the amount of latching and locking being incurred by the SQL instance. This could be due to a number of things: high amounts of physical reads, insufficient RAM to handle the workload, underlying issues causing transactions to run longer and thereby increasing lock time, or inefficient queries or plans producing unnecessary levels of reads that might be tuned out. Wait stats are meant as a first-stop examination of where the constraints are on your SQL Server instance; they help you rule out issues as much as they point you toward where to start your tuning process.
The important thing to keep in mind is that these statistics are cumulative over time - that is, since the last time the wait stats were reset by one of the aforementioned actions. This means they're inclusive of all loads at all times, including backup and other maintenance windows, as well as periods when (perhaps) you suffered through rogue queries or odd activity that is not indicative of your normal load. It's best to query and store this information periodically, then look at the deltas, rather than consume the results as a lump sum.
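A minimal sketch of that snapshot-and-delta approach might look like the following (the dbo.WaitStatsSnapshot table and its schema are my own illustration, not something prescribed by this article):

```sql
-- One-time setup: a table to hold periodic captures of the DMV.
CREATE TABLE dbo.WaitStatsSnapshot
(
    capture_time        DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    wait_type           NVARCHAR(60) NOT NULL,
    wait_time_ms        BIGINT       NOT NULL,
    signal_wait_time_ms BIGINT       NOT NULL,
    waiting_tasks_count BIGINT       NOT NULL
);

-- Capture a snapshot (schedule this periodically, e.g. via a SQL Server Agent job).
INSERT INTO dbo.WaitStatsSnapshot
    (wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count)
SELECT wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats;

-- Delta between the two most recent snapshots, per wait type.
-- A negative delta indicates the counters were reset between captures.
WITH Ordered AS
(
    SELECT capture_time, wait_type, wait_time_ms,
           ROW_NUMBER() OVER (PARTITION BY wait_type
                              ORDER BY capture_time DESC) AS rn
    FROM dbo.WaitStatsSnapshot
)
SELECT Curr.wait_type,
       Curr.wait_time_ms - Prev.wait_time_ms AS delta_wait_time_ms
FROM Ordered AS Curr
     INNER JOIN Ordered AS Prev
         ON Prev.wait_type = Curr.wait_type
        AND Prev.rn = Curr.rn + 1
WHERE Curr.rn = 1
ORDER BY delta_wait_time_ms DESC;
```

With snapshots in place, the same percentage math from the query above can be run over the deltas for a chosen interval, giving a wait profile for that window of load rather than for the instance's entire uptime.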
Regardless of how you observe these metrics, the old ways are no longer accurate.