Question: I’ve often wondered why it is that I can drop a table with billions of records in it and it completes immediately. Even if it’s just deallocating pages, that’s going to take a lot of time. Can you explain what’s going on?
Answer: The answer is the deferred-drop mechanism. To understand it, consider how extents used to be deallocated when a table was dropped or truncated. For each extent in the table, the process was:
- Acquire an eXclusive (X) lock on the extent.
- Perform a lock ‘probe’ of each of the eight pages in the extent (i.e. perform an acquire-exclusive-lock-and-immediately-release-it operation) to make sure nothing else is using them.
- If all eight page locks can be probed successfully, mark the extent as deallocated.
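The per-extent loop above can be sketched as a toy Python model. All of the names here (`Extent`, `probe_page_lock`, and so on) are invented for illustration; SQL Server’s real lock manager is of course far more involved:

```python
# Toy model of the old per-extent deallocation loop (hypothetical names).
class Extent:
    def __init__(self, pages_locked=()):
        self.pages = list(range(8))       # an extent is eight pages
        self.locked = set(pages_locked)   # pages some other session holds
        self.deallocated = False

def probe_page_lock(extent, page):
    """Acquire an exclusive page lock and release it immediately.
    In this toy model it succeeds only if no one else holds the page."""
    return page not in extent.locked

def deallocate(extents, held_x_locks):
    for ext in extents:
        # The extent X lock is held until the end of the transaction,
        # so a rollback can safely restore the allocations.
        held_x_locks.append(ext)
        if all(probe_page_lock(ext, p) for p in ext.pages):
            ext.deallocated = True
```

Note that `held_x_locks` only ever grows: one X lock per extent, all retained until the transaction ends, which is exactly the memory problem described next.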
The problem is that the extent X lock needs to be held until the end of the transaction; otherwise another thread could allocate and use the extent, making rollback of the drop or truncate very problematic. The more extents there are, the more X locks there are, and the more lock memory is used. Back in the SQL Server 2000 days, dropping or truncating a large enough table in one go could exhaust lock memory and the operation would fail.
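To get a feel for the numbers: an extent is eight 8 KB pages, i.e. 64 KB. Assuming a hypothetical ~100 bytes of lock memory per extent X lock (an order-of-magnitude figure for illustration, not an exact SQL Server number), a 1 TB table works out as follows:

```python
# Back-of-the-envelope arithmetic for extent X locks on a 1 TB table.
TABLE_BYTES = 1 * 1024**4            # 1 TB table
EXTENT_BYTES = 8 * 8 * 1024          # 8 pages x 8 KB = 64 KB per extent
LOCK_STRUCT_BYTES = 100              # assumed per-lock cost, illustrative only

extents = TABLE_BYTES // EXTENT_BYTES
lock_memory = extents * LOCK_STRUCT_BYTES
print(extents)                       # 16777216 extents
print(lock_memory / 1024**3)         # 1.5625 (GB of lock memory)
```

Holding on the order of sixteen million X locks in a single transaction makes it easy to see how lock memory could run out.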
In SQL Server 2000 SP3 we introduced the deferred drop mechanism. This works by unhooking the allocations for a table and placing them on the deferred drop queue. A background task then does the deallocation process in relatively small batches, ensuring there is no possibility of running out of lock memory and the operation failing.
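A minimal sketch of the deferred-drop idea, with hypothetical names, just to show the shape of the mechanism: the user’s transaction only unhooks the allocations and queues them, and a background task deallocates them in small batches so lock memory stays bounded:

```python
# Sketch of the deferred-drop mechanism (hypothetical names).
from collections import deque

deferred_drop_queue = deque()

def drop_table(extents):
    """Runs in the user's transaction: metadata-only, so it's instant."""
    deferred_drop_queue.append(list(extents))  # unhook the allocations

def background_task(batch_size=64):
    """Deallocates queued extents a small batch at a time."""
    freed = 0
    while deferred_drop_queue:
        extents = deferred_drop_queue.popleft()
        for i in range(0, len(extents), batch_size):
            batch = extents[i:i + batch_size]
            # Each batch is its own short transaction, so at most
            # batch_size extent X locks are ever held at once.
            freed += len(batch)
    return freed
```

The user-visible drop finishes as soon as `drop_table` returns; the actual deallocation work happens later, in the background.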
The upshot of this is that a drop/truncate of a large table now completes immediately, because the only operation being performed at the time of the drop/truncate is the manipulation of allocation metadata.
This also explains why a drop/truncate appears to be non-logged. The transaction that does the allocation metadata manipulation doesn’t generate many log records. If you wait a few minutes and look in the log again though, you’ll see thousands of log records – each deallocating an extent. That’s the deferred drop background task at work.