SQL Server Questions Answered

Troubleshooting Page Life Expectancy Drops

Q: What is a good value for page life expectancy, and if the value is low, does that mean I need more server memory?

A: There’s a lot of ambiguous and out-of-date information available about page life expectancy. Page life expectancy is an instantaneous measure of how long (in seconds) a data file page is expected to remain in memory in the buffer pool (also referred to as the buffer cache). Page life expectancy isn’t a rolling average; it’s a point-in-time value, so any single reading should be taken with a pinch of salt.
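You can read the current value from the performance-counter DMV. A minimal sketch (the `LIKE` match allows for the instance-specific prefix in `object_name`):

```sql
-- Current page life expectancy, in seconds, from the buffer
-- manager performance counters. On NUMA servers there are also
-- per-node values under the 'Buffer Node' object.
SELECT [object_name],
       [cntr_value] AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE [counter_name] = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

Remember that this returns a single snapshot, which on its own tells you very little; it only becomes useful when sampled over time.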

Related: Page Life Expectancy a Reliable Indicator of SQL Server Memory Pressure

The commonly cited value to aim for is 300 seconds or more. This translates to five minutes, meaning a page is expected to remain in memory for only five minutes before being pushed out to make room for new pages. Therefore, the entire contents of the buffer pool are replaced every five minutes, given the current pressure on the buffer pool to find new buffers for pages. For example, if you have a buffer pool that’s 60GB, your SQL Server system is reading 60GB of data every five minutes (roughly 200MB per second of sustained reads), which is a pretty large amount.

The value of 300 seconds comes from a Microsoft recommendation in a white paper written five years ago. It’s not a relevant guideline nowadays, and even back in 2006 I would have argued that aiming for any fixed value isn’t a valid recommendation.

There’s no set page life expectancy value that can be recommended. What you need to do is monitor the page life expectancy value trend and take action when the page life expectancy drops and stays below the typical value for your SQL Server instance. That value is going to be different for everyone.
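One way to establish the typical value for your instance is to sample the counter on a schedule and trend it. A minimal sketch, where the `dbo.PLE_Log` table name and the sampling schedule are illustrative assumptions:

```sql
-- Hypothetical logging table for trending page life expectancy.
CREATE TABLE dbo.PLE_Log
(
    capture_time DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    ple_seconds  BIGINT    NOT NULL
);
GO

-- Capture one sample. Schedule this (e.g., via a SQL Agent job
-- running every few minutes) to build the baseline trend that a
-- sustained drop can be measured against.
INSERT INTO dbo.PLE_Log (ple_seconds)
SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [counter_name] = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

With a few days of samples, the trend line makes it obvious whether a low reading is a momentary dip or a sustained drop worth investigating.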

If page life expectancy drops from your typical value and stays there, that means there’s increased pressure on the buffer pool. Does that mean you need more memory? Possibly. If your workload has simply increased and the page life expectancy has been slowly trending downward, it’s probably time to add more memory. If your workload hasn’t changed, but there’s increased buffer pool pressure, it could be that something else is using more memory (e.g., plan cache bloat). However, it’s more likely that a query plan has changed unexpectedly (maybe from out-of-date statistics) and is now doing a large table scan rather than using nonclustered indexes, thus driving more reads through the buffer pool and lowering page life expectancy.
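If you suspect a changed query plan is the culprit, a common starting point is the query-stats DMV, which shows which cached plans are driving the most reads through the buffer pool. A sketch:

```sql
-- Top queries by logical reads -- a plan that has unexpectedly
-- switched to a large table scan will usually surface near the top.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.[text], (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.[text])
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

A statement near the top with a suspiciously high average read count is a candidate for stale statistics or a missing nonclustered index, and updating statistics may be all that’s needed to restore the old plan.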

When looking at page life expectancy, be careful not to base a snap decision about the root cause of a downward trend on a single value. As with any performance analysis, trending is the key to good decision making.
