Troubleshooting Page Life Expectancy Drops

Q: What is a good value for page life expectancy, and if the value is low, does that mean I need more server memory?

A: There’s a lot of ambiguous and out-of-date information available about page life expectancy. Page life expectancy is an instantaneous measure of how long (in seconds) a data file page is expected to remain in memory in the buffer pool (also referred to as the buffer cache). Because it’s an instantaneous value rather than a rolling average, any single reading should be taken with a pinch of salt.

Related: Page Life Expectancy a Reliable Indicator of SQL Server Memory Pressure

The commonly cited value to aim for is 300 seconds or more. That’s five minutes, meaning that, on average, a page read into the buffer pool will remain in memory for only five minutes before being pushed out to make room for other pages. Put another way, the entire contents of the buffer pool are replaced every five minutes, given the current pressure on the buffer pool to find buffers for new pages. For example, if you have a 60GB buffer pool, your SQL Server system is reading 60GB of data every five minutes (roughly 205MB per second), which is a substantial amount.

The 300-second value comes from a Microsoft recommendation in a white paper written back in 2006, five years ago. It’s not a relevant guideline nowadays, and even back then I would have argued that aiming for a set value isn’t a valid recommendation.

There’s no set page life expectancy value that can be recommended. What you need to do is monitor the page life expectancy value trend and take action when the page life expectancy drops and stays below the typical value for your SQL Server instance. That value is going to be different for everyone.
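If you want to check the current value on your own instance, one place to read it is the documented sys.dm_os_performance_counters DMV; the following query is a sketch of that approach (the column aliases are mine):

```sql
-- Read the current Page Life Expectancy value from
-- sys.dm_os_performance_counters. Filtering on the Buffer Manager
-- object excludes the per-NUMA-node Buffer Node counters; the LIKE
-- also matches named instances, whose object names carry a
-- different prefix.
SELECT [object_name],
       [cntr_value] AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE [counter_name] = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';
```

Capturing this value on a schedule (for example, into a table from a SQL Agent job, or via a Performance Monitor counter log) gives you the trend to compare against your instance’s typical baseline.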

If page life expectancy drops from your typical value and stays there, that means there’s increased pressure on the buffer pool. Does that mean you need more memory? Possibly. If your workload has simply increased and the page life expectancy has been slowly trending downward, it’s probably time to add more memory. If your workload hasn’t changed, but there’s increased buffer pool pressure, it could be that something else is using more memory (e.g., plan cache bloat). However, it’s more likely that a query plan has changed unexpectedly (maybe from out-of-date statistics) and is now doing a large table scan rather than using nonclustered indexes, thus driving more reads through the buffer pool and lowering page life expectancy.

When looking at page life expectancy, be careful not to take a single value and make a snap decision about the root cause of a downward trend. As with any performance analysis, trending is the key to good decision making.

Discuss this Blog Entry (4 comments)

on Oct 18, 2011
Pretty outstanding explanation.... as usual, you guys make all the technical stuff seem pretty easy to understand! Kudos to both of you!
on Jun 28, 2013

Is there somewhere - perhaps a DMV - which would let me show the average PLE, or the trend over time?

on Jun 28, 2013

Great question -- we'll try to find out!

on Nov 9, 2014

To watch the PLE over time, you can set up a perfmon counter log and select PLE as one of the counters to capture. Brent Ozar has an excellent column on this; I use this as one of my major monitoring tools.
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/


What's SQL Server Questions Answered?

Practical tips and answers to many of your questions about SQL Server including database management and performance issues.

Contributors

Paul S. Randal

Paul Randal worked on Microsoft's SQL Server team for nine years in development and management roles, writing many of the DBCC commands. Randal was ultimately responsible for SQL Server 2008'...

Kimberly L. Tripp

Kimberly L. Tripp has been working with SQL Server since 1990, and she’s worked as a consultant, trainer, speaker, and writer specializing in core SQL Server performance tuning and availability...
