The effect of "optimize for ad hoc workloads" – is this right?

Question: I recently read Kimberly's post on USECOUNT=1 plans, and your related article, Should I Optimize My SQL Server Instances for Ad Hoc Workloads, in the March 2011 issue of SQL Server Magazine. I turned on the "optimize for ad hoc workloads" option on one of our production servers this past Sunday and was surprised at the result: the number of single-use plans in cache went from 46K to 58K, the total amount of memory stayed the same, but the percentage of memory taken by my single-use plan cache went up.

Answer(s): This is really interesting, and there are a number of reasons for these changes. First, the easy one – the amount of memory allocated stayed the same. Yes, the amount of memory that SQL Server will use for the plan cache is based on the physical memory on your server, and the optimize for ad hoc workloads option does not affect the total amount of memory that can be allocated to the plan cache. If you're interested in learning more about how much total memory can be allocated to the plan cache, the whitepaper Plan Caching in SQL Server 2008 has a section called "Memory Allocated To Plan Caching" that describes the changes between SQL Server 2000, SQL Server 2005, and SQL Server 2008.
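To see how your own plan cache breaks down between single-use and reused plans, one approach is to query the sys.dm_exec_cached_plans DMV. This is a sketch, not the exact query from any of the referenced posts – the grouping and labels are my own:

```sql
-- Summarize cached plans by object type and whether they've been reused.
-- size_in_bytes and usecounts come from sys.dm_exec_cached_plans (2005+).
SELECT
    objtype,
    CASE WHEN usecounts = 1 THEN 'Single-use' ELSE 'Reused' END AS plan_usage,
    COUNT(*) AS plan_count,
    SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS total_size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype,
         CASE WHEN usecounts = 1 THEN 'Single-use' ELSE 'Reused' END
ORDER BY total_size_mb DESC;
```

Running this before and after enabling the option should show the shift the reader describes: more single-use entries, each much smaller.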

Second, the number of single-use plans went up from 46K to 58K. This is possible – and in fact probable – if your system executes a lot of dynamic statements. The number of plans went up because the amount of space required per statement in the plan cache is lower after turning on the optimize for ad hoc workloads option: on first execution, only a small stub is cached instead of the full plan. For example, where a full plan (query_plan_hash) might be 25KB, a query_hash might be only 500 bytes. Because only the small stub (and not the query plan) is stored for single-use statements, you can fit more of them in cache. While this explains the increased number, it doesn't solve your problem, which is that you're wasting a lot of cache on statements that never get reused. Ultimately, what you really want is to allocate less memory to single-use plans; however, SQL Server doesn't allow you to configure this directly. To help reduce the overall amount of memory allocated, you could consider clearing the cache (or part of the cache) if it exceeds a certain amount of memory (for example, 2GB). For the code required, check out my blog post titled: Plan cache, adhoc workloads and clearing the single-use plan cache bloat.
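As a rough sketch of the conditional-clearing idea (the full version is in the blog post mentioned above; the 2GB threshold here is just the example number from the text), you could check the single-use total and clear only the 'SQL Plans' cache store, which holds ad hoc and prepared plans but leaves stored procedure plans alone:

```sql
-- Sketch: clear the ad hoc/prepared plan store only when single-use
-- plans exceed a chosen threshold (2048MB is an assumed example value).
DECLARE @single_use_mb bigint;

SELECT @single_use_mb = SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype IN ('Adhoc', 'Prepared');

IF @single_use_mb > 2048
BEGIN
    -- Clears the 'SQL Plans' cache store; compiled stored procedure
    -- plans (the 'Object Plans' store) are not affected.
    DBCC FREESYSTEMCACHE ('SQL Plans');
END;
```

Something like this could be scheduled as a SQL Server Agent job so the single-use bloat is trimmed regularly rather than left to accumulate.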

Finally, the percentage. If you're looking at a percentage based on the number of plans, then yes – it's likely that the total size is the same but the number of plans is higher. However, if it's the size (in MB) that's higher, there are still some possible reasons, but it's definitely harder to explain. If the amount of memory used by single-use plans actually increased, it could be that you cleared the cache and then reviewed the numbers too soon after setting the option to get a good feel for the overall state of your cache; it could also reflect the workload that ran between changing the option and re-checking the values. Either way, the overall problem remains the same – you are wasting cache. Regardless of the percentage, you need to clean this up. Outside of clearing the cache, are there any other options? Unfortunately, the answer for now is going to be "it depends," and I'll say more about this in my next post. To give you a taste of the possible options:

  1. Changing from ad hoc statements to stored procedures
  2. Setting the database option FORCED PARAMETERIZATION

If you decide to explore these options on your own, please be very careful with FORCED PARAMETERIZATION and test it thoroughly before turning it on in production. There are cases where it can have a very positive effect, but there are also cases where it can be VERY problematic.
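For reference, this is the syntax for the second option ('YourDatabase' is a placeholder name). It is shown here only so you know what to test – as the warning above says, evaluate it carefully outside production first:

```sql
-- Option 2 sketch: force parameterization of literals for one database.
-- 'YourDatabase' is a placeholder; can help or seriously hurt depending
-- on the workload, so test thoroughly before enabling in production.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;

-- To revert to the default behavior:
ALTER DATABASE YourDatabase SET PARAMETERIZATION SIMPLE;
```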


What's SQL Server Questions Answered?

Practical tips and answers to many of your questions about SQL Server including database management and performance issues.

Contributors

Paul S. Randal

Paul Randal worked on Microsoft's SQL Server team for nine years in development and management roles, writing many of the DBCC commands. Randal was ultimately responsible for SQL Server 2008'...

Kimberly L. Tripp

Kimberly L. Tripp has been working with SQL Server since 1990, and she’s worked as a consultant, trainer, speaker, and writer specializing in core SQL Server performance tuning and availability...
