Common Misconceptions about CXPACKET Waits

A number of things can contribute to CXPACKET waits within SQL Server. But when I'm tuning servers for clients who don't have (or haven't had) DBAs on hand for quite a while, and I start to see high numbers of these waits (i.e., up in the top 5 or 10 adjusted waits) coupled with a decent number of SOS_SCHEDULER_YIELD waits, I usually take this as a good sign, as it usually means they're probably missing a lot of important or needed indexes.
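For a quick read on where a server's waits are concentrated, a query along these lines against sys.dm_os_wait_stats shows the top waits by total wait time. (The excluded "benign" wait types below are an illustrative list, not an exhaustive one.)

```sql
-- Top waits by total wait time, filtering out a few common benign waits.
-- The exclusion list here is illustrative only; production scripts
-- typically filter a much longer list of benign wait types.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
    N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```

Note that these counters accumulate from the last restart (or manual clear), so they show long-term trends rather than what's happening right now.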

Common Recommendation to Ignore

Bumping into high amounts of these waits on most OLTP workloads, however, is NOT an indication that I should take away SQL Server's ability to run ugly queries (or queries suffering from missing indexes) against multiple threads of execution. In other words, high numbers of these waits aren't an indicator that I should go into sp_configure and 'cripple' server-wide execution by hobbling the 'max degree of parallelism' setting.
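For context, this is the server-wide change I'm warning against as a knee-jerk fix:

```sql
-- The blunt, server-wide change this article advises AGAINST as a
-- reflexive response to CXPACKET waits:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;  -- serializes ALL queries
RECONFIGURE;
```

Setting the value to 1 affects every query on the instance, not just the handful of unruly ones that prompted the change.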

Related: Dissecting SQL Server's Top Waits

Yet, unfortunately, this is exactly what I see as a common recommendation on a large number of internet forums and elsewhere. (In fact, a few years ago I ran into an environment where all sorts of larger queries were taking 'forever' because a support engineer at Microsoft had (amazingly) recommended setting max degree of parallelism down to 1 because some bigger, more unruly operations were monopolizing resources and blocking smaller operations.)

Don’t Throw the Baby out With the Bath Water

Stated differently: setting 'max degree of parallelism' at a low level simply to get rid of CXPACKET waits is like throwing the baby out with the bathwater. It's terribly bad form, as it can drastically restrict SQL Server's ability to parallelize operations and speed them through the system. Yet, as with the system I alluded to above, I occasionally see very expensive systems 'hobbled' such that while they may have gobs of expensive processors/cores on hand, SQL Server has effectively been prevented from using them as efficiently as possible.

And that’s not to say that there’s NEVER a reason to set 'max degree of parallelism' at the server level.

And it's also worth noting, as a bit of an aside, that if a DBA HAS set 'max degree of parallelism' down to something like 1 at the server level, developers can still BYPASS that with something like OPTION (MAXDOP 8) in their code. Because the MAXDOP hint overrides the server-level setting, it's commonly going to be a MUCH better alternative for targeting the occasional unruly, problematic, or stubborn query than taking some of the bad advice available on the internet and reducing your entire system to the lowest common denominator as a means of dealing with (typically) either some missing indexes or a couple of unruly/problematic operations.
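As a sketch, a per-query hint along these lines lets one specific query parallelize even on a capped server (the table and column names here are purely illustrative):

```sql
-- Hypothetical reporting query allowed to use up to 8 threads,
-- overriding a restrictive server-wide 'max degree of parallelism'.
-- dbo.Orders, OrderDate, and TotalDue are illustrative names only.
SELECT
    YEAR(o.OrderDate) AS OrderYear,
    SUM(o.TotalDue)   AS TotalSales
FROM dbo.Orders AS o
GROUP BY YEAR(o.OrderDate)
OPTION (MAXDOP 8);
```

The hint scopes the change to the one statement that needs it, leaving the rest of the workload governed by the server-level setting.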

Discuss this Blog Entry

on May 30, 2013

Believe it or not, I have to agree with that MS support engineer. In my last work environment (my smallest db was 1+ TB, more OLTP-like, on SQL 2005 EE), I had to set MAXDOP = 1 to solve the blocking and slow performance caused by big queries. I guess MAXDOP > 1 is probably more usable if we have many tables with partitions and in more OLAP-like environments.

on Jun 1, 2013

jxhyao, I think the current recommendation is to set 'cost threshold for parallelism' to a higher value (default is 5) to discourage parallelism on relatively simple queries. Any stubborn exceptions should have OPTION (MAXDOP) set. I still like the old rule of thumb of half the physical processors with a ceiling of 8 as a starting point.
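To illustrate the commenter's suggestion, raising the threshold might look like this (the value 25 is only an illustrative starting point, not a recommendation from the article; the right value depends on the workload):

```sql
-- Raise 'cost threshold for parallelism' so that only plans with an
-- estimated cost above the threshold qualify for parallel execution.
-- The default is 5; 25 below is an illustrative value only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;
```

This discourages parallelism on relatively cheap queries while still letting genuinely expensive operations fan out across multiple schedulers.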


What's Practical SQL Server?

Practical advice, insight, and help for core SQL Server considerations.


Michael K. Campbell

Michael K. Campbell is a contributing editor for SQL Server Pro and Dev Pro and is an ASPInsider. Michael is the president of OverAchiever Productions, a consultancy dedicated to technical evangelism...
