When Are Cursors Better?


I really enjoyed Itzik Ben-Gan's article "Sequential to Set-Based" (November 2001, InstantDoc ID 22431) about set-based versus sequential processing of result sets. I've used the information from this article as a catalyst to rewrite many of my old cursor-based stored procedures. However, I haven't been able to find a decent article yet about when to use cursors instead of a set-based approach.

Thanks for your feedback. I haven't written specifically about when to use cursors, but in several articles I've discussed the uncommon scenarios in which an iterative, cursor-based solution performs better than the comparable set-based solution. You can find examples of cursor solutions outperforming set-based solutions in my Web-exclusive T-SQL 2005 columns "Calculating Row Numbers in SQL Server 2005" (InstantDoc ID 42302) and "Ranking Functions" (InstantDoc ID 42646) and in my T-SQL Black Belt column "Calculating Concurrent Sessions" (March 2003, InstantDoc ID 37636).

Solutions that have the potential to perform better with cursors are those in which you must access the data in a certain order and accumulate values as you proceed. With a cursor, you can scan the data just once, accumulating the values as you go, whereas a set-based solution must access all rows from the first row through the current one to accumulate each row's value. So the cursor-based solution's performance degrades linearly as the table grows, while the set-based solution's performance degrades quadratically (each of the n rows rereads up to n earlier rows, or roughly n²/2 row accesses in total).
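To make the contrast concrete, here's a minimal sketch of a running-total calculation done both ways. The Sales table and its columns are illustrative, not from the original articles:

```sql
-- Set-based running total: a triangular self-join that rereads
-- all earlier rows for every row (roughly n^2/2 row accesses).
SELECT S1.EmpId, S1.SaleDate,
       SUM(S2.Amount) AS RunningTotal
FROM Sales AS S1
JOIN Sales AS S2
  ON S2.EmpId = S1.EmpId
 AND S2.SaleDate <= S1.SaleDate
GROUP BY S1.EmpId, S1.SaleDate;

-- Cursor-based running total: one ordered pass, accumulating as it goes.
DECLARE @SaleDate DATETIME, @Amount MONEY, @Total MONEY;
SET @Total = 0;

DECLARE c CURSOR FAST_FORWARD FOR
  SELECT SaleDate, Amount
  FROM Sales
  WHERE EmpId = 1
  ORDER BY SaleDate;

OPEN c;
FETCH NEXT FROM c INTO @SaleDate, @Amount;
WHILE @@FETCH_STATUS = 0
BEGIN
  SET @Total = @Total + @Amount;  -- accumulate while scanning once
  FETCH NEXT FROM c INTO @SaleDate, @Amount;
END
CLOSE c;
DEALLOCATE c;
```

The set-based query is shorter, but as the table grows, the triangular join's cost grows much faster than the single ordered scan's.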

Another scenario that calls for a cursor is when each row contains values that you use as arguments to invoke a stored procedure, and the activity the stored procedure implements can't be expressed in a user-defined function (UDF). Similarly, you need a cursor to invoke maintenance operations for which you construct the code dynamically, going over a result set one row at a time.
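A minimal sketch of that pattern, assuming a Jobs table listing database names and a stored procedure called usp_MaintainDb (both names are hypothetical):

```sql
DECLARE @DbName SYSNAME, @sql NVARCHAR(4000);

DECLARE c CURSOR FAST_FORWARD FOR
  SELECT DbName FROM Jobs;

OPEN c;
FETCH NEXT FROM c INTO @DbName;
WHILE @@FETCH_STATUS = 0
BEGIN
  -- Per-row stored procedure call; a UDF can't do this work
  -- because UDFs aren't allowed side effects.
  EXEC usp_MaintainDb @DbName;

  -- Or construct a maintenance command dynamically for each row:
  SET @sql = N'DBCC CHECKDB(' + QUOTENAME(@DbName) + N');';
  EXEC sp_executesql @sql;

  FETCH NEXT FROM c INTO @DbName;
END
CLOSE c;
DEALLOCATE c;
```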

Trace Files Compress Well


In his SQL Server Savvy Q&A "Running Profiler on a Busy Server" (December 2004, InstantDoc ID 44306), Brian Moran shared a nice idea: using the maximum file size as a way to stop a SQL Server trace. We performed some timed profiling of a heavily used SQL Server and ended up with some very large files (1.6GB for a 10-minute time slice). But after loading the trace results into a separate table, we got some interesting results. By the way, if you need to reduce file size, I found that trace files compress very well, and compressing them beats moving large files around your network.
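For reference, loading a trace file into a table can be done with SQL Server's built-in fn_trace_gettable function; the file path and target table name here are assumptions:

```sql
-- Import the trace file into a new table for querying;
-- DEFAULT tells the function to read all rollover files.
SELECT *
INTO TraceResults
FROM ::fn_trace_gettable('C:\Traces\busy.trc', DEFAULT);
```

Once the events are in a table, you can index and aggregate them like any other data.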