Efficient index maintenance using database mirroring

Question: We’ve moved to using database mirroring for disaster recovery and we’ve hit a problem. We’re used to switching to the Bulk-Logged recovery model when performing index rebuilds, but we can’t do that now that the database is mirrored. The index rebuild generates far too much log traffic for the mirror, so we had to stop index maintenance – but now performance is suffering! What can we do?

Answer: This is a common problem that people encounter when moving a production database to be mirrored. Although many people conduct testing of workload performance when prototyping a mirroring environment, in my experience the majority do not include maintenance operations in their testing. Regular maintenance is a part of your business cycle and so should be included in all testing scenarios.

Related: The Curious Case of the Failed Database Mirroring Failover

As a quick bit of background, using the bulk_logged recovery model allows an index rebuild operation (using ALTER INDEX … REBUILD) to generate less transaction log, meaning the transaction log does not have to be as large as it would be if the operation were performed in the full recovery model. Database mirroring only allows the full recovery model, however, and so index rebuilds will be fully logged. The amount of 'extra' log that is generated can easily saturate the network link between the principal and mirror, and contribute to a very large SEND queue on the principal. If the SEND queue becomes large, the potential for data loss (if the principal crashes) can exceed what your data-loss service level agreement tolerates, because unsent log records are lost in a failover.
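You can watch how large the SEND queue is getting on the principal through the Database Mirroring performance counters. A minimal sketch (the database name is a placeholder for your own mirrored database):

```sql
-- Check the database mirroring log send queue on the principal.
-- A growing 'Log Send Queue KB' value means log records are waiting to be
-- sent to the mirror -- and unsent log is potential data loss on a crash.
SELECT
    instance_name AS [database],
    cntr_value    AS [log_send_queue_kb]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name = 'Log Send Queue KB'
  AND instance_name = N'YourMirroredDB';  -- placeholder database name
```

Sampling this counter during your maintenance window shows directly whether index maintenance is outpacing the network link to the mirror.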

So what can you do?

The answer is to do staggered index maintenance using ALTER INDEX … REORGANIZE. Reorganizing an index only addresses fragmentation that exists, and is interruptible without loss of already-completed work. Rebuild, on the other hand, always builds a new index no matter how much (or how little) fragmentation exists, and if you interrupt it you get nothing – everything is rolled back.
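As a sketch, the two commands look like this (the table and index names are placeholders):

```sql
-- Reorganize: only moves pages that are actually fragmented, and can be
-- interrupted at any time without rolling back completed work.
ALTER INDEX IX_MyIndex ON dbo.MyLargeTable REORGANIZE;

-- Rebuild: always constructs a complete new copy of the index, fully
-- logged under mirroring, and rolls everything back if interrupted.
-- ALTER INDEX IX_MyIndex ON dbo.MyLargeTable REBUILD;
```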

For the larger indexes you have that are not practical to rebuild, perform the following steps:

  • Day 1: Start an ALTER INDEX … REORGANIZE during your maintenance window. Let it run for an hour or so. Kill the command. It won’t roll anything back and will have made some progress through the index removing fragmentation.
  • Day 2: Start the reorganize again. It doesn’t remember where it got to on day 1, but should quickly traverse the work it did on day 1 and start removing fragmentation from the next portion of the index. Kill it again after an hour or so.
  • Repeat until the fragmentation level drops below whatever threshold you’re comfortable with, or just continue the day-by-day process indefinitely.
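To decide when the day-by-day cycle has done enough, check the fragmentation level with sys.dm_db_index_physical_stats. A minimal sketch (database, table, and index names are placeholders):

```sql
-- LIMITED mode scans only the upper levels of each index, so this
-- check is cheap even on very large tables.
SELECT
    index_id,
    avg_fragmentation_in_percent,
    page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(N'YourMirroredDB'),        -- placeholder database name
    OBJECT_ID(N'dbo.MyLargeTable'),  -- placeholder table name
    NULL, NULL, 'LIMITED')
WHERE index_id > 0;  -- skip the heap, if there is one
```

Run this before each night's reorganize and stop the cycle once avg_fragmentation_in_percent is below your chosen threshold.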

This method lets you limit the amount of transaction log generated by your regular index maintenance. If you want to get more advanced, instead of killing the reorganize after a fixed amount of time, you could monitor how much transaction log it has generated and kill it once a threshold is reached (see Script: open transactions with text and plans for details of how to do that).
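One way to do that monitoring is to watch the log usage of active transactions through the transaction DMVs. A sketch, assuming the reorganize runs in a session you can identify (the database name is a placeholder); note that REORGANIZE commits its work in many small transactions, so the per-transaction figure resets as it goes – for a cumulative view, track the log send queue counter instead:

```sql
-- How much transaction log has each session's open transaction generated?
-- Kill the reorganize session once log_bytes crosses your threshold.
SELECT
    st.session_id,
    dt.database_transaction_log_bytes_used AS log_bytes
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_database_transactions AS dt
    ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = DB_ID(N'YourMirroredDB');  -- placeholder database name
```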

Comments

on Oct 10, 2011
Thank you, Paul. Very good.
on Dec 3, 2011
We use the approach of rebuilding indexes over 30% fragmentation, otherwise reorganizing. But it isn't reindexing that's the big problem for us, it's updating statistics. We're currently updating stats at a 90% scan rate for 12 hours each weekend on our primary client database -- it's about 1.3TB and 45 million records currently, and we ran in full recovery for DR, not for mirroring.
on Oct 10, 2011
Do you suggest a threshold for REORGANIZE versus REBUILD? Thank you


What's SQL Server Questions Answered?

Practical tips and answers to many of your questions about SQL Server including database management and performance issues.

Contributors

Paul S. Randal

Paul Randal worked on Microsoft's SQL Server team for nine years in development and management roles, writing many of the DBCC commands. Randal was ultimately responsible for SQL Server 2008'...

Kimberly L. Tripp

Kimberly L. Tripp has been working with SQL Server since 1990, and she’s worked as a consultant, trainer, speaker, and writer specializing in core SQL Server performance tuning and availability...
