In the past month or two I’ve bumped into a couple of new clients with servers that have each been running for over 100 days. That’s pretty impressive in terms of overall uptime. But while it is cool to see a SQL Server running for months at a time, that ‘coolness’ is lost when everything from those 100-plus days has been logged to the same log, because pulling data up when there are potentially hundreds of thousands or even millions of rows takes forever.
For most servers or workloads, I typically prefer to keep only about a week’s worth of data per error log. In other words, I typically create a weekly job that cycles the SQL Server error log (i.e., terminates the current/existing log and spins up a brand new one in its place). This way, instead of having a single log like the one in the screenshot below that stretches over the past 100 days, you effectively end up with a new log per week:
On some servers or with some workloads, however, cycling the logs on a nightly (or even monthly) basis might make more sense. By and large there’s no single ‘best’ interval for cycling the logs, but I find that if you’re troubleshooting a problem, looking to audit access, or trying to use the logs for any other reason, having to wait while huge numbers of records are fetched can be a problem. Moreover, if you’re connecting to the server remotely (a best practice when working against production servers) and there are too many rows to fetch, you’ll commonly end up with the dreaded “SSMS is busy” dialog:
Once you’ve determined an interval at which to regularly cycle your logs, cycling them is actually quite easy. To tackle this need I just set up a SQL Server Agent job that runs (typically) weekly, and which uses the following to cycle the error log:
-- Error Log:
USE master;
GO
EXEC sp_cycle_errorlog;
GO
You can use a similar technique to cycle the SQL Server Agent log as well:
-- SQL Server Agent Error Log:
USE msdb;
GO
EXEC dbo.sp_cycle_agent_errorlog;
GO
Moreover, on servers where I’ve already got jobs set up to periodically truncate backup and job history, I commonly just add these two routines in as additional steps as part of an overall ‘maintenance’ job that trims and cycles history.
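If you’re scripting out that kind of consolidated maintenance job rather than building it in SSMS, the cycling calls can be attached as a job step via msdb.dbo.sp_add_jobstep. The sketch below assumes a job named ‘Weekly Maintenance’ already exists (that name is purely illustrative):

-- Add a log-cycling step to an existing (hypothetical) maintenance job:
USE msdb;
GO
EXEC dbo.sp_add_jobstep
    @job_name = N'Weekly Maintenance',   -- hypothetical job name; substitute your own
    @step_name = N'Cycle Error Logs',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command = N'EXEC sp_cycle_errorlog; EXEC msdb.dbo.sp_cycle_agent_errorlog;';
GO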
For security purposes it’s a best practice to keep a fairly large number of error logs on hand. Hackers, for example, love to cover their tracks if/when they’re able to breach a system, meaning that if they’re able to execute sp_cycle_errorlog they could (effectively) cycle the log enough times to hide when they gained access to your system or did something evil. By default a SQL Server instance will keep up to 6 error logs on hand, so if you’re dealing with highly sensitive information or an environment where auditing is very important you might want to push this number up, all the way to 99 (or even to unlimited if you prefer; just make sure you keep an eye on disk space). Otherwise, I typically prefer to keep about 12-25 logs on hand in most environments (where security/auditing are not critical concerns).
To specify the number of log files retained (i.e., other than the default) you can either edit the registry or just use SQL Server Management Studio to edit the registry for you. To use SSMS, just right click on the SQL Server Logs node on the instance in question, and click on Configure:
Then, you can set whatever options or choices you’d like from the ensuing dialog:
(Note too that you can also use the ‘Script’ button/dialog to grab the exact changes you’ll be making—in case you want to push similar changes out to other servers.)
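For reference, the change that SSMS scripts out for this dialog is a call to xp_instance_regwrite against the instance’s error-log registry key; something along these lines, with 12 used here only as an example retention count:

-- Retain 12 error logs (example value); this is roughly what the 'Script' button emits:
USE master;
GO
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    12;
GO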