It's important to have a backup strategy, and it's just as important to have a plan for when that strategy isn't enough. SQL Server records a great deal of activity in the transaction log, and if your database is in the Full or Bulk-Logged recovery model, that activity needs to be backed up, or you're going to find yourself running out of space in the transaction log. It's important to have a transaction log backup schedule that keeps up with your normal activity, but there are always exceptions.

Related: Intelligent Transaction Log Backups

A great feature of SQL Server is the ability to set up alerts based on performance conditions. One of the performance conditions you can specify in an alert is the 'Databases|Percent Log Used' performance counter. The goal here is to create an alert for each database that's not in the Simple recovery model (and isn't a system database) that checks the Percent Log Used counter and, if it goes over 50, runs a job that performs a transaction log backup of that database.

Create PowerShell Script

Before we do anything else, we need to create a PowerShell script that accepts an instance name and a database name as parameters and performs a transaction log backup for that database. In my case, I'll place this script, called New-TLogBackup.ps1, in the C:\Scripts directory. The key logic in the script looks like this.

param ([string]$inst, [string]$dbname)

$svr = New-Object 'Microsoft.SqlServer.Management.Smo.Server' $inst
$bdir = $svr.Settings.BackupDirectory
$dt = Get-Date -Format yyyyMMddHHmmss
$bfil = "$bdir\$($dbname)_tlog_$($dt).trn"
Backup-SqlDatabase -Action Log -ServerInstance $inst -Database $dbname -BackupFile $bfil
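Assuming the script is saved at C:\Scripts\New-TLogBackup.ps1 (the path we'll reference later in the job step), it's worth running it by hand once before wiring it into a job. The instance and database names below are placeholders, not values from any real environment.

```powershell
# Hypothetical invocation -- SERVER\INST01 and AdventureWorks are placeholders
& 'C:\Scripts\New-TLogBackup.ps1' 'SERVER\INST01' 'AdventureWorks'
```

If the backup succeeds, a .trn file with a timestamped name will appear in the instance's default backup directory.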

We're going to loop through the databases on the server and check that each database is not a system database, is accessible, and is not in the Simple recovery model. Once we've established these conditions, we create a job dedicated to performing a transaction log backup for that database. (Before we create the job, we'll check to see if it already exists, and if so, drop it, so we can be sure it does exactly what we want it to do.)

$srv = New-Object 'Microsoft.SqlServer.Management.Smo.Server' $inst

# Cycle through the databases
foreach ($db in $srv.Databases) {
    if ($db.IsSystemObject -eq $False -and $db.IsAccessible -eq $True -and $db.RecoveryModel -ne 'Simple') {
        $dbname = $db.Name
        $job = $srv.JobServer.Jobs | Where-Object {$_.Name -eq "$($dbname)_log_dump"}
        if ($job.Count -gt 0) {
            $job.Drop()
        }

Notice that the job for the AdventureWorks database is named AdventureWorks_log_dump. (The "dump" terminology is a throwback to the early days of SQL Server, and the name is unlikely to conflict with any jobs you're currently running.) Once the existing job has been dropped, we can create a new one, capturing the job's ID so we can supply it when we create the alert for the database. Also notice that the Command property for the job step is a short PowerShell command line calling the Backup-SqlDatabase cmdlet. This step could be done in T-SQL as well, but I discussed my reasons for preferring PowerShell in a recent article, so please refer to that for more detail.

        $j = New-Object ('Microsoft.SqlServer.Management.Smo.Agent.Job') ($srv.JobServer, "$($dbname)_log_dump")
        $j.Description = "Transaction Log Dump for $($dbname)"
        $j.OwnerLoginName = 'sa'
        $j.Create()
        $jid = $j.JobID                # Save the Job ID for the Alert

        $cmd = "C:\Scripts\New-TLogBackup.ps1 $inst $dbname"
        $js = New-Object ('Microsoft.SqlServer.Management.Smo.Agent.JobStep') ($j, 'Step 01')
        $js.SubSystem = 'PowerShell'
        $js.Command = $cmd
        $js.OnSuccessAction = 'QuitWithSuccess'
        $js.OnFailAction = 'QuitWithFailure'
        $js.Create()

        $jsid = $js.ID
        $j.ApplyToTargetServer($srv.Name)
        $j.StartStepID = $jsid
        $j.Alter()

Build Counter Details

The job has now been created, so we can create the alert. This alert is going to monitor a performance counter, so we need to build the counter details. If the instance we're monitoring is a default instance, the counter details start with the literal "SQLServer," but if the instance is a named instance, they start with the SQL Server service name: an instance named "INST01" has a service name of "MSSQL$INST01". After that comes the counter Category name (joined to the service name with a colon), then the Counter name, the counter instance (here, the database name), a comparison operator, and a threshold value, separated by vertical bar characters ("|"). We're going to set our condition to test the Databases|Percent Log Used counter for each database to see if it exceeds 50.

        $srvr = $inst.Split("\")
        if ($srvr.Count -eq 1) {
            $srvnm = 'SQLServer'
        }
        else {
            $srvnm = 'MSSQL$' + $srvr[1]
        }
        $cond = "$($srvnm):Databases|Percent Log Used|$($dbname)|>|50"
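As a concrete illustration of the resulting format, here's the same logic run against made-up instance and database names (both are placeholders). Note that the subexpression syntax $($srvnm) is needed so the trailing colon isn't parsed as part of a scoped variable name.

```powershell
# Hypothetical values to illustrate the condition string format
$inst   = 'SERVER\INST01'
$dbname = 'AdventureWorks'
$srvr = $inst.Split("\")
if ($srvr.Count -eq 1) { $srvnm = 'SQLServer' }
else { $srvnm = 'MSSQL$' + $srvr[1] }
"$($srvnm):Databases|Percent Log Used|$($dbname)|>|50"
# Produces: MSSQL$INST01:Databases|Percent Log Used|AdventureWorks|>|50
```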

The alert name is the database name with a "_threshold" suffix, so we'll make sure an alert with that name doesn't already exist, and drop it if it does. Then we can create the alert, setting the PerformanceCondition property to the condition we just built and the JobID property to the ID of the job we created to perform the backup.

        $nm = "$($dbname)_threshold"

        $alt = $srv.JobServer.Alerts | Where-Object {$_.Name -eq $nm}
        if ($alt.Count -gt 0) {
            $alt.Drop()
        }
        $a = New-Object ('Microsoft.SqlServer.Management.Smo.Agent.Alert') ($srv.JobServer, $nm)
        $a.PerformanceCondition = $cond
        $a.JobID = $jid
        $a.Create()
    }
}
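After the script has run, a quick way to sanity-check the result is to list the jobs and alerts it created. This is a minimal sketch that assumes $inst still holds your instance name and that the SMO types are already loaded (for example, via the SqlServer module).

```powershell
# List the backup jobs and threshold alerts the script created
$srv = New-Object 'Microsoft.SqlServer.Management.Smo.Server' $inst
$srv.JobServer.Jobs   | Where-Object { $_.Name -like '*_log_dump' }  | Select-Object Name, OwnerLoginName
$srv.JobServer.Alerts | Where-Object { $_.Name -like '*_threshold' } | Select-Object Name, PerformanceCondition
```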

Better Protected From Unusual Transaction Activity

Once we run this script, we'll have alerts set up so that we're better protected from unusual transaction activity in our databases and can, with luck, keep our transaction log files at their ideal size.

Related: Set Up Operators and Alerts in PowerShell