Once you've got your SQL Server environment set up and working the way you want it, it's time to start gathering baseline performance data. A baseline is critical because every workload is different, and understanding the performance characteristics of your servers is essential to resolving problems when they arise. The baseline tells you what the performance counters look like during normal operations, so when a problem shows up, you can look for deviations from that baseline and have a head start on what to fix.

Related: 4 Essential PowerShell Provider Commands for SQL Server

How and When to Gather Data From Selected Counters

Once you've decided which counters you are interested in monitoring, you have to capture the counter data on a regular basis. Which counters you choose is really up to you, and there are many good articles on which counters are important, so I won't address that here.

I want to show you how to gather the data from the counters you've chosen. How often should you gather it? Well, that depends. It's important to understand that the more often you gather the counter data, the greater the impact the collection itself will have on your server. I've found that once a minute works pretty well on most of my client systems, but if those systems were processing loads of 50,000 batch requests per second, I'd slow the collection down to once every five to 10 minutes.
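As a rough sketch of that guideline, you could scale the collection interval with the observed load. The function name Get-SampleInterval and the exact thresholds are my illustrations, not a prescription:

```powershell
# Pick a collection interval (in seconds) based on recent load.
# The 50,000 batch-requests/sec threshold mirrors the rule of thumb above;
# tune both numbers for your own environment.
function Get-SampleInterval {
    param([double]$BatchRequestsPerSec)

    if ($BatchRequestsPerSec -ge 50000) {
        return 300   # heavy load: back off to every five minutes
    }
    return 60        # normal load: once a minute
}

# A collection loop would then sleep for that interval between samples:
# while ($true) {
#     Collect-Counters    # your gathering routine (hypothetical)
#     Start-Sleep -Seconds (Get-SampleInterval $brs)
# }
```

The point is simply that the sampling cadence is a knob you control, and it should get coarser as the server gets busier.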

Import-Counter cmdlet

The standard tool for gathering performance data is perfmon.exe, and at some of my client sites I use it and send the results to a binary log file (.blg). Then, in PowerShell, I can use the Import-Counter cmdlet to pull the counter data into a PowerShell object. That object is a collection of counter objects; each counter object has a Timestamp property indicating when the counter was collected and a CounterSamples collection, which holds the set of counters you've specified in the perfmon log. Let's look at the top loop.

$lc = Import-Counter $logfile
foreach ($ctr in $lc) {
    $dtm  = $ctr.Timestamp        # when this sample set was collected
    $clst = $ctr.CounterSamples   # the individual counter samples

Once we've got the CounterSamples collection we can iterate through each counter, pulling out the counter data we're interested in.

    foreach ($c in $clst) {
        switch -Wildcard ($c.Path) {
            '*% Processor Time*'     { $ppt = $c.CookedValue }
            '*Page life expectancy*' { $ple = $c.CookedValue }
            '*Batch Requests/sec*'   { $brs = $c.CookedValue }
        }
    }
}   # closes the outer foreach over $lc

Now I can associate the $dtm value with the counters returned in $ppt, $ple and $brs and insert them into a table for later analysis.
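To make that last step concrete, here is a minimal sketch of building the INSERT statement from those variables. The dbo.CounterBaseline table and its column names are assumptions for illustration, not from the article; in practice you'd send the query with Invoke-Sqlcmd or an SqlCommand:

```powershell
# Assumed variables from the loop above; hard-coded here so the
# sketch is self-contained.
$dtm = Get-Date '2024-01-15 10:30:00'
$ppt = 12.5; $ple = 3400; $brs = 187.2

# Build an INSERT for a hypothetical dbo.CounterBaseline table.
$query = @"
INSERT INTO dbo.CounterBaseline
    (SampleTime, PctProcessorTime, PageLifeExpectancy, BatchRequestsPerSec)
VALUES
    ('$($dtm.ToString('yyyy-MM-dd HH:mm:ss'))', $ppt, $ple, $brs);
"@

# Then run it against the analysis database, for example:
# Invoke-Sqlcmd -ServerInstance 'MYSERVER' -Database 'PerfBaseline' -Query $query
```

One row per collection interval like this is all a baseline table needs; the analysis queries come later.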

I use this method whenever I'm monitoring servers where I can't directly create a collection database at the client site and want to keep the data on a server that can't connect to the servers being monitored. (Some high security environments require this separation.) If I can write the counter data directly to my analysis database I use PowerShell directly, and don't use perfmon.

New-Object cmdlet

Because PowerShell works with the .NET Framework, objects that already exist within the framework are readily available. One of those objects is the System.Diagnostics.PerformanceCounter object. By instantiating a PerformanceCounter object (using the New-Object cmdlet), you can define and capture whatever performance counters you want. Here's how I get the same values I got earlier using this object.

$pptc = New-Object System.Diagnostics.PerformanceCounter
$pptc.CategoryName = 'Processor'
$pptc.CounterName = '% Processor Time'
$pptc.InstanceName = '_Total'
$plec = New-Object System.Diagnostics.PerformanceCounter
$plec.CategoryName = 'SQLServer:Buffer Manager'   # named instances use MSSQL$InstanceName:Buffer Manager
$plec.CounterName = 'Page life expectancy'
$brsc = New-Object System.Diagnostics.PerformanceCounter
$brsc.CategoryName = 'SQLServer:SQL Statistics'
$brsc.CounterName = 'Batch Requests/sec'

# Rate-based counters need two reads to compute a rate;
# the first NextValue() call returns 0, so prime them and wait.
$null = $pptc.NextValue()
$null = $brsc.NextValue()
Start-Sleep -Seconds 1

$dtm = Get-Date
$ppt = $pptc.NextValue()
$ple = $plec.NextValue()
$brs = $brsc.NextValue()

And now the same variable names have the same values as from the earlier example.

These are two good methods for getting performance counter data into a database, where you can become familiar with your servers' normal patterns and look for deviations from those patterns to spot performance problems.

Related: Create Alert-Based Log Backups with PowerShell