"First, do no harm" is the essence of the Hippocratic oath, which physicians adhere to when caring for patients. It's also good advice for DBAs. When you're trying to fix a problem, a misstep that accidentally brings down a server might also bring down your career. I use SQL Server Profiler daily to avoid such missteps. It's the central tool for my performance-tuning work, which consumes most of my time.

Most SQL Server professionals need to use Profiler regularly to monitor and improve their system's performance. However, many DBAs I work with hesitate to run Profiler on their production servers because they're afraid it will corrupt data or degrade performance. They want the insight Profiler provides, but they fear the unknown and know that their first priority is to "do no harm."

Is running Profiler on a production server dangerous? It depends. In general, Profiler won't harm your data or crash the server. Although Profiler could conceivably crash the server if an evil bug is lurking in the Profiler code path, the same is true of any part of SQL Server. Running Profiler is a calculated risk that I take every day. And my years of experience using Profiler tell me that the risk is extremely low.

When clients ask whether Profiler will slow performance, I like to explain the Heisenberg Uncertainty Principle, which applies to the movement of subatomic particles. In layman's terms, the principle says that you can't observe the behavior of a system without affecting that behavior. The same truism holds in the performance-tuning world: the act of monitoring performance with an active tool alters the performance of the system you're monitoring. The keys to keeping that effect negligible are understanding the tool you're using and confirming that its overhead stays small. For example, you probably don't want to capture trace output to your system's busiest drive if you know the server is having I/O problems.

Profiler can be intrusive and cause significant performance degradation if you trace lots of events and data columns on a busy server. I start my Profiler expeditions with a simple trace. For example, I include SQL:BatchCompleted and RPC:Completed events, which measure round-trips to the server. But I don't include SP:StmtCompleted events, which capture the completion of individual statements within a procedure, because a busy server might generate a huge number of them. Even so, a busy server can produce a large volume of round-trip events. In most cases, many of those events show a 0 value for CPU and Duration, indicating that they consumed a negligible amount of time, so you can safely ignore them. Adding a >0 filter to the CPU and Duration columns usually produces a manageable trace size. I also monitor my traces' growth rates while they're running to make sure they don't grow too large too fast. I've run Profiler on systems performing more than 3,000 batches per second with little observable performance degradation.
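A trace like this one can also be scripted as a server-side trace, which writes events straight to a file instead of streaming them to the Profiler GUI. The sketch below is a minimal example using the SQL Trace stored procedures; the event IDs (10 = RPC:Completed, 12 = SQL:BatchCompleted), column IDs (1 = TextData, 13 = Duration, 18 = CPU), and operator values come from the sp_trace_setevent and sp_trace_setfilter documentation, and the output file path is a placeholder you'd replace with a lightly used drive on your own server:

```sql
DECLARE @TraceID int, @maxfilesize bigint, @on bit, @zero bigint;
SET @maxfilesize = 50;  -- cap each rollover file at 50 MB
SET @on = 1;
SET @zero = 0;

-- Create the trace; option 2 = TRACE_FILE_ROLLOVER.
-- N'D:\traces\batch_trace' is a placeholder path (no extension; .trc is added).
EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\traces\batch_trace', @maxfilesize;

-- Capture TextData, Duration, and CPU for RPC:Completed (10)
-- and SQL:BatchCompleted (12) -- the round-trip events.
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 18, @on;
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;

-- Keep only events with Duration > 0 and CPU > 0
-- (logical operator 0 = AND, comparison operator 2 = greater than).
EXEC sp_trace_setfilter @TraceID, 13, 0, 2, @zero;  -- Duration > 0 (bigint)
EXEC sp_trace_setfilter @TraceID, 18, 0, 2, 0;      -- CPU > 0 (int)

-- Start the trace.
EXEC sp_trace_setstatus @TraceID, 1;
```

Note that on newer releases the Duration column is reported in microseconds rather than milliseconds, so a >0 filter behaves the same but any larger threshold needs adjusting. You can stop and close the trace later with sp_trace_setstatus statuses 0 and 2.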

Profiler can tell you a lot about the behavior of your applications. And like any tool, Profiler is safe in the hands of competent users. Don't be afraid to use it. You might want to experiment on a lightly loaded production server. But find a way to add this powerful tool to your performance-tuning arsenal.

I'd like to revisit this topic in a few weeks and answer your questions about running Profiler in a production environment. Email me your questions. I'll answer the most common ones in a future issue of SQL Server Magazine UPDATE and tackle the more complex ones in my SQL Server Savvy column in SQL Server Magazine.