SQL Server 2000 Profiler is an invaluable investigative tool that lets you measure query performance, debug applications, diagnose problems, and replay saved traces. Profiler provides the client-side functionality—the GUI—that lets you define and run traces from the client against both SQL Server 2000 and SQL Server 7.0. Tracing in SQL Server 2000 and 7.0 also includes a server-side component called SQL Trace. Microsoft completely rewrote both Profiler and SQL Trace in SQL Server 2000, mainly because SQL Server 7.0 can't guarantee that it will trace all the events that you want to trace. In SQL Server 7.0, if the consumer thread that's responsible for clearing the events queue doesn't clear the queue quickly enough, the queue fills up and can't hold new events. In SQL Server 2000, when you have the server handle the trace data, a trace file writer is responsible both for collecting the events from the queue and for writing them to the trace file. Guaranteeing that SQL Server will trace all requested events is mandatory for those who need to fully audit SQL Server activity, especially security audit events such as object access and permission usage. "Trace That Event with SQL Profiler," April 2000, and "Problem-Solving with SQL Profiler," April 2000, include extensive coverage of how to use SQL Server 7.0 Profiler to track events; Kalen Delaney's "Tracking Down Event Clues," page 39, contains updated details about tracing with SQL Server 2000.
To define a new SQL Server 2000 Profiler trace, open Profiler from the SQL Server program group and click the New Trace button, which is the leftmost button in the toolbar—or you can choose File, New, Trace. The resulting Connect to SQL Server dialog box requests your server and security information for logging in to SQL Server. After you connect, you'll see a tabbed dialog box in which you can define general options, the events you want to capture, the data columns you want to receive with the traced events, and any pertinent filters. Many of the options in SQL Server 2000 Profiler are new, including the file-rollover option; server-side processing of trace data; certain events, data columns, and filtering capabilities; and more. Let's explore the option tabs one by one.
On the General tab, which Figure 1, page 34, shows, you can name the trace and change the target server for tracing. You can also choose a trace template that comes with a predefined set of events, data columns, and filters and that lets you reuse a previously saved trace definition. Some trace templates come with SQL Server, but you can also create your own templates. One template that comes with SQL Server is SQLProfilerSP_Counts, which lets you trap invocations of stored procedures.
On the General tab, you can specify a file for SQL Server to save the trace output to. Saving a trace to a file is much more efficient with SQL Server 2000 Profiler than with the Profiler in SQL Server 7.0 because the 2000 trace file uses a self-describing format. The use of a self-describing file format means that for each event, SQL Server 2000 saves only the relevant data columns, not white spaces, and some events simply point to previous events. For example, completed events such as SQL:BatchCompleted and SQL:StmtCompleted can point to their corresponding starting events—SQL:BatchStarting and SQL:StmtStarting—to save storage space. An important advantage of saving the trace output to a file is that when you reopen the file later for examination or for replay, it loads asynchronously—that is, you can start examining the file while it's still loading.
When you specify that SQL Server should save the trace output to a file, you must also set a maximum file size in the range of 1MB to 1000MB. In addition, the General tab lets you enable file rollover and specify that the server will process the trace data. If you specify a maximum file size but don't enable the file-rollover option, SQL Server stops writing events to the file after the file reaches the size you specified. Note that because newer events don't overwrite previous events, you get only the first events that fit the specified file size. However, SQL Server continues to write events to the screen. If you enable the file-rollover option, as soon as the trace file reaches the specified maximum size, SQL Server creates a new trace file.
The first trace file's name is the name you specified in the Save to file text box. SQL Server uses three elements to construct the next file's name: the file name you specified, an underscore, and a number, starting with 1 and incremented by 1 with every new file. For example, suppose you specify C:\traces\mytrace.trc as the trace filename and you leave the default maximum file size of 5MB. If you trace 17.5MB of trace data, you'll have four files: mytrace.trc, mytrace_1.trc, mytrace_2.trc, and mytrace_3.trc.
The file-rollover option gives you some important capabilities. You can start examining files while Profiler continues to send output to newer files; you don't have to wait until the trace stops before you start analyzing the trace output. Also, if your trace generates large amounts of data and your disk is nearly full, you can move the full inactive files to another disk or even to another server.
By enabling the option Server processes SQL Server trace data, you specify that the server, not the client, should handle the trace data. This means that a server-side trace file writer, which in SQL Server 2000 is based on the transaction log writer, is responsible for writing the trace data to the file. The default in Profiler is for the client to process the trace data. When you enable the option Server processes SQL Server trace data, you're guaranteed to capture all events that you requested. But this option can degrade the server's performance under stress conditions, when the server is already under a heavy load. However, if you don't enable this option—letting the client handle the trace data instead, as SQL Server 7.0 Profiler does—SQL Server might not capture all events under stress conditions.
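The server-side route can also be scripted directly with the trace system stored procedures instead of the Profiler GUI. The sketch below is illustrative only—the file path and the choice of events are assumptions—and it creates a rollover-enabled server-side trace that captures SQL:BatchCompleted events:

```sql
-- A minimal sketch of a server-side trace built with the system
-- stored procedures. The file path is an example; SQL Server appends
-- the .trc extension itself, so the path omits it.
DECLARE @traceid int, @maxsize bigint, @on bit
SET @maxsize = 5   -- maximum file size in MB
SET @on = 1
-- Option 2 = TRACE_FILE_ROLLOVER.
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\mytrace', @maxsize
EXEC sp_trace_setevent @traceid, 12, 1, @on    -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 12, @on   -- SQL:BatchCompleted, SPID
EXEC sp_trace_setevent @traceid, 12, 18, @on   -- SQL:BatchCompleted, CPU
EXEC sp_trace_setstatus @traceid, 1            -- status 1 = start the trace
```

When you're done, sp_trace_setstatus with status 0 stops the trace and status 2 closes and deletes its definition from the server.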
On the General tab, you can also send the trace output to a table and specify the maximum number of rows (in thousands) that Profiler can write to that table. Saving the trace output to a table lets you analyze the trace data by using T-SQL queries or even a Microsoft Excel pivot table or Analysis Services cube to manipulate the trace data dynamically. Note that even if you send the trace output to a file instead of a table, you can use the function fn_trace_gettable() to return the trace data in table format. The fn_trace_gettable() function accepts the trace filename and optionally a maximum number of files to process as parameters; it returns a rowset containing the trace data. For example, suppose that Profiler generated the following three files for a certain trace: C:\traces\mytrace.trc, C:\traces\mytrace_1.trc, and C:\traces\mytrace_2.trc. To copy the content of only the first file to a table, you would use fn_trace_gettable(), specifying 1 in the second parameter, as follows:
SELECT *
INTO trace_data -- destination table; the name is arbitrary
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', 1)
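Once the trace data is in a table, ordinary T-SQL does the analysis. For example, the following query—which assumes the rowset was loaded into a table named trace_data, an assumed name—finds the ten costliest batches by CPU:

```sql
-- Find the ten costliest batches in a trace loaded into a table.
-- EventClass 12 = SQL:BatchCompleted; TextData is ntext, so convert it.
SELECT TOP 10
       CPU, Duration, Reads, Writes,
       CONVERT(nvarchar(400), TextData) AS QueryText
FROM trace_data
WHERE EventClass = 12
ORDER BY CPU DESC
```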
Note that a bug in SQL Server 2000 (discussed in the Microsoft article "BUG: fn_trace_gettable Function Cannot Read Rollover Files Generated by SQL Profiler" at http://support.microsoft.com/support/kb/articles/q273/9/72.asp) prevents you from using the fn_trace_gettable() function to load several rolled-over Profiler files into the same table all at once. As a workaround, you can use system stored procedures instead of SQL Server 2000 Profiler to create the trace, or you can load each trace file separately to the destination table, as the script in Listing 1 shows.
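The core of that workaround is simple enough to sketch. Assuming the three rollover files above and an arbitrary destination table name (all_trace_data is an assumption), you load the first file with SELECT INTO and append the rest:

```sql
-- Load each rollover file separately into one destination table,
-- working around the fn_trace_gettable rollover bug.
SELECT * INTO all_trace_data
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', 1)

INSERT INTO all_trace_data
SELECT * FROM ::fn_trace_gettable('C:\traces\mytrace_1.trc', 1)

INSERT INTO all_trace_data
SELECT * FROM ::fn_trace_gettable('C:\traces\mytrace_2.trc', 1)
```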
The last option available on the General tab is Enable trace stop time, which lets you specify a date and time for the trace to stop capturing events. Note that if you use SQL Server 2000 Profiler to trace a SQL Server 7.0 machine, the options Server processes SQL Server trace data and Enable trace stop time are unavailable.
On the Events tab, which Figure 2 shows, you can choose which events you want to include in your trace. SQL Server 2000 Profiler introduced many new events that aren't available with SQL Server 7.0 Profiler and also changed the categorization of some events. Although the list of available events on the Events tab is the same whether you define a trace against SQL Server 2000 or SQL Server 7.0, the events that you can capture are dependent on what the server-side components generate. Later in this section, I'll elaborate on some differences in captured events depending on which release of SQL Server you define your trace against.
Among the new events in Profiler, you can find the Data File Auto Grow, Data File Auto Shrink, Log File Auto Grow, and Log File Auto Shrink events in the new Database category. In the Performance category, which is also new, you can find the Execution Plan event and the Show Plan Text and Show Plan All events. Also in the Performance category, you can find four Degree of Parallelism events, one for each Data Manipulation Language (DML) statement (SELECT, INSERT, UPDATE, DELETE). Note that the four Degree of Parallelism events have different meanings only if you're tracing a SQL Server 7.0 server. If you're tracing a SQL Server 2000 server, any of the four Degree of Parallelism events is sufficient to capture all types of DML statements, so you can include just one of the four. The statement type appears in the EventSubClass data column. (I cover data columns in the next section.)
The Security Audit category is also new. It includes 21 database audit events that let you track failed or successful login attempts; the addition and removal of logins, database users, and database roles; the use of statement and object permissions; backup and restore operations; the issuing of DBCC commands; and more. In addition, this category gives you 10 user-configurable events, whereas SQL Server 7.0 Profiler gives only 5.
Starting and Completed events, which you can find in the Stored Procedures and TSQL categories, are two of the most commonly traced events. In general, Starting events are important when you want to track the existence and chronological order of certain activities. For example, if you want to analyze the types of queries issued against your server during various periods of the day, you need to capture the text of those queries and the time they were issued. For this purpose, capturing a Starting event is sufficient.
Completed events are important when you want to get performance measures of activities such as CPU time and logical disk reads, because performance measures are known only after activities have finished. For example, you would use Completed events to locate slow-running queries. In some situations, you'd want to capture both Starting and Completed events. For example, suppose you're troubleshooting a certain client application that experiences a lot of timeouts, such as lock request time out period exceeded errors. You need to pinpoint the queries that start but never finish. If you capture only Completed events, you won't get the queries that experienced timeouts because those queries never completed. However, you could look for Starting events that don't have corresponding Completed events to find the queries that never finished.
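Over a saved trace, you can approximate that comparison with a query. The sketch below assumes the trace was loaded into a table named trace_data (an assumed name), and matching on SPID and StartTime alone is deliberately rough; it lists batches that have a Starting event with no later Completed event on the same connection:

```sql
-- EventClass 13 = SQL:BatchStarting, 12 = SQL:BatchCompleted.
-- Starting events with no later Completed event on the same SPID
-- are candidates for queries that timed out.
SELECT s.SPID, s.StartTime,
       CONVERT(nvarchar(400), s.TextData) AS BatchText
FROM trace_data AS s
WHERE s.EventClass = 13
  AND NOT EXISTS (SELECT *
                  FROM trace_data AS c
                  WHERE c.EventClass = 12
                    AND c.SPID = s.SPID
                    AND c.StartTime >= s.StartTime)
```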
You can divide Starting and Completed events by their level of detail; we call these events the less granular (not so detailed) and more granular (very detailed) events. The less granular events include RPC:Starting, RPC:Completed, SP:Starting, SP:Completed, SQL:BatchStarting, and SQL:BatchCompleted. SQL Server generates these less granular events when remote procedure calls, stored procedures, and batches start and finish. SQL Server generates the more granular events—SP:StmtStarting, SP:StmtCompleted, SQL:StmtStarting, SQL:StmtCompleted—for individual statements inside stored procedures and batches. The behavior of the more granular Starting and Completed events is different depending on whether you're tracing a SQL Server 2000 machine or a SQL Server 7.0 machine, even if you're using SQL Server 2000 Profiler to trace both versions.
In SQL Server 2000, regardless of whether you choose to trace individual statements under the SQL category or the SP category, SQL Server generates only one kind of statement event—SQL. You can't request to trace only individual stored procedure statements or only individual batch statements. However, if you're tracing a SQL Server 7.0 machine and you include the SP:StmtStarting and SP:StmtCompleted events, you'll get the requested events only for each individual statement inside a stored procedure. If you include the SQL:StmtStarting and SQL:StmtCompleted events, you'll get the requested events only for each individual statement inside a batch.
When you trace a SQL Server 2000 machine, whether you include only the SP:StmtStarting and SP:StmtCompleted events, only the SQL:StmtStarting and SQL:StmtCompleted events, or all of these events, you'll get the SQL:StmtStarting and SQL:StmtCompleted events for each statement inside a batch and also for each statement inside a stored procedure. To demonstrate this point, I used the script that Listing 2 shows to create the tables t1, t2, t3, and t4 and the stored procedures proc1 and proc2 in both SQL Server 2000 and SQL Server 7.0. Then, I used SQL Server 2000 Profiler to trace the following batch against both SQL Server releases:
I ran four traces with different combinations of events against each release; Table 1, page 36, shows the results. The first trace (SP + SQL) includes all possible Starting and Completed events. The second trace (SP) includes all Starting and Completed events except SQL:StmtStarting and SQL:StmtCompleted. The third trace (SQL) includes all Starting and Completed events except SP:StmtStarting and SP:StmtCompleted. The fourth trace (blank) includes only the less granular Starting and Completed events. In Table 1, I check-marked the less granular events with red, the events for individual batch statements with blue, and the events for individual stored procedure statements with black. Note that when tracing a SQL Server 7.0 machine, you can decide which of the more granular events to include in the trace, whereas you can trace only all of the more granular events or none of them when you're tracing a SQL Server 2000 machine.
I found this all-or-nothing change in behavior between SQL Server 7.0 and SQL Server 2000 surprising. If you've done tracing against SQL Server 7.0, you need to be aware of this significant change in behavior before you start tracing against SQL Server 2000. You also need to change your perspective on tracing individual batch and stored procedure statements. When tracing a SQL Server 2000 machine, treat the SP:StmtStarting and SP:StmtCompleted events as redundant, and be aware that when you trace the SQL:StmtStarting and SQL:StmtCompleted events, you actually get both stored procedure and batch statements.
Data columns are attributes of events. On the Data Columns tab, which Figure 3 shows, SQL Server 2000 lets you choose from 43 data columns, as opposed to the 25 data columns available in SQL Server 7.0 Profiler. Data columns give you information about the events in your trace. SQL Server Books Online (BOL) contains details about all of Profiler's data columns and the relationship between data columns and events; not every data column is relevant for every event. Two data columns—EventClass, which shows the type of event, and SPID, which contains the server process ID—are mandatory. The other 41 data columns are optional.
Some of the new data columns—such as Success, which indicates a success (1) or failure (0) of a certain permission check—are relevant to the new Security Audit events. Other new data columns, such as DatabaseName and ObjectName, give you more descriptive details about the traced events. DatabaseName and ObjectName don't exist in SQL Server 7.0 Profiler, but if SQL Server 7.0 Profiler can connect to the SQL Server machine that the trace data was captured from, it tries to populate the Database ID and Object ID with the database and object names, respectively. SQL Server 2000 Profiler has separate data columns for object and database IDs and for object and database names.
Data columns generally fall into three groups: performance, informative, and event-specific. Performance data columns include CPU, Duration, Reads, and Writes. Informative data columns give you information such as the start or end time of the event, database ID, login name, and database username. Event-specific data columns include EventSubClass, IntegerData, and BinaryData. I call these data columns event-specific because they provide different information for different events—and no information at all for some events. For example, for the Audit Add Login to Server Role event, the EventSubClass data column shows 1 for Add and 2 for Drop. For the Audit Backup/Restore event, the EventSubClass data column shows 1 for Backup and 2 for Restore. For the Data File Auto Grow event, the IntegerData data column provides the number of pages by which the file increased, but for the Lock:Deadlock event, it provides the deadlock number (starting with 0 for the first deadlock since the server was started). For the Lock:Acquired event, the BinaryData data column contains the resource type, but for the Existing Connection event, it contains a bitmap that stores information about the session-level settings.
In short, make sure you consult BOL when you want to know what kind of information you can get from the event-specific data columns for each event. You can find this information on the Contents tab under Administering SQL Server, Monitoring Server Performance and Activity, Monitoring with SQL Server Profiler, Monitoring with SQL Server Profiler Event Categories. Each event category has a section that describes the events in that category and the data columns relevant to each event.
On the Data Columns tab, you can also determine which columns you want to group the results by when you see them on the screen. You just place the desired data columns under the Groups section. If you're used to grouping events with SQL Server 7.0 Profiler, you need to be aware of some changes in SQL Server 2000 Profiler. In SQL Server 7.0 Profiler, each group gets a header that includes the values that the group represents and the number of events belonging to it. For example, if you group the results by the DatabaseID column, you'll get a different header for each database ID, with the database ID itself and the number of events that belong to that database ID. You can expand the group by clicking the plus sign that appears to the left of the group's header to see the individual events, or you can collapse the group by clicking the minus sign to hide the individual events. SQL Server 2000 Profiler sorts events by the data columns you specify under the Groups section, rather than grouping them. So, you don't get a separate caption for each group. For example, I traced activity against the Pubs sample database and requested the results be grouped by CPU. Figure 4 shows the results of that trace. Notice that all events are sorted by the CPU column, which appears first.
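If you miss the 7.0-style grouped view with its per-group counts, you can reproduce it with T-SQL over a trace saved to a table (trace_data is an assumed table name):

```sql
-- Count events per database, like the group headers that
-- SQL Server 7.0 Profiler displays when you group by DatabaseID.
SELECT DatabaseID, COUNT(*) AS EventCount
FROM trace_data
GROUP BY DatabaseID
ORDER BY DatabaseID
```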
Make sure you're selective in choosing both events and data columns to include in your trace. Including only events and data columns that you really need will reduce the amount of data that SQL Server will collect and transfer to Profiler.
Filters let you limit the events that a trace collects. On the Filters tab, which Figure 5 shows, you can place filters on 34 of the 43 data columns that SQL Server 2000 Profiler offers. The data columns you can't filter are BinaryData, EventClass, EventSubClass, IntegerData, LoginSid, NestLevel, ServerName, TargetLoginSid, and TransactionID. SQL Server 2000 considerably enhances Profiler's trace-filtering capabilities by supplying powerful new event criteria options. Each filter lets you specify one or more combinations of the following options: Like, Not like, Equals, Not equal to, Greater than or equal, and Less than or equal. However, not all event criteria options are available with each filter. In general, textual data columns such as ApplicationName, DatabaseName, and HostName let you specify the Like and Not like options. Numeric data columns such as DatabaseID, CPU, and SPID let you specify the Equals, Not equal to, Greater than or equal, and Less than or equal options. Temporal data columns, which include StartTime and EndTime, let you specify only the Greater than or equal and Less than or equal options.
SQL Server 2000 Profiler also introduced the new ability to specify multiple values in one filter. You can specify multiple values with the Like, Not like, Equals, and Not equal to event criteria options. For example, to capture events for SPIDs 52 and 54, expand the SPID filter, expand the Equals criteria option, specify 52 in the empty text box, press the Enter or semicolon (;) key, and specify 54 in the second text box. In SQL Server 7.0 Profiler, you can specify only one value for such a filter.
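The same multi-value filter can be expressed with sp_trace_setfilter when you build the trace with the system stored procedures. This sketch assumes @traceid was returned by an earlier call to sp_trace_create:

```sql
-- Column 12 = SPID; logical operator 0 = AND, 1 = OR;
-- comparison operator 0 = equal. The filter values must match the
-- column's data type, so they're declared as int.
DECLARE @traceid int  -- assumed to hold the ID from sp_trace_create
DECLARE @spid1 int, @spid2 int
SET @spid1 = 52
SET @spid2 = 54
EXEC sp_trace_setfilter @traceid, 12, 0, 0, @spid1
EXEC sp_trace_setfilter @traceid, 12, 1, 0, @spid2
```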
The check box Exclude system IDs lets you exclude events that reference system objects. By selecting this check box, you add the filter ObjectID >= 100. Because most system objects have an object ID that is less than 100, you eliminate most—but not all—references to system objects. This filter can considerably reduce the number of traced events because SQL Server itself performs a lot of activity against system objects.
To efficiently examine trace output, you don't necessarily need to pre-filter—place filters when you define a trace. By specifying the filters after you reopen the trace output file in Profiler, you can post-filter events that you've already collected. Pre-filtering is very important in SQL Server 7.0: Without adequate pre-filters, the trace output could become huge, causing SQL Server to miss some events. The consumer, which is the component responsible for collecting events from the queue, can't always clear the events fast enough to free space for new events when the queue reaches its threshold. The problem with pre-filtering is that you might accidentally filter out events that are vital for your examination. For example, if you're trying to find the cause of an increasing number of deadlocks but you don't know which SPIDs are involved in the deadlocks and which application issues the code that causes the deadlocks, pre-filtering on certain process IDs or certain applications might exclude the events that you need to examine.
In SQL Server 2000, pre-filtering is still important for reducing the size of the trace output and for reducing the load on the server, but you can be more lenient with filters. The trace architecture is much more efficient and stable. So as a rule, filter out what you're certain you don't need, but if you're not sure, don't specify a filter. By reopening the trace file and using post-filters, you can examine the trace output more dynamically.
SQL Server 2000 Profiler's templates let you recycle previously saved trace definitions when you create new traces. They replace the Profiler Trace Wizard, which is available with SQL Server 7.0's Enterprise Manager, and the Trace Definitions, which are available with SQL Server 7.0 Profiler. A template includes a certain combination of events, data columns, and filters. SQL Server provides nine predefined templates. For example, you can use the SQLProfilerTSQL_Replay template, which includes the events and data columns necessary for replaying traces, to keep from having to memorize and specify the exact list of items required for replay.
You can also create your own templates. To create a new template, select New, Trace Template from the File menu. You'll get a tabbed dialog box in which you can choose your events, data columns, and filters. When you're done, click the Save As button on the General tab to save your template. Alternatively, you can select File, Save As, Trace Template to create a template based on a trace that you create. This trace can be active or stopped—or even a reopened trace file or table. Before your template will appear in the list of available template names, you have to save it to \Program Files\Microsoft SQL Server\80\Tools\Templates\SQL Server Profiler. To use a template when creating a new trace, from the General tab, simply choose the template from the list of available template names or point to the template file.
SQL Server 2000 Profiler lets you trace SQL Server's activity much more efficiently than you can with SQL Server 7.0 Profiler. The new Profiler also gives you more information and investigative capabilities than its predecessor. You can examine the activities that applications are performing against SQL Server behind the scenes, locate slow-running queries, determine the cause of deadlocks, and more. By investing the time to master this tool, you can fully exploit its capabilities to fine-tune your SQL Server system. (For more tuning information, see the sidebar "Spotlight on the Index Tuning Wizard.")