Although the options seem endless, if you start with the provided samples and wizard-generated traces, you can quickly create custom traces to investigate database issues.
Example 1: Creating a table that contains an SQL script for replay
If you examine the first sample (Sample 1: T-SQL), you'll see that it includes no filters. It's grouped on application name and event class, with Text and StartTime as data columns. If you modify the trace to write to a table within a database, and then execute sp_help against that table, you'll see output similar to the following:

Column_name   Type      Length
------------  --------  ------
TextData      ntext     16
Application   nvarchar  256
StartTime     datetime  8

The object does not have any indexes.
The large size of the TextData field (up to 1,073,741,823 characters) lets you capture all the information you'll need; the length shown in the output above (16 bytes) is the size of the pointer to that data, not the data itself. If you plan to use this table to log large amounts of information, generate summary data and index the table only after you finish writing to it.
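For example, once the trace has stopped writing, you might summarize and then index the table. This is a sketch only: TraceData is a hypothetical name for the table you directed the trace to, with the columns shown above.

    -- Summarize captured statements per application
    select Application, count(*) as Statements
    from TraceData
    group by Application
    order by Statements desc

    -- Add the index only after the trace has finished writing
    create index IX_TraceData_StartTime on TraceData (StartTime)

Deferring the index keeps the trace's inserts fast while the capture is running.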
You can also load the table into SQL Profiler to replay it: select File, Open, Trace Table, then complete the dialog box with the Server, Database, Owner, and Table. After you load a valid SQL Profiler trace table, you can send output to a file for later examination, or replay the table against a server other than the one it was originally captured on.
Example 2: Determining why a certain stored procedure is slow
Let's say that as you're testing a new version of an application, you find that one part of the system is dramatically slower than you expected. The complication is that several developers made improvements to this part of the system. Although each of them assured you that they had tested their changes in isolation with no noticeable effect on performance, the tester reported timeouts on a regular basis.
First, you need to determine which stored procedure is running slowly. The trace generated by SQL Profiler's stored procedure performance wizard is ideal for this task. Set a minimum value for Duration, similar to the filter defined for slow-running queries, to identify the slow-running stored procedure. Then run that stored procedure in SQL Query Analyzer to examine its execution plan, or run the sample trace provided for T-SQL and stored procedures. Suppose that in this example you discover that developers had inserted many additional, independently written conditional statements throughout the stored procedure, and that these caused a table update that fired several large triggers. Consolidating the SQL statements returns the procedure to its previous performance level.
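If you saved the wizard's trace to a table, a query along these lines can surface the slowest calls. This is a sketch: SlowTrace is a hypothetical table name, and Duration is the standard trace data column, reported in milliseconds in SQL Server 2000.

    select top 10 TextData, Duration, StartTime
    from SlowTrace
    where Duration > 1000     -- threshold in milliseconds; adjust to taste
    order by Duration desc

Sorting by Duration rather than filtering alone makes the worst offenders jump out even if your threshold guess is off.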
Example 3: Profiling stored-procedure performance, variation, and frequency of execution
You can create a trace that features only the SP:Completed event to generate a profile of how often each stored procedure is called within the system, and you can save this frequency information to a table. If you also designate ObjectID as a data column, analysis will be quicker than parsing the text field. Then, with simple queries, you can answer questions such as:
- "What stored procedures are never called?"
    select name from sysobjects
    where type = 'P'
      and id not in (select objectid from spinfo)
    order by name
- "What stored procedures are called the most?"

    select name from sysobjects
    join (select top 30 count(*) freq, objectid
          from spinfo
          group by objectid
          order by freq desc) as tblmax
      on tblmax.objectid = id
- "What stored procedure takes the longest, across all executions, to run?"

    select name from sysobjects
    join (select top 1 sum(duration) dur, objectid
          from spinfo
          group by objectid
          order by dur desc) as tblmax
      on tblmax.objectid = id
Avoid querying system tables such as sysobjects directly in production code, because Microsoft doesn't guarantee that their format will remain constant from release to release. The queries above are for illustration only.
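For completeness, here is one way the spinfo table the queries reference could be populated from a saved trace table. This is a sketch: TraceData is a hypothetical name for the table the trace wrote to, and 43 is the EventClass value for SP:Completed in SQL Server's trace event numbering.

    select ObjectID, Duration
    into spinfo
    from TraceData
    where EventClass = 43     -- keep only SP:Completed rows

Keeping only the ObjectID and Duration columns produces a compact summary table that the frequency and total-duration queries above can scan quickly.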