Executive Summary:

You already know how to parse trace files by using home-grown methods that involve SQL Server's common language runtime (CLR) functionality. That methodology remains powerful, but Microsoft has recently updated its RML Utilities, including the Reporter and ReadTrace executables. The utilities are well documented and easy to use.

You already know how to parse trace files by using home-grown methods that involve SQL Server's common language runtime (CLR) functionality. That methodology remains flexible and powerful, but Microsoft has recently updated a set of utilities that you can use right away—without needing to resort to the CLR. This updated functionality lets you perform several SQL Server tasks, but I want to focus on the utilities' trace-reporting aspects. (You'll find these utilities—the Reporter and ReadTrace executables—in the RML Utilities blog post at http://blogs.msdn.com/psssql/archive/2007/12/18/rml-utilities-for-microsoft-sql-server-released.aspx.) The utilities are well documented and relatively easy to use, so my goal this month is simply to make you aware of them and show how they can be a valuable addition to your arsenal.

Get Your Act Together

To use the Reporter utility, you must first use the ReadTrace utility to read and parse the SQL trace files. However, before you can parse the trace files, you need to create them! Refer to my article "Generating Server Side Traces" (see the Learning Path) if you need guidance for creating a server-side trace. These utilities also require a SQL Server database as a repository for the reports. For this purpose, I recommend creating a separate database that uses the SIMPLE recovery model. You can name the database something other than the default given in the documentation, but you'll then need to specify that name in some later commands. The database doesn't need to be very large, but its size depends on the amount and type of trace data you process.
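As a sketch, creating such a repository takes only a couple of statements. The database name here (PerfAnalysis) is my own placeholder, not a required default, and you should size the files to match your expected trace volume:

```sql
-- Hypothetical repository database for the RML Utilities reports.
-- The name and settings are placeholders; adjust for your environment.
CREATE DATABASE PerfAnalysis;
GO
-- The SIMPLE recovery model keeps the transaction log from growing
-- unchecked while ReadTrace bulk-loads the parsed trace data.
ALTER DATABASE PerfAnalysis SET RECOVERY SIMPLE;
GO
```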

Get Crunching

After you've created the database and readied the trace files, you can use the ReadTrace utility to read the trace data directly from the files. You run ReadTrace from the command line, which makes this part of the process easy to script and automate. For example,

readtrace.exe -IYourPath\YourTrace.trc -oYourLogFolder -SYourServer_Instance -dYourDBName

This command performs some parsing and normalizing, similar to the CLR-based method I used in my previous article.
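Because ReadTrace is a plain console tool, wrapping this step in a script is straightforward. Here's a rough sketch in shell form (on Windows you'd typically use a batch file or PowerShell, but the shape is the same); every path, server, and database name below is a placeholder, not a value the tool requires:

```shell
#!/bin/sh
# Placeholder values -- substitute your own paths, server, and database.
TRACE_FILE="C:/Traces/YourTrace.trc"   # first file of the trace set
LOG_DIR="C:/Traces/Logs"
SERVER="YourServer_Instance"
DB="YourDBName"

# Assemble the ReadTrace invocation. It's only echoed here so you can
# review it first; remove the 'echo' to actually run readtrace.exe.
CMD="readtrace.exe -I$TRACE_FILE -o$LOG_DIR -S$SERVER -d$DB"
echo "$CMD"
```

Scheduling a script like this right after your server-side trace closes its files gives you a hands-off nightly load of the repository.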

I want to point out a key drawback with the utility (at the time I wrote this article): In my testing, the parser treated all dynamic SQL (submitted via EXEC or sp_executesql) the same and wouldn't differentiate between two totally different SQL statements. This is one reason I never used these utilities in the past and instead created my own CLR-based tools. I reported this flaw, along with a few other bugs and suggestions, to the PSS SQL engineers responsible for the product, and they plan to fix them in a future release. Hopefully, the fixes will be available before this article sees print.

Once ReadTrace has read and parsed the trace data, it places the data into various tables in the repository database, where it's accessible for reporting. If the tables don't already exist, the tool creates them for you at this point.
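Because the parsed data sits in ordinary tables, nothing stops you from querying the repository directly in addition to using Reporter. Table and column names vary by RML Utilities release, so check what ReadTrace actually created in your database first; the sketch below assumes the ReadTrace.tblBatches and ReadTrace.tblUniqueBatches tables and is meant only to show the idea:

```sql
-- Hypothetical ad hoc query: total CPU by normalized batch text.
-- Verify the table and column names against your own repository,
-- because they differ across RML Utilities versions.
SELECT TOP 10
       ub.NormText,
       COUNT(*)   AS Executions,
       SUM(b.CPU) AS TotalCPU
FROM   ReadTrace.tblBatches AS b
JOIN   ReadTrace.tblUniqueBatches AS ub
       ON ub.HashID = b.HashID
GROUP BY ub.NormText
ORDER BY TotalCPU DESC;
```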

Reporting Gone Wild

Here's where the Reporter utility comes into play. If you started the process by running ReadTrace, it automatically invokes the Reporter GUI after populating the tables with the parsed data. However, you can also open the Reporter utility at any time by simply running reporter.exe. If you aren't using the default database name or a default instance, you'll need to point Reporter to the correct repository. In either case, you'll see a screen that looks like Figure 1.

This screen provides a summary of what occurred within those trace files, broken down by various resources (e.g., CPU, Reads, Writes). It also shows you the number of batches started and completed. Although you can’t see it in the figure, this screen also lets you view the breakdowns by time based on a preset interval. Just expand the View Details link to see this data.

Obviously, if you didn’t include certain events or columns in your trace definition, you won’t be able to report on them here. However, the tools process only those events and columns that make sense for the intended purpose. Please read the documentation to ensure that you understand these nuances before going too far into your project.

There are six other types of reports that you can access from the main report page, but I'll concentrate on the one titled Unique Batch TopN. Figure 2 shows the topmost portion of this report, which breaks down the 10 most resource-intensive query types by CPU, Duration, Reads, and Writes. In this example, I can see that 32.2 percent of my CPU was consumed by a query that ran 892 times during the reporting period. Clicking that bar in the graph brings me to the section of the report that lists each of the query types parsed earlier, along with metrics specific to each. From there, I can drill down even further to a report specific to that query type, with even more metrics. The report also provides a sample query for that type. Note that this is just a random individual sample of a real query and might not accurately represent all queries of this type in terms of performance or parameters.

Give Them a Try

This has been a brief introduction to the RML Utilities, but I'm sure you'll agree that they're valuable for trace-data reporting alone. The utilities are simple to use and require only a small investment of time to get started. So, what are you waiting for?