Editor's Note: Send your SQL Server questions and comments to SQL Server MVP Brian Moran at savvy@sqlmag.com.

What are the advantages of saving SQL Profiler trace information to a table instead of to a text file and vice versa?

SQL Profiler's trace information is the same regardless of whether you store it in a table or a file. However, I find saving the trace data to a file more convenient. Trace data in a table is easier to query, but trace data in a file is easier to move from one machine to another for staging. Files also scale better than tables: with files, you don't create additional workload on the SQL Server machine you're tracing. If you trace directly into a table, you generate a potentially large number of INSERT statements on the server that's capturing the trace data. If you store your trace data in a table on a server you're not monitoring, you avoid that performance problem, but you're bound by your network bandwidth.
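And if you're running SQL Server 2000, saving to a file doesn't cost you the ability to query the data: you can load the file into a table later, on whichever server you choose, with the fn_trace_gettable() function. Here's a minimal sketch (the file path and table name are placeholders):

SELECT *
INTO trace_results
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', default)

Passing default as the second parameter tells the function to read the original trace file plus any rollover files.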

When I use SQL Server Enterprise Manager to delete and update certain rows in one of my tables, I receive the error message "Key column information is insufficient or incorrect. Too many rows were affected by update." What am I doing wrong?

Let's investigate this problem by opening a new query window and running this batch

USE tempdb
SELECT * INTO tempdb..authors
FROM pubs..authors
GO

to create a test data set. From Enterprise Manager, open the new table by expanding the Tables node under tempdb, right-clicking the Authors table, then selecting Open Table, Return All Rows. Now, put your cursor in the au_lname column of the row whose au_lname value is White and change White to WhiteX. This update will work fine.
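Behind the scenes, Enterprise Manager identifies the row you edited by its column values rather than by a key. Conceptually, the statement it issues looks something like the following simplified sketch (the real statement names every column in the WHERE clause):

UPDATE tempdb..authors
SET au_lname = 'WhiteX'
WHERE au_id = '172-32-1176'
  AND au_lname = 'White'
  AND au_fname = 'Johnson'
  -- ...and so on for the remaining columns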

Next, run the following batch:

USE tempdb
DROP TABLE authors
SELECT * INTO tempdb..authors
FROM pubs..authors

INSERT INTO tempdb..authors
SELECT * FROM pubs..authors
GO

This batch drops and recreates the tempdb..authors table but creates a duplicate of each row. Now, open the Authors table in Enterprise Manager and try to update the au_lname column from White to WhiteX. Don't forget that two rows now have an au_lname of White, although Enterprise Manager probably won't display these rows sequentially. You need to update only one of the White rows; it doesn't matter which one.
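If you want to confirm the duplicates from Query Analyzer before you try the update, a quick check such as this one shows each author row that now appears more than once:

SELECT au_id, au_lname, au_fname, COUNT(*) AS copies
FROM tempdb..authors
GROUP BY au_id, au_lname, au_fname
HAVING COUNT(*) > 1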

This time, your attempt to update the row will raise the error message you received. What's the difference between the two operations? In the first update example, the row didn't have a duplicate; in the second example, it did. The error message you're getting tells me you're trying to update a row that has a duplicate. However, if your table had a primary key, you couldn't have a duplicate row and wouldn't have a problem. As a general rule, all tables should have a primary key or unique index.
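For this test table, one way to get there is to remove the duplicates and then add a key. The following sketch rebuilds the table from its distinct rows and puts a primary key on au_id (the constraint name is just one I chose):

USE tempdb

SELECT DISTINCT * INTO authors_clean
FROM authors

DROP TABLE authors
EXEC sp_rename 'authors_clean', 'authors'
GO

ALTER TABLE authors
  ADD CONSTRAINT pk_authors PRIMARY KEY (au_id)
GO

With the primary key in place, Enterprise Manager can identify exactly one row for each update, and the error goes away.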

I used SQL Profiler to save trace data to a SQL Server table and I'd like to query the data by event class, but I don't know how to determine what the EventClass column's integer values mean. How can I translate the integer values to the event class names in the SQL Profiler GUI?

With SQL Server 7.0, you can execute master..xp_trace_geteventnames to get a result set that maps EventClass's integer values to their corresponding event names. I haven't found a corresponding procedure in SQL Server 2000, but SQL Server 2000 Books Online (BOL) lists the event numbers in its documentation for sp_trace_setevent. You can use those values to map the integer-valued EventClass column to a readable event name.
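One convenient approach is to copy those numbers into a small lookup table and join it to your trace table. The sketch below shows the idea; the trace table name is a placeholder, and the handful of event numbers is only a sample from the sp_trace_setevent topic, so verify them against BOL before relying on them:

CREATE TABLE EventNames (
  EventClass int NOT NULL PRIMARY KEY,
  EventName varchar(128) NOT NULL
)

-- A few common events; see sp_trace_setevent in BOL for the full list
INSERT EventNames VALUES (10, 'RPC:Completed')
INSERT EventNames VALUES (12, 'SQL:BatchCompleted')
INSERT EventNames VALUES (14, 'Audit Login')
INSERT EventNames VALUES (15, 'Audit Logout')

SELECT e.EventName, t.TextData, t.Duration
FROM MyTraceTable t
JOIN EventNames e ON e.EventClass = t.EventClass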