Editor's Note: Send your SQL Server questions and comments to SQL Server MVP Brian Moran at savvy@sqlmag.com.

I want to analyze the number of logical reads that SQL Server performs for certain queries. Although I can use SQL Profiler to easily capture and display this information, I prefer to save the trace data to a table so that I can run my own analysis queries against it. However, SQL Server stores the integer value for the number of reads in a column called READS with a data type of image(16). This value doesn't seem to be a regular hexadecimal number because I can't use CONVERT to translate the varbinary number to an integer. What am I doing wrong?

SQL Profiler is a powerful tool for analyzing your SQL Server system's activity, and analyzing the number of logical reads performed is an excellent idea. If you save a trace file to SQL Server, it stores the value in the READS column as an image data type. The value is in an Intel-specific binary format, so the process to translate this value to an integer isn't obvious.

Table 1 shows sample READS column values from a SQL Trace file that is stored in SQL Server, as well as the values' integer representations. SQL Server stores READS column values in little-endian format, which places the least significant byte first. Thus, when you translate the value to an integer data type, treat the first byte of the image data type value as the last byte and vice versa. For example, to translate the image data type value in the first row to the correct integer value, you must reverse the bytes of 0xD302000000000000 to produce the hexadecimal value 0x2D3, then convert that value to an integer (723). Reversing the bytes in the second row produces the hexadecimal value 0x715, which becomes 1813 in decimal.

Listing 1 shows a query that uses the CONVERT and SUBSTRING functions to reorder the READS column bytes so that you can display the values as integers. (The READS column can store an 8-byte number. You probably won't need to store a number that large, but the query converts the values to bigint rather than int just to be safe.)
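As a rough illustration of the technique that a query like Listing 1 uses (this is a sketch, not the listing itself), the following statement extracts the eight bytes one at a time with SUBSTRING, concatenates them in reversed order, and converts the result; the trace table name TraceTable is an assumption:

```sql
-- Sketch of the byte-reversal approach; the table name TraceTable is
-- hypothetical. SUBSTRING pulls one byte of the image value at a time,
-- + concatenates the binary fragments in reversed (big-endian) order,
-- and CONVERT then interprets the 8 bytes as a bigint.
SELECT CONVERT(bigint,
         SUBSTRING(READS, 8, 1) + SUBSTRING(READS, 7, 1) +
         SUBSTRING(READS, 6, 1) + SUBSTRING(READS, 5, 1) +
         SUBSTRING(READS, 4, 1) + SUBSTRING(READS, 3, 1) +
         SUBSTRING(READS, 2, 1) + SUBSTRING(READS, 1, 1)) AS ReadsInt
FROM TraceTable
```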

I set up a program that runs as a Windows NT service with MSSQLServer as a dependency so that the system starts MSSQLServer before starting my service. The first action my service performs is to query SQL Server. However, the query always fails, and the system provides the following error message:

Cannot open database requested in login 'PDB'. Login fails., SQLErrorInfo: 42000, Source: Microsoft OLE DB Provider for SQL Server.

Two seconds after this error message appears, the event logs display the following entry:

17052 : Recovery complete.

However, if I start my service when SQL Server is already running, the query works. How can I set up my service to wait to query SQL Server until SQL Server is ready to accept queries?

You've already done the hard work: You've figured out how to make your service dependent on the SQL Server service so that your application won't start until SQL Server is running. However, one of the first actions that SQL Server performs after it starts is to bring the user databases online. The event-log message you're seeing (17052: Recovery complete) tells you that SQL Server has completed automatic recovery for the database in question, which means that the database is now available for query access. The trick is configuring your service-based application to recognize when your user databases are available. To do so, you can use the SQL Server 2000 DATABASEPROPERTYEX function. SQL Server 7.0 includes a similar function called DATABASEPROPERTY, which SQL Server 2000 retains for backward compatibility. Although these functions are similar, their calling parameters are slightly different.

The following SQL Server 2000 query will return ONLINE if the Northwind database is online and available for use:

SELECT DATABASEPROPERTYEX( 'northwind' , 'status' )

Table 2 lists all possible values that the function's Status option returns. You do need to be careful, however, when you use this function; I've noticed that if you specify a nonexistent database name, it returns NULL instead of returning an error that tells you that the database doesn't exist. Figure 1 lists the other options available for use with the DATABASEPROPERTYEX function. If you're using SQL Server 7.0, you can run the following query to determine whether the database is in recovery:

SELECT DATABASEPROPERTY ( 'northwind' , 'IsInRecovery' )
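Putting these pieces together, your service could open its first connection against master and poll until the target database reports ONLINE before issuing its real queries. The following is only a sketch: the database name PDB comes from your error message, and the two-second delay is an arbitrary choice.

```sql
-- Run against master; loop until the PDB database is online.
-- SQL Server 2000 syntax (on 7.0, test DATABASEPROPERTY('PDB',
-- 'IsInRecovery') instead). If PDB doesn't exist, DATABASEPROPERTYEX
-- returns NULL and the loop exits immediately, so verify the name first.
WHILE CONVERT(varchar(20), DATABASEPROPERTYEX('PDB', 'Status')) <> 'ONLINE'
BEGIN
    WAITFOR DELAY '00:00:02'  -- check again in two seconds
END
```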

SQL Server 7.0 lets you load data by using Data Transformation Services (DTS), bulk copy program (bcp), or the new BULK INSERT statement. What's the fastest way to load a large amount of data?

As you noted, SQL Server 7.0 offers several high-speed mechanisms for loading data. Bcp is a high-speed file-import utility that SQL Server has supported since the early days of the database management system. Bcp lets you quickly load large files and is often a good choice, but it's not user friendly.

In SQL Server 7.0, Microsoft extended SQL Server's data-import capabilities with DTS and the new T-SQL command BULK INSERT. DTS offers a tremendous amount of data-handling flexibility, but BULK INSERT can be twice as fast as either bcp or DTS when used in comparable circumstances.

Why is BULK INSERT so much faster? BULK INSERT is a T-SQL command, so it runs in-process with the SQL Server engine. Thus, SQL Server doesn't need to pass the data through the normal client-API network-abstraction layer, called a Network Library (NetLib). Bypassing the NetLib layer saves a huge amount of time.
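For reference, a minimal BULK INSERT call looks like the following sketch; the table name and file path are hypothetical, and the terminators assume a tab-delimited text file:

```sql
-- Minimal BULK INSERT sketch; Northwind..SalesStaging and the file path
-- are hypothetical examples. The TABLOCK hint helps the load qualify
-- for SQL Server's fast, minimally logged bulk-load path.
BULK INSERT Northwind..SalesStaging
FROM 'C:\load\sales.txt'
WITH (FIELDTERMINATOR = '\t',
      ROWTERMINATOR   = '\n',
      TABLOCK)
```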

In addition, SQL Server 7.0 supports a custom task add-on that lets you write a BULK INSERT task directly from a DTS package. (Microsoft also integrated this feature into SQL Server 2000.) If you're looking for the best combination of speed and programmatic workflow control, BULK INSERT from DTS might be the answer.

You can start a column name with a number (e.g., 1year). However, to use SQL Server 7.0 to access the column, you must put square brackets around the column name (e.g., SELECT [1year] FROM tablename). Is the best practice not to start column names with numbers?

Using column and table names that you must bracket so that SQL Server can recognize them is a bad idea. The method works, but it forces developers to type extra keystrokes to reference objects, and some third-party tools won't understand the syntax. This quandary is especially common for people upgrading from Microsoft Access or other databases that permit less restrictive naming conventions than SQL Server does. Although the less restrictive naming convention is tempting, you'll regret the choice.
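To see the cost in practice, consider a hypothetical table with such a column; every reference, not just the CREATE statement, needs the brackets:

```sql
-- Hypothetical example: 1year is legal only as a delimited identifier,
-- so every query that touches the column must bracket it.
CREATE TABLE Sales ([1year] int)

SELECT [1year] FROM Sales    -- works
-- SELECT 1year FROM Sales   -- fails with a syntax error
```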