Editor's Note: This issue marks the debut in SQL Server Magazine of SQL Server Savvy, a collection of SQL Server tips in question-and-answer format that has run in Windows 2000 Magazine (formerly Windows NT Magazine) since 1997. Send your SQL Server questions and comments to SQL Server MVP Brian Moran at savvy@sqlmag.com.

I need to generate a query that counts the number of records, in hourly intervals, for a range of dates. The output should look something like this:

January 28, 2000 8:00 am - 9:00 am 100
January 28, 2000 9:00 am - 10:00 am 147
January 28, 2000 10:00 am - 11:00 am 246

I defined the ActivityDate field, which holds date and time information, as a datetime data type. Can I use SQL to easily generate such a query? Has someone written a stored procedure to perform this kind of date formatting?

Date formatting isn't as hard as it seems. Imagine that you have a table that contains a column called StartTime, and you want to find the number of rows that fall within a given StartTime range, down to the minute. The following query will do the job:

SELECT
CONVERT(char(16), StartTime, 20),
count(*)
FROM
sqltrace
GROUP BY
CONVERT(char(16), StartTime, 20)

This query uses a standard GROUP BY construct that counts the rows in a particular group of data. The trick is to use the CONVERT function to chop off the parts of the datetime data you don't want. You can reference SQL Server Books Online (BOL) for more information about CONVERT—especially the optional STYLE parameter, which lets you mask the date to a particular format. In this case, specifying a value of 20 for the STYLE parameter formats the date as yyyy-mm-dd hh:mi:ss, which uses a 24-hour clock. To group the rows down to the minute, you simply chop off the seconds information by converting the datetime data type to char(16).
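To get the hourly buckets the question asks for, truncate one step further and keep only the first 13 characters (yyyy-mm-dd hh). This sketch reuses the sqltrace table and StartTime column from the example above; substitute your own table and ActivityDate column:

```sql
SELECT
CONVERT(char(13), StartTime, 20),
count(*)
FROM
sqltrace
GROUP BY
CONVERT(char(13), StartTime, 20)
```

Because char(13) cuts the style-20 string off after the hour, every row that falls within the same clock hour lands in the same group.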

You can also use the CONVERT function to easily strip the time from a datetime field to get only the date:

SELECT CONVERT(datetime, CONVERT(char(10), getdate(), 101))

Table 1, page 20, shows the codes CONVERT uses.

Someone told me that SQL Server includes a command-line utility that queries a LAN to find all active SQL Server systems. I've never heard of this; have you?

SQL Server doesn't have a command that finds all active servers on the LAN, but you can use the -L option in either the isql or osql command-line utility to find the SQL Server systems running in the domain you're in. However, you can't count on the Windows NT networking calls that isql and osql use to always find all servers. The -L parameter for each of these command-line utilities won't find servers in other domains. SQL Server's tight integration with Active Directory (AD) in Windows 2000 (Win2K), however, should make this process much easier.
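For example, from a command prompt on a machine with the SQL Server client tools installed:

```
osql -L
```

The utility prints the names of the SQL Server systems it can see broadcasting in your domain.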

I know that I can enable and disable triggers, but how can I programmatically determine which triggers in my database are disabled?

Imagine that you have a trigger called TriggerTest_insert on a table called TriggerTest. The following command returns 0 if the trigger is enabled and 1 if the trigger is disabled:

SELECT OBJECTPROPERTY(object_id('TriggerTest_insert'), 'ExecIsTriggerDisabled')

SQL Server 7.0 introduced a new class of functions that return a wide range of property information that isn't readily available in SQL Server 6.5 and earlier versions. These functions fall into three general classes: TypeProperty, which returns information about a data type; ObjectProperty, which returns information about objects in the current database; and ColumnProperty, which returns information about a column or procedure parameter. Together, these classes include dozens of property values you can query for database objects. For example, the following statement gives you a quick way to see the disabled status of every trigger in a particular database:

SELECT OBJECTPROPERTY(object_id(name),
   'ExecIsTriggerDisabled'), name
FROM sysobjects WHERE type = 'tr'
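If you want only the disabled triggers, you can move the property test into the WHERE clause:

```sql
SELECT name
FROM sysobjects
WHERE type = 'tr'
   AND OBJECTPROPERTY(object_id(name), 'ExecIsTriggerDisabled') = 1
```

A result of 1 from ExecIsTriggerDisabled marks a disabled trigger, so this query filters out every trigger that's still enabled.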

I have a database (DB1) that the systems administrator (sa) owns, and I have a user (USER1) who is a member of the public and db_owner roles for DB1 and who owns several tables. If I log in as USER1, I can select * from any of his tables. But if I add USER1 to the sysadmin server role, I can't reference the tables unless I prefix the table name with the username (e.g., SELECT * FROM user1.the_table). As soon as I remove the sysadmin role, the query works again. What am I missing?

What you've discovered is typical, if puzzling, sysadmin behavior. SQL Server maps members of the sysadmin fixed server role to a special user, called dbo, inside each database, and any object that a member of the sysadmin fixed server role creates automatically belongs to dbo.

For example, if user Andrew is a member of the sysadmin fixed server role and creates table T1, T1 belongs to dbo and is qualified as dbo.T1, not as Andrew.T1. Conversely, if Andrew isn't a member of the sysadmin fixed server role but is a member of the db_owner fixed database role and creates table T1, T1 belongs to Andrew and is qualified as Andrew.T1.
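You can see the behavior with a quick sketch. Assume Andrew is a member of the sysadmin fixed server role and runs the following (the table name is made up for illustration):

```sql
-- Andrew is a sysadmin member, so the new table belongs to dbo, not Andrew
CREATE TABLE T1 (c1 int)

SELECT * FROM dbo.T1      -- succeeds: the object is dbo.T1
SELECT * FROM Andrew.T1   -- fails: no object owned by Andrew exists
```

Drop Andrew from sysadmin (leaving him in db_owner), recreate the table, and the ownership flips: Andrew.T1 works and dbo.T1 fails.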

In your case, when USER1 is a member of the sysadmin role, SQL Server translates the statement SELECT * FROM the_table to SELECT * FROM dbo.the_table. When you remove USER1 from the sysadmin role, SQL Server translates the statement as SELECT * FROM USER1.the_table.

I have a table with several columns. The value of col_1, the table's primary key, comes from information in col_2 and col_3, which don't allow NULLs. Now I want to write a trigger that will fire for insert and update events against the table. When users insert data into the table, they shouldn't have to provide data for col_1. The trigger should get the data from values in col_2 and col_3, concatenate those values, then put the resulting value in col_1. How can I create this application when SQL Server doesn't support before-event triggers?

SQL Server triggers always fire after an event, but that doesn't keep you from writing a trigger that will populate col_1 with the correct value after the database executes an insert or update statement. The insert (or update) trigger would simply reupdate the same row, setting col_1 to the correct value. To populate col_1, you can retrieve col_2's and col_3's values from the INSERTED virtual table materialized inside the trigger. However, this technique requires an extra update statement for each insert or update to the base table. This extra statement might not affect performance in low-volume transaction environments, but it could become a problem in high-volume transaction environments.
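Assuming col_2 and col_3 are character columns that together uniquely identify a row, such a trigger might look like this sketch (the_table and the column names are placeholders from the question):

```sql
-- Sketch only: table and column names are hypothetical.
-- Assumes col_2 and col_3 uniquely identify each row and are character types.
CREATE TRIGGER tr_the_table_set_col_1
ON the_table
FOR INSERT, UPDATE
AS
UPDATE t
SET col_1 = i.col_2 + i.col_3
FROM the_table t
JOIN inserted i
   ON t.col_2 = i.col_2
  AND t.col_3 = i.col_3
```

Note that by default SQL Server 7.0 doesn't fire a trigger recursively when the trigger modifies its own table unless you turn on the RECURSIVE_TRIGGERS database option, so the embedded update won't loop.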

Another option is to declare col_1 a computed column—a "virtual" column that the table doesn't physically store. You would use the other columns in the same table to compute the column's value from an expression. For example, a computed column could have the definition cost AS price * qty. Unfortunately, you can't use a computed column as a primary key.
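For example, using the cost AS price * qty definition above (the table name is hypothetical):

```sql
-- cost is computed from price and qty; SQL Server doesn't physically store it
CREATE TABLE order_items
(
   price money NOT NULL,
   qty   int   NOT NULL,
   cost  AS price * qty
)
```

Every SELECT against order_items evaluates cost on the fly, which is why a computed column can't serve as the primary key.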

Alternatively, you could simply perform the operation in your code before performing the insert or place the logic in a stored procedure. Either of these techniques would be more efficient than the trigger approach as it exists in pre-SQL Server 2000 versions.

SQL Server 2000 introduces several trigger enhancements, including INSTEAD OF triggers, which execute an action different from the usual action. You can even define INSTEAD OF triggers on views to fine-tune the types of updates a view supports. SQL Server 2000 also features AFTER triggers, which the database executes after any declarative-referential actions. Specifying AFTER is the same as specifying FOR, which is the only option available in current versions of SQL Server, but you can specify AFTER triggers only on tables.
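A SQL Server 2000 INSTEAD OF trigger might look like this sketch (the view, table, and column names are hypothetical):

```sql
-- Replaces the default insert action against the view with an insert
-- into the underlying base table
CREATE TRIGGER tr_v_customers_insert
ON v_customers
INSTEAD OF INSERT
AS
INSERT INTO customers (customer_name)
SELECT customer_name
FROM inserted
```

Because the trigger runs instead of the insert, you can route changes made through a view to whichever base tables you choose.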

I'm running the 120-day evaluation version of SQL Server 7.0 and have done a fair amount of work developing a prototype application for this SQL Server 7.0 system, but I'm about to reach the 120-day limit. How can I find out when this version expires, and what happens when I reach 120 days?

As more people install and use the SQL Server 7.0 Evaluation Edition, more users are overstaying their 120-day limit. What happens if you try to use the evaluation edition after it expires? SQL Server 7.0 simply doesn't start.

You can find out when you installed the product by looking at the created date for your SQL Server installation's root directory. Simply select mssql7 (or the name you gave the root directory), right-click, then select Properties. The created date should be the date on which you installed the software. If you reach the expiration date and SQL Server won't start, you need to uninstall the evaluation edition and install the full edition—no direct upgrade path exists. However, you can use sp_attach_db to attach the physical database files to an existing licensed server.
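For example, assuming a database named mydb whose files survived the uninstall (the paths and database name here are hypothetical):

```sql
-- Reattach the data and log files on the licensed server
EXEC sp_attach_db @dbname = N'mydb',
   @filename1 = N'C:\MSSQL7\Data\mydb.mdf',
   @filename2 = N'C:\MSSQL7\Data\mydb_log.ldf'
```

Run this on the licensed server after copying the .mdf and .ldf files over; the database comes back with its data intact.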

You can upgrade to the full edition of SQL Server 7.0 if you do so before the 120-day evaluation period expires. For information about this upgrade, see the Microsoft article "INF: How to Upgrade From SQL Server 7.0 Evaluation Edition to Retail Version" (http://support.microsoft.com/support/kb/articles/q237/3/03.asp). And for step-by-step instructions about upgrading SQL Server 7.0 Evaluation Edition to the retail version, see Neil Pike's SQL Server section of the Windows 2000 Magazine Network FAQ site at http://www.ntfaq.com/ntfaq/sql39.html#sql39.

When I run SELECT @@version against SQL Server Service Pack 1 (SP1), SQL Server SP1 reports a build number of 699. However, Help About in Query Analyzer on a SQL Server 7.0 SP1 system reports a build number of 694. What's the correct build number for SQL Server 7.0 SP1?

The correct build number for SQL Server SP1 is 699. Running SELECT @@version from Query Analyzer or Interactive SQL (ISQL) will return a build number of SQL Server 7.0.699. However, the build number for client tools is 694, even after you apply the SP1 patches. This numbering is confusing, and we hope Microsoft syncs up these numbers in future versions.