Answers from Microsoft

I set up a global NT group called SQLpublic; assigned users to the group; and in SQL Server, assigned the group login rights. If I use NT's User Manager to drop a user name from the group, the user can still log on. I stopped and started SQL Server, and it still didn't pick up my security change. The only way that I have found to update the security is to reboot the server. Can I make SQL Server update its security information without rebooting the server?

The problem is on the client side. Removing someone from an NT group does not take their group membership away while they are logged in to the SQL Server machine. The user must log off and then log on again to get a new NT security token. Only then will the group membership SID be removed from the user's access token. You should never have to reboot the server to pick up this info.

The data in my 70GB database and two 20GB databases on SQL Server 7.0 is mainly static and updated only once a month. Where do I find tape backup hardware for off-site storage?

White papers on the Microsoft Web site (sql/70/gen/perform.htm#backup, sql/70/whpprs/hpbenchmark.htm, and sql/70/whpprs/highperf.htm) have information about tape drives that will work for you.

The SQL Server 7.0 Upgrade wizard features an Exhaustive Data Integrity Verification routine that uses checksum values to validate data. What is the checksum value? Does SQL Server calculate it at runtime, or is the value stored with the row information on each page? How does SQL Server 6.5 calculate the checksum value?

Both SQL Server 6.5 and 7.0 calculate the checksum by adding up the byte values of every byte of data in a field, producing a single number that represents that field. SQL Server calculates this value for each column and row of an entire table to come up with a single value on a per-table basis.

During the transfer of data, if these numbers match on both sides, any corruption is highly unlikely. However, there is no guarantee because checksums, by their nature, are guaranteed only to catch single-bit errors.
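The general idea can be sketched in T-SQL. This is only an illustration of byte-summing over one value; it is not Microsoft's actual verification routine, and the sample string is hypothetical:

```sql
-- Illustrative byte-sum checksum over a single value
DECLARE @val varchar(255), @pos int, @sum int
SELECT @val = 'ABC', @pos = 1, @sum = 0
WHILE @pos <= DATALENGTH(@val)
BEGIN
    SELECT @sum = @sum + ASCII(SUBSTRING(@val, @pos, 1))
    SELECT @pos = @pos + 1
END
SELECT @sum AS checksum_value   -- 65 + 66 + 67 = 198
```

Note that 'ACB' sums to the same value, which is why a matching checksum makes corruption unlikely but cannot rule it out entirely.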

I enabled trace flags 1204 and 1205 on SQL Server 7.0. The dump that results when deadlocks occur looks quite different from the output that these trace flags produced in earlier versions of SQL Server. I can't find any documentation describing what the output means. I've figured out that 20 in the KEY value is the DBID and 1433772165 is the object ID, but what does the "1" represent? How can I tell whether this deadlock occurred on data or index and at the row level or data level?

Never turn on trace flag 1205, because it is for Microsoft's internal use only. The "1" means the deadlock involved a clustered index. But this output doesn't show a deadlock; it shows an intermediate result from trace flag 1205. Trace flag 1204 is publicly documented; it produces additional deadlock information and can be useful for application development.
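For reference, you can turn trace flag 1204 on for diagnosis and off again with DBCC TRACEON and DBCC TRACEOFF; the deadlock details are written to the SQL Server error log. Many sites instead enable it at server startup with the -T1204 startup parameter:

```sql
DBCC TRACEON (1204)
-- reproduce the deadlock here; details appear in the error log
DBCC TRACEOFF (1204)
```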

Can I define a column in a SQL Server 7.0 table based on another column? For example, can I run:

    CREATE TABLE Stuff (
        Col1 int,
        Col2 int,
        Col3 as Col1 * Col2)

Which is more efficient, explicitly generating a calculated value with code or using the column definition to get a value? Assuming that the definition of Col3 could be much more complicated and that I have some detailed and complex scientific calculation, which is the best way to proceed?

If your server isn't overwhelmed with work, the computed column is faster because you end up transmitting less data across the network. If your server is already busy and you have plenty of network bandwidth, send the data back to the client and compute the values there to reduce server overhead.
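Using the Stuff table from the question, here is a quick sketch of how the computed column behaves (the values are illustrative):

```sql
INSERT INTO Stuff (Col1, Col2) VALUES (3, 4)
-- Col3 is not stored on the page; SQL Server 7.0 computes
-- Col1 * Col2 at query time when you select it
SELECT Col1, Col2, Col3 FROM Stuff
```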

If a client is running at 85 percent utilization and you add SQL Server 7.0, will SQL Server 7.0 create more network communication and raise the 85 percent to 95 percent? Or will you see a reduction in network traffic because SQL Server 7.0 has better compression when sending data to the client?

You might see a minor reduction in network traffic, but your overall traffic level is the same as it was with SQL Server 6.5 unless you change the application.

We use cursors a lot in our applications, so some stored procedures have nested stored procedures and each uses cursors.

    EXEC proc abc   (This proc has a cursor.)
    DECLARE cursor test

The abc proc calls another proc xyz (this proc has four cursors).

    OPEN cursor test
    FETCH next FROM test
    WHILE @@fetch_status=0
    BEGIN
        FETCH next FROM test

(Here we see the following error message saying cursor is not open.)

    (SQL Server 7.0 error message)
    end
    close cursor test
    deallocate cursor test

The above stored procedure works in SQL Server 6.5. Why doesn't it work in SQL Server 7.0?

Here are a couple of possibilities. The same cursor name in multiple procedures is causing unplanned interactions (i.e., one of the procs is closing the global cursor test); make the cursors local in scope. Or a transaction commit or, especially, a rollback is closing the cursor. The latter is probably the more likely, because the code worked against SQL Server 6.5, and closing a cursor on rollback is new to SQL Server 7.0.
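A minimal sketch of the first fix, declaring the cursor LOCAL (new in SQL Server 7.0) so that same-named cursors in nested procedures don't collide. The table, column, and nested procedure names here are hypothetical:

```sql
CREATE PROCEDURE abc AS
-- LOCAL limits the cursor's scope to this procedure, so a cursor
-- named test declared inside a nested procedure is a separate object
DECLARE test CURSOR LOCAL FOR
    SELECT Col1 FROM Stuff
OPEN test
FETCH NEXT FROM test
WHILE @@fetch_status = 0
BEGIN
    EXEC xyz        -- nested proc can now safely use its own cursors
    FETCH NEXT FROM test
END
CLOSE test
DEALLOCATE test
```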