
Editor's Note: Share your SQL Server discoveries, comments, problems, solutions, and experiences with products and reach out to other SQL Server Magazine readers. Email your contributions (400 words or less) to r2r@sqlmag.com. Please include your phone number. We edit submissions for style, grammar, and length. If we print your submission, you'll get $50.

Ordering by SQL Views


SQL Server usually doesn't allow an ORDER BY clause in views—a situation that both astounds and outrages me. However, you can use this workaround: Just add a TOP 100 PERCENT clause to the front of the SELECT list.

Here's an example of how to use TOP 100 PERCENT to order data in views. The Northwind database contains a view erroneously called Alphabetical list of products. If you open this view, you'll see that it doesn't show the products in alphabetical order. To put the products in alphabetical order, you modify the SQL code as follows:

SELECT TOP 100 PERCENT Products.*,
   Categories.CategoryName AS CategoryName
FROM Categories INNER JOIN
   Products ON
   Categories.CategoryID = Products.CategoryID
WHERE (Products.Discontinued = 0)
ORDER BY ProductName

If you edit the view in Microsoft Access, it will complain when you try to save the query (ORDER BY not allowed in this type of query). Ignore the warning, and save the view. When you open the view, you'll see the results in order of ProductName.
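For reference, here's a sketch of what the complete view definition looks like with the workaround in place. (The bracketed view name is Northwind's; you'd need to drop or alter the existing view before re-creating it.)

```sql
-- Sketch: full Northwind view definition with the TOP 100 PERCENT workaround.
-- Drop (or ALTER) the existing view first.
CREATE VIEW [Alphabetical list of products]
AS
SELECT TOP 100 PERCENT Products.*,
   Categories.CategoryName AS CategoryName
FROM Categories INNER JOIN
   Products ON
   Categories.CategoryID = Products.CategoryID
WHERE (Products.Discontinued = 0)
ORDER BY ProductName
```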

I researched this workaround's possible overhead by using Query Analyzer to compare the run times and execution plans of two SELECT statements: one with the TOP clause and one without. I used the SQL Server version of FoodMart, which has more data than Pubs or Northwind, to select all 10,000-plus rows from the customer table in order of lname, fname. On my system, after cache preloading, each statement executed in 28 seconds. The graphical query plan showed the TOP operation as almost 0 percent of the statement's total processing cost. I examined another multitable SELECT, and again, the TOP clause imposed virtually no penalty.
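In sketch form, the two statements I compared were simply the ordered SELECT with and without the TOP clause (FoodMart's customer table, with the lname and fname columns mentioned above):

```sql
-- Statement 1: with the workaround's TOP clause
SELECT TOP 100 PERCENT * FROM customer ORDER BY lname, fname

-- Statement 2: without it
SELECT * FROM customer ORDER BY lname, fname
```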


Storing SQL Server 7.0 Server Registrations on a Centralized System


Do you manage multiple SQL Server machines and find yourself accessing these servers from more than one location? SQL Server Enterprise Manager's Store user independent option lets you set up a server registration on one server, then share that registration with Enterprise Manager on other servers. Whenever you need to administer your servers from a new location, you can point to the centralized server's registration profile.

To set up a central store for server registration information, first click the server name in Enterprise Manager on the remote server. Click Tools, Options, then select the General tab. Clear the Store user independent check box. You can now use Enterprise Manager to create server groups and to register servers.

To set up a local server to access this central registration information, click the server name in Enterprise Manager. Click Tools, Options, select the General tab, then select Read from remote. In the Server name box, enter the name of the remote server that contains the registration information.

Logging Third-Party Applications' Transactions


Most DBAs in small to midsized companies are also responsible for maintaining third-party applications, such as accounting and human resources (HR) packages. These applications are usually database-oriented client/server applications, but most people maintaining them have a hard time tracking when the application executes a transaction, which tables in the database are affected, how they're affected (update, delete, or insert), and in what sequence. Unfortunately, the applications' manuals usually don't cover these details, so you must buy support services from the vendor.

But with the power of triggers, you can track exactly what happens when a third-party application executes a transaction. My company has an application that uses SQL Server 6.5 with more than 300 tables in two databases: A1 and A2. Here's how we set up a system that uses triggers to track database changes. (Note: To be more secure, we do all processes in the testing environment, in which we establish the same databases as those in our production environment.)

  1. Generate the scripts that create all tables within these two databases.
    From Enterprise Manager, expand A1 and A2, then from Objects/Tables, right-click
    any table and choose Generate SQL scripts. From the resulting pop-up window,
    choose to generate scripts for all tables. When you're finished, you'll have two scripts.

  2. Create two more databases—A1_BK and A2_BK—and run the appropriate script that you generated in Step 1 against each database to create duplicates
    of databases A1 and A2.

  3. Create another database, called DBG_DB, and create a table named dbg_info
    in this database by using the following command:
CREATE TABLE dbg_info
(  db_name  varchar(30),
   tbl_name varchar(50),
   act_flag char(1),
   act_time datetime DEFAULT getdate()
)

This dbg_info table records the action and the table involved in every transaction that changes data. Db_name is the database affected by a transaction, tbl_name is the table affected, act_flag is the action that the transaction performed (U for update, D for delete, or I for insert), and act_time is the time the system performed the transaction.

  4. Create triggers for every table in A1 and A2. Then, whenever the application changes a table in A1 or A2, the table's trigger logs the action in DBG_DB..dbg_info and writes the changed rows to the corresponding table in A1_BK or A2_BK. If A1 and A2 have many tables, creating a trigger for each table can take a while, so I created a script to automatically create triggers for all tables, as Listing 1 shows. (Note that this script is for SQL Server 6.5; you must modify it to work with SQL Server 2000 or SQL Server 7.0.) This script creates a stored procedure in a database, such as A1 or A2; you can then use the procedure to create the triggers by executing the following statement:
    USE A1 (or A2)
    GO
    EXEC crt_dbg_trg
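To give an idea of what the generated triggers do, here's a hedged sketch of the kind of trigger crt_dbg_trg might create for one table. (The table name orders and the trigger name are invented for illustration; Listing 1 contains the actual generator.)

```sql
-- Sketch of a generated logging trigger for a hypothetical table named
-- orders in database A1. One trigger handles all three actions.
CREATE TRIGGER trg_dbg_orders ON orders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
   IF EXISTS (SELECT * FROM inserted)
   BEGIN
      -- Rows in both inserted and deleted mean an update;
      -- rows in inserted only mean an insert.
      IF EXISTS (SELECT * FROM deleted)
         INSERT INTO DBG_DB..dbg_info (db_name, tbl_name, act_flag)
         VALUES ('A1', 'orders', 'U')
      ELSE
         INSERT INTO DBG_DB..dbg_info (db_name, tbl_name, act_flag)
         VALUES ('A1', 'orders', 'I')
      -- Copy the new or updated rows to the duplicate table.
      INSERT INTO A1_BK.dbo.orders SELECT * FROM inserted
   END
   ELSE
   BEGIN
      -- Rows in deleted only mean a delete.
      INSERT INTO DBG_DB..dbg_info (db_name, tbl_name, act_flag)
      VALUES ('A1', 'orders', 'D')
      -- Copy the deleted rows to the duplicate table.
      INSERT INTO A1_BK.dbo.orders SELECT * FROM deleted
   END
END
```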
  5. Test your application. Remember to switch to the testing environment from the production environment. After you execute a transaction, you can query DBG_DB..dbg_info to find which tables were affected:
    SELECT * FROM dbg_db..dbg_info

    You can then open the affected tables in the A1_BK and A2_BK databases to see which records have been logged.

    You need to be aware of a limitation with this technique. If any tables have TIMESTAMP or IDENTITY columns, whose values SQL Server automatically generates, you can't use the following statement in your triggers:

    Insert into A1_BK.dbo.table_name select * from A1.dbo.table_name

    To use this statement, edit the scripts that you generated in Step 1 before running them: use an editor to find and replace all timestamp columns with the binary(8) data type, and delete all IDENTITY keywords.
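For example, a column pair like the following in a generated script (the table and column names are invented for illustration) would be edited as shown:

```sql
-- Fragment of a generated script as it comes out of Step 1:
CREATE TABLE orders
(  order_id int IDENTITY(1, 1) NOT NULL,
   rowver   timestamp NOT NULL
)

-- The same fragment edited for A1_BK: the IDENTITY keyword is deleted and
-- timestamp becomes binary(8), so INSERT ... SELECT * works in the trigger.
CREATE TABLE orders
(  order_id int NOT NULL,
   rowver   binary(8) NOT NULL
)
```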

    This technique for tracking third-party applications' transactions will help you understand and maintain the applications by letting you see the logic behind and the result of every transaction. And with all this information logged, you can create a Visual Basic (VB) program, for example, to show transaction details in an easy-to-read format.