Editor’s Note: Welcome to SQL Server Magazine’s monthly, Web-only series of SQL Server tips brought to you by the Microsoft SQL Server Most Valuable Professionals (MVPs). Microsoft introduced the MVP program in the 1990s to recognize members of the general public who donate their time and considerable computing skills to help users in various Microsoft-hosted newsgroups. MVPs are nominated by Microsoft support engineers, team managers, and other MVPs who notice a participant's consistent and accurate technical answers in various electronic forums and other peer-to-peer venues. For more information about the MVP program, go to http://support.microsoft.com/support/mvp/. The MVPs donate their SQL Server Magazine author fees for these tips to the World Food Programme (http://www.wfp.org). To donate free food to a hungry person today, visit http://www.thehungersite.org.

When a data file autogrows, does autogrowth block updates to the file?


No, but autogrowth of data files blocks allocations of new extents because SQL Server blocks changes to the allocation pages, which keep track of which extents in the file are in use, during autogrowth. These allocation pages are the Global Allocation Map (GAM) and Secondary GAM (SGAM). (See SQL Server Books Online—BOL—for details about GAM and SGAM.) Thus, while the file autogrows, other operations can't allocate new extents. This blocking affects, for example, INSERT and UPDATE operations that require new space allocation; such operations must wait until the autogrow operation finishes. Autogrow shouldn't affect INSERT operations that reclaim existing unused space, UPDATE operations that are performed in place, or DELETE operations that leave free space in a page. Autogrow operations won't affect SELECT operations because retrieving data doesn't involve any new space allocation.
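One way to limit how often this blocking occurs is to pre-grow the data file in a single operation during a maintenance window instead of relying on frequent small autogrow increments. A minimal sketch, assuming a database named MyDB with a data file named MyDB_Data (both names are hypothetical):

```sql
-- Grow the data file to 2000MB in one operation during off-hours, so that
-- routine INSERT and UPDATE activity doesn't trigger autogrow blocking.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 2000MB)
```

The file still autogrows as a safety net if the pre-allocated space runs out, but day-to-day allocations come out of space that already exists.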

Can you create UNIQUE and PRIMARY KEY constraints on computed columns in SQL Server 2000?


In SQL Server 2000 and earlier, the physical mechanism that UNIQUE and PRIMARY KEY constraints use to enforce uniqueness is a unique index. Because SQL Server 2000 supports indexes on computed columns, you can create UNIQUE and PRIMARY KEY constraints on computed columns.

Defining a UNIQUE constraint on a computed column is a straightforward process, as the following example shows:

CREATE TABLE T1 (
  col1 int NOT NULL,
  col2 AS col1 + 1 UNIQUE
)

However, if you define a PRIMARY KEY on a computed column, such as:

CREATE TABLE T2 (
  col1 int NOT NULL,
  col2 AS col1 + 1 PRIMARY KEY
)

you receive the following error:


Server: Msg 8111, Level 16, State 2, Line 1
Cannot define PRIMARY KEY constraint on nullable column in table 'T2'.
Server: Msg 1750, Level 16, State 1, Line 1
Could not create constraint. See previous errors.

SQL Server requires you to guarantee that your computation's result won't be NULL. The computation in the computed column can overflow (e.g., when you add 1 to the largest integer) or underflow (e.g., when you subtract 1 from the smallest integer), and other computations can result in a divide-by-zero error. Normally, these conditions terminate the query. However, when two session settings are off—ARITHABORT, which determines whether a query terminates when an overflow or a divide-by-zero error occurs, and ANSI_WARNINGS, which specifies ANSI SQL-92 standard behavior for several error conditions—the computation yields a NULL result instead of terminating the query.

In practice, when either ARITHABORT or ANSI_WARNINGS is off, you can't create an index on a computed column or insert values into a table that has an index on a computed column because SQL Server detects such an attempt and returns an error. But SQL Server still requires you to guarantee that the computation won't result in NULL values. The trick is to wrap the computed column's computation with the ISNULL() function and supply an alternative value if the computation results in NULL (which can't happen anyway for the reasons explained earlier):

CREATE TABLE T2 (
  col1 int NOT NULL,
  col2 AS ISNULL(col1 + 1, 0) PRIMARY KEY
)

How can a unique index allow multiple NULL values?


In general, a unique index can’t allow multiple NULLs. Indexes consider all NULL values equal to each other. However, with a SQL Server 2000 workaround, you can create a view that contains only the non-NULL values and build a unique index on that view. Thus, the view's index constrains all non-NULL values, but NULL values have no such constraint. The T-SQL script in Listing 1 provides more details about creating a view of non-NULL values.
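The following sketch shows the general shape of that workaround (the table, view, and index names are hypothetical; Listing 1 gives the full details). The view must be created with SCHEMABINDING so that SQL Server can build an index on it:

```sql
-- Table whose keycol should be unique among non-NULL values only.
CREATE TABLE dbo.T3 (
  keycol int NULL
)
GO
-- View that filters out the NULLs; SCHEMABINDING is required for indexing.
CREATE VIEW dbo.V_T3_NonNull WITH SCHEMABINDING
AS
SELECT keycol FROM dbo.T3 WHERE keycol IS NOT NULL
GO
-- The unique clustered index enforces uniqueness of the non-NULL values;
-- rows with a NULL keycol never appear in the view, so any number of them
-- can exist in the table.
CREATE UNIQUE CLUSTERED INDEX idx_keycol ON dbo.V_T3_NonNull(keycol)
```

Note that creating an indexed view requires certain session settings (such as ANSI_NULLS and QUOTED_IDENTIFIER) to be on.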

Can you use subqueries in CHECK constraints?


The ANSI SQL standard allows the use of subqueries in CHECK constraints, but SQL Server doesn't. With SQL Server 2000's user-defined functions (UDFs), however, you can create a UDF that performs a subquery, then include that UDF in a CHECK constraint, bypassing the limitation. Note that although SQL Server should evaluate CHECK constraints before the data affects a table, if you write your subqueries in a UDF, you must assume that any data modification has already affected the table. For example, consider the following Auctions table:

CREATE TABLE Auctions (
  itemid  int   NOT NULL,
  buyerid int   NOT NULL,
  bidno   int   NOT NULL,
  bid     money NOT NULL,
  PRIMARY KEY(itemid, bidno)
)

To add a CHECK constraint that ensures each bid is higher than the current bid for the same item, you can add the UDF and CHECK constraint that Listing 2 shows. But if you try to add the following bid:

INSERT INTO Auctions VALUES(1, 1, 1, $100.00)

the attempt fails because the UDF returns the $100.00 bid from the new row you're trying to insert—the CHECK constraint is evaluated after the row affects the table—and a bid can't be higher than itself. You must revise the UDF and the CHECK constraint as Listing 3 shows. As long as the CHECK constraint doesn't compare the new bid to itself, the CHECK constraint works fine.

Now, try to add a few bids, as Listing 4 shows. Note that the UDFs you include in your constraint don't limit you to queries against the table you're modifying; you can query other tables as well.
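As a sketch of what the revised UDF and constraint along the lines of Listing 3 might look like (the function and constraint names are hypothetical): the function computes the current maximum bid for an item while excluding the bid number of the row being checked, so the new bid is never compared to itself:

```sql
CREATE FUNCTION dbo.fn_maxbid(@itemid AS int, @bidno AS int) RETURNS money
AS
BEGIN
  -- Highest existing bid for this item, excluding the row being checked.
  RETURN (SELECT MAX(bid) FROM Auctions
          WHERE itemid = @itemid AND bidno <> @bidno)
END
GO
-- ISNULL() handles the first bid on an item, when no other bids exist yet.
ALTER TABLE Auctions
  ADD CONSTRAINT CHK_bid_higher
  CHECK (bid > ISNULL(dbo.fn_maxbid(itemid, bidno), 0))
```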

How do you find the approximate storage size of a table row?


You can use the stored procedure in Listing 5 to determine the approximate storage size of a table row. This stored procedure uses data from the INFORMATION_SCHEMA.COLUMNS view to calculate the storage size. The calculation accounts for 10 bytes of overhead inside the row and adds to that the number of bytes the columns consume, determining the size of each column according to the column's data type. Notice that some fixed-length data types, such as int and datetime, always require the same amount of storage, whereas the storage that the decimal and numeric data types require depends on the numeric precision defined for the column. Also notice that variable-length data types such as varchar and nvarchar require two additional bytes per column because SQL Server keeps a two-byte pointer to each variable-length column inside the row. If at least one variable-length column exists, SQL Server keeps two more bytes inside the row; these two bytes store the number of variable-length columns.
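For a rough idea of how such a calculation can work, the following query (a simplified sketch covering only a few data types, not Listing 5 itself) estimates the row size for a table named T1 directly from INFORMATION_SCHEMA.COLUMNS:

```sql
-- Approximate row size: 10 bytes of row overhead plus each column's size.
-- decimal/numeric storage is derived from the column's precision;
-- variable-length columns get 2 extra bytes for the per-column pointer.
SELECT 10 + SUM(
  CASE
    WHEN DATA_TYPE IN ('varchar', 'nvarchar', 'varbinary')
      THEN CHARACTER_OCTET_LENGTH + 2
    WHEN DATA_TYPE IN ('char', 'nchar', 'binary')
      THEN CHARACTER_OCTET_LENGTH
    WHEN DATA_TYPE IN ('decimal', 'numeric')
      THEN CASE
             WHEN NUMERIC_PRECISION <= 9  THEN 5
             WHEN NUMERIC_PRECISION <= 19 THEN 9
             WHEN NUMERIC_PRECISION <= 28 THEN 13
             ELSE 17
           END
    WHEN DATA_TYPE = 'int'      THEN 4
    WHEN DATA_TYPE = 'datetime' THEN 8
    WHEN DATA_TYPE = 'money'    THEN 8
    ELSE 0  -- other data types omitted in this sketch
  END) AS approx_row_bytes
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'T1'
```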

How do you use Document Object Model (DOM) from T-SQL in SQL Server 2000 to open an XML file?


You can use the OLE Automation system stored procedures sp_OACreate, sp_OAMethod, sp_OAGetProperty, sp_OASetProperty, and sp_OADestroy to manipulate the DOM object. To use DOM from T-SQL, place the sample XML document (from SQL Server Books Online—BOL—"Retrieving and Writing XML Data"), which Listing 6 shows, in the C:\TEMP\Test.XML file, then execute the code that Listing 7 shows to open and read the XML file. You can use this solution to read any XML file that's 8000 characters or less in length.
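As a sketch of the approach in Listing 7, the following batch creates a DOM object, loads the file, and reads the document text back into a variable (the ProgID and the load/xml member names follow the MSXML DOM object model; error handling is omitted for brevity):

```sql
DECLARE @hDoc int, @loaded int, @xml varchar(8000)

-- Create an MSXML DOM document object.
EXEC sp_OACreate 'MSXML2.DOMDocument', @hDoc OUTPUT

-- load() returns nonzero if the file was parsed successfully.
EXEC sp_OAMethod @hDoc, 'load', @loaded OUTPUT, 'C:\TEMP\Test.XML'

-- Read the document text; a varchar variable caps this at 8,000 characters.
EXEC sp_OAGetProperty @hDoc, 'xml', @xml OUTPUT

PRINT @xml

-- Always release the COM object when you're done with it.
EXEC sp_OADestroy @hDoc
```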