Yes, Microsoft has announced that in the next release of SQL Server, code-named Yukon, you'll be able to write stored procedures and functions in any .NET language, including C#.
Documentation on Microsoft's http://msdn.microsoft.com/library site says that you can install SQL Server 2000 on a Windows NT 4.0 server, but that some SQL Server 2000 features need Windows 2000 to work. However, the Microsoft articles don't mention which features don't work. What do I lose if I install SQL Server 2000 on an NT 4.0 server?
By running SQL Server 2000 on NT 4.0, you miss out on Kerberos security delegation and Active Directory (AD) support. Otherwise, SQL Server 2000 on NT 4.0 gives you the same functionality as SQL Server 2000 on Win2K.
I'm developing a tool to migrate data from Microsoft Access onto the more robust SQL Server 7.0 platform. To improve the migration performance, I moved SQL Server from my laptop to a Dell OptiPlex G1—a 400MHz Celeron box that's far from enterprise-class—and timed the conversion. Then, I installed SQL Server 7.0 on a Dell PowerEdge 2450, a two-processor box with 1GB of RAM, and timed the migration again. (I reduced the Max Async IO to compensate for the SCSI RAID system on the PowerEdge versus the IDE hard drive on my OptiPlex.) But the migration operation I performed on the faster SQL Server system took about 30 seconds longer instead of shaving off time, as I expected. I think I should be able to trim additional time from the enterprise-level SQL Server and Dell PowerEdge combination. Any ideas?
The performance problem is likely occurring in the disk subsystem. IDE allows disk-level write caching, which is fast but risks data loss, whereas the SCSI system disables write caching to maintain data integrity. The net result is that a desktop system with IDE can often outperform a server system with a single SCSI interface.
I'm running SQL Server 7.0 Service Pack 2 (SP2) on a Windows 2000 SP1 machine. On my Sales fact table, I created a non-unique index on the State column. When I write a query that filters data based on state, the query optimizer performs an index scan for only a few states; for the remaining states, it performs a table scan. I tried to force the query optimizer to perform index scans on all states by rewriting my query as follows:
SELECT sum(sales) sales FROM vsales WITH (index('IDX_ST')) WHERE st='IL'
However, the query optimizer continues to do table scans on most states. Because the index key is tiny (2 char bytes), I think I can improve response time if I can force the query optimizer to use an index. How can I achieve this goal?
Before you settle on an index as the appropriate access technique, you need to compare the execution costs of your query with and without the index. First, turn on Showplan and use SET STATISTICS IO ON in Query Analyzer to see the total resource cost for your query. Next, run the following query to force the index, then compare the costs:
SELECT sum(sales) FROM vsales (INDEX = IDX_ST) WHERE st = 'IL'
If the query optimizer doesn't pick the lowest-cost plan, you've probably found a bug and should call product support. Note that if the Sales fact table is so small that a table scan would be faster in all cases, the query optimizer may never use the index.
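As a sketch of that comparison, using the view and index names from the question, you could run both forms back to back in Query Analyzer and compare the logical-reads counts that STATISTICS IO reports:

```sql
SET STATISTICS IO ON

-- Let the query optimizer choose the access method
SELECT sum(sales) sales FROM vsales WHERE st = 'IL'

-- Force the nonclustered index on the State column
SELECT sum(sales) sales FROM vsales (INDEX = IDX_ST) WHERE st = 'IL'

SET STATISTICS IO OFF
```

If the forced-index run shows more logical reads than the optimizer's own plan, the table scan really is the cheaper choice for that state.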
SQL Server 2000 Enterprise Edition installs SQL Server executables and program files on both of my clustered servers simultaneously. If one of my servers goes down, how can I rebuild the server and reinstall SQL Server on that node?
You can find directions for this core SQL Server 2000 clustering procedure in SQL Server Books Online (BOL). Simply run Setup, remove the failed node from the configuration, repair the node, then run Setup again. When you add that node back into the SQL Server 2000 configuration, SQL Server reinstalls and reconfigures itself appropriately.
I'm running SQL Server 2000 on a Windows 2000 server, and I want to let a non-systems administrator (sa) user see and execute a job that runs a Data Transformation Services (DTS) package. When I give the user xp_cmdshell execute privileges, I get this error message:
exec xp_cmdshell "dir c:\"
Msg 50001, Level 1, State 50001
xpsql.cpp: Error 997 from GetProxyAccount on line 472
Can you recommend a better way to allow a non-sa user to execute such a job?
Did you enable a proxy account? You need to manually enable an account in the SQL Server Agent Properties dialog box in Enterprise Manager so that SQL Server knows which Win2K account to use when a non-sa user runs xp_cmdshell. When an sa user executes xp_cmdshell, the extended stored procedure runs under SQL Server's service account.
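If you prefer script to the GUI, SQL Server 2000 also exposes the proxy account through the xp_sqlagent_proxy_account extended stored procedure. This procedure is undocumented, so treat the call below as an assumption to verify on your build; the domain, account, and password values are placeholders:

```sql
-- Set the SQL Server Agent proxy account (placeholders: MYDOMAIN, ProxyUser, password)
EXEC master.dbo.xp_sqlagent_proxy_account N'SET',
   N'MYDOMAIN', N'ProxyUser', N'password'
```

After setting the proxy account, the non-sa user's xp_cmdshell calls run under that Win2K account instead of failing with the GetProxyAccount error.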
I'm having a network connections problem with my SQL Server 7.0 server, which runs on Windows NT 4.0. A Novell NetWare 3.11 server runs on the same physical network. To insert data from a text file that resides on the NetWare server to my SQL Server machine, I wrote a T-SQL stored procedure that uses the Data Transformation Services (DTS) Bulk Insert task. I experience no problems when I run the stored procedure by using the text file on a local NT hard disk, but the stored procedure fails when I run it against the source text file from the NetWare server. I mapped the NetWare share onto NT's O: logical drive, which is fully accessible to my login and appears in Network Neighborhood. The failure generated the following error message:
Server: Msg 4861, Level 16, State 1, Line 1
Could not bulk insert because file 'O:\OLTDATA\oltm.01' could not be opened. Operating system error code 86 (The specified network password is not correct.).
Why does the stored procedure fail when I use the source text file from the NetWare server?
You've mapped the O: logical drive to your interactive login rather than to the service account's logical environment, so SQL Server doesn't see the O drive. For the stored procedure to work, you need to use Universal Naming Convention (UNC) paths so that SQL Server will know where the network resource is, or you can copy the file to the local SQL Server system.
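A UNC-based version of the load might look like the following sketch; the server name, destination table, and terminators are assumptions you'd replace with your own values:

```sql
-- Reference the NetWare share by UNC path so the SQL Server service
-- account, not your interactive login, resolves the file location
BULK INSERT dbo.OLTData
FROM '\\NetWareServer\OLTDATA\oltm.01'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
```

Remember that it's the SQL Server service account, not your login, that needs read permission on the share.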
How can I enter information in SQL Server 7.0 that spans many related tables?
In SQL Server 7.0 and earlier, you can create a view that updates multiple tables, then update one table at a time by referencing the view in the UPDATE statement. However, you must specify only columns from a single table in the UPDATE statement. (You can find documentation for additional constraints under UPDATE in SQL Server Books Online—BOL.) You can also create a stored procedure with parameters that might affect multiple tables, then modify each table one at a time through the stored procedure. In SQL Server 2000, you can use INSTEAD OF triggers on a view as another option.
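To illustrate the SQL Server 2000 option, here's a minimal sketch of an INSTEAD OF trigger that routes one INSERT through a view into two related tables; the Customers/Orders schema is hypothetical:

```sql
-- Hypothetical view joining two related tables
CREATE VIEW dbo.CustomerOrders
AS
SELECT c.CustomerID, c.Name, o.OrderID, o.OrderDate
FROM dbo.Customers c
JOIN dbo.Orders o ON o.CustomerID = c.CustomerID
GO

-- SQL Server 2000: intercept the view INSERT and split it across both tables
CREATE TRIGGER trI_CustomerOrders ON dbo.CustomerOrders
INSTEAD OF INSERT
AS
BEGIN
   -- Add any customers the base table doesn't already contain
   INSERT dbo.Customers (CustomerID, Name)
   SELECT DISTINCT CustomerID, Name
   FROM inserted
   WHERE CustomerID NOT IN (SELECT CustomerID FROM dbo.Customers)

   -- Then add the order rows
   INSERT dbo.Orders (OrderID, CustomerID, OrderDate)
   SELECT OrderID, CustomerID, OrderDate
   FROM inserted
END
```

A single INSERT against dbo.CustomerOrders then populates both base tables, which a plain view INSERT can't do.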
My SQL Server 7.0's table keys are integers, but I'm thinking of changing the integer keys to universally unique identifiers (UUIDs) that are 35-byte varchar columns. What sort of performance hit would the switch create? Is SQL Server 7.0 optimized to handle UUIDs?
In general, the length of the key column matters as much as the column type. Upgrading from a 4-byte integer key to a 35-byte varchar key will slow down some joins and, because the key is duplicated in every nonclustered index row, increase the size of your nonclustered indexes.
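Note also that SQL Server 7.0 has a native uniqueidentifier type for UUIDs that is only 16 bytes, less than half the size of a 35-byte varchar. The table names below are hypothetical, purely to compare key widths:

```sql
-- 4-byte key: the current design
CREATE TABLE dbo.OrdersInt  (OrderID int NOT NULL PRIMARY KEY)

-- 16-byte key: SQL Server's native UUID type
CREATE TABLE dbo.OrdersGuid (OrderID uniqueidentifier NOT NULL PRIMARY KEY)

-- Up to 35-byte key: the proposed varchar UUID
CREATE TABLE dbo.OrdersStr  (OrderID varchar(35) NOT NULL PRIMARY KEY)
```

If you must move to UUIDs, uniqueidentifier limits the join and index-size penalty compared with the varchar representation.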
Why does my inline or embedded SQL run faster than my stored procedures?
Recompilations might be the source of the slower stored procedure speed. To find out for sure, you need to do some performance investigation, such as comparing the Showplan output for the inline queries against the stored procedure calls and comparing query-plan cache hits to cache misses. You can also try coding the object owner for referenced tables, views, and procedures inside your stored procedures, as the following example shows:
SELECT * FROM dbo.mytable
This technique helps you reuse plans and prevent cache misses.
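Inside a stored procedure, the same rule applies; a minimal sketch (the procedure name is hypothetical, mytable comes from the example above):

```sql
CREATE PROC dbo.GetMyTable
AS
   -- Owner-qualifying the table lets all users share one cached plan
   -- instead of recompiling per owner context
   SELECT * FROM dbo.mytable
GO
```

Without the dbo. qualifier, SQL Server must resolve the object per user at execution time, which can defeat plan reuse.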
Editor's Note: Send your SQL Server questions to Richard Waymire, Microsoft's group program manager for SQL Server management tools, at firstname.lastname@example.org.