DTS DLL Debugging
Debugging Visual Basic (VB) COM DLLs that you use in Data Transformation Services (DTS) is easier than it sounds. First, open the DTS Package in the DTS Designer. Then, start the VB integrated development environment (IDE) and open the WebDataFetcher project. Within the VB IDE, you can run the project with a full compile by pressing Ctrl+F5. Running with a full compile is a good way to catch blatant syntax errors: the IDE highlights in blue any lines that contain them.
You can also step through the code to inspect how it works or to debug errors in the program logic by pressing F8, which puts the VB IDE into debug mode and temporarily registers the WebDataFetcher project. In debug mode, the VB IDE stops on the first line of code in the project before executing it. When you run the DTS Package, the VB IDE window drops into the background; after you run the Package, click the VB IDE icon to bring the IDE back into view. The IDE highlights in yellow the first line of the GetFile function, which is the line it will execute next. Each time you press F8, the VB IDE executes the current line, moves to the next line, and highlights it. If you hover the cursor over any variable, such as aURL, and pause, a window pops up showing that variable's value. If you don't want to step through the code, press F5, and the VB IDE runs the rest of the code without stopping. To stop the VB IDE from running in debug mode, choose End from the Run menu.
The Security Connection
Here's a summary of steps you can take to optimize SQL Server security and connectivity.
- Use Windows-only authentication with SQL Server.
- Use trusted connections instead of strings that pass SQL Server usernames and passwords.
- Put the connection objects in DLLs and install the DLLs in Microsoft Transaction Server (MTS).
- Set your code to use OLE DB instead of ODBC if you're using ADO. ADO talks to OLE DB natively, so the ODBC route adds an extra processing layer (the OLE DB Provider for ODBC); using a native OLE DB provider directly improves performance by eliminating that layer.
- Use TCP/IP between your IIS and SQL Server machines, not the default Named Pipes, if IIS and SQL Server are on separate servers. As Microsoft article "PRB: 80004005 ConnectionOpen (CreateFile()) Error Accessing SQL" at http://support.microsoft.com/support/kb/articles/q175/6/71.asp states, "When Named Pipes are used to access the SQL Server, IIS tries to impersonate the authenticated user, but it does not have the ability to prove its identity."
- Put your connections and stored procedure calls into Visual Basic (VB) code DLLs, install them in MTS (which will automatically pool connections for you), and create server objects in VBScript to use the connections.
- Ask for the correct Microsoft department if you need help using ADO-based code to talk to SQL Server. Microsoft Technical Support not only has IIS and SQL Server experts; it also has ADO-to-SQL Server experts.
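As a concrete illustration of the trusted-connection and OLE DB advice above, here are two sample ADO connection strings; the server and database names are hypothetical placeholders. The first uses the native SQLOLEDB provider with a trusted (Windows authentication) connection; the second goes through the ODBC driver and therefore through the extra processing layer.

```
Native OLE DB provider, trusted connection:
  Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=Northwind;Integrated Security=SSPI;

ODBC route (extra layer, shown for contrast):
  Driver={SQL Server};Server=MyServer;Database=Northwind;Trusted_Connection=yes;
```

Note that neither string passes a SQL Server username or password; both rely on the caller's Windows credentials.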
Fast Bulk Load in SQL Server 7.0
SQL Server 7.0 offers several high-speed mechanisms for loading data. Bulk copy program (bcp) is a high-speed file-import utility that SQL Server has supported since the early days of the database management system (DBMS). Bcp lets you quickly load large files and is often a good choice, but it's not user friendly.
In SQL Server 7.0, Microsoft extended SQL Server's data-import capabilities with DTS and the new T-SQL command BULK INSERT. DTS offers a tremendous amount of data-handling flexibility, but BULK INSERT can be twice as fast as either bcp or DTS when used in comparable circumstances.
Why is BULK INSERT so much faster? BULK INSERT is a T-SQL command, so it runs in-process with the SQL Server engine. Thus, SQL Server doesn't need to pass the data along the normal client API network-abstraction layer called a Network Library (NetLib). Bypassing the NetLib layer saves a huge amount of time.
In addition, SQL Server 7.0 supports a custom task add-on that lets you write a BULK INSERT task directly from a DTS package. (Microsoft also integrated this feature into SQL Server 2000.) If you're looking for the best combination of speed and programmatic workflow control, BULK INSERT from DTS might be the answer.
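For illustration, a minimal BULK INSERT statement might look like the following sketch; the table name, file path, and delimiters are hypothetical and would need to match your own data file.

```sql
-- Load a comma-delimited text file into a table (names and path are examples)
BULK INSERT Northwind.dbo.Orders
FROM 'C:\data\orders.txt'
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter used in the file
    ROWTERMINATOR = '\n',    -- row delimiter used in the file
    TABLOCK                  -- a table lock helps enable fast, minimally logged loads
)
```

Because the statement runs inside the server, the file path is interpreted on the SQL Server machine, not on the client that submits the command.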
7 Cursor Cues
Proper API cursor use can mean all the difference in the world when it comes to application scalability. The Microsoft TechNet article "Top 10 SQL Server 7.0 Support Issues and Resolutions" gives several cursor tips, including:
- Use default settings for singleton SELECT statements (which return one row) or other small result sets. Caching a small result set on the client and scrolling through the cache is more efficient than asking the server to implement a cursor.
- Use the default settings when fetching an entire result set to the client, such as when you're producing a report. Default result sets are the fastest way to transmit data to the client.
- You have to use default result sets for any T-SQL statement or batch of T-SQL statements that generates multiple result sets.
- In joins, static or keyset-driven cursors can be faster than dynamic cursors. You must use static or keyset-driven cursors when you want to do absolute fetches.
- Static and keyset-driven cursors increase the usage of tempdb. Static server cursors build the entire cursor in tempdb; keyset-driven cursors build the keyset in tempdb.
- In applications that use server cursors, each call to an API fetch function or method causes a round-trip to the server. Applications can minimize these round-trips by using block cursors with a reasonably large number of rows returned on each fetch. You can set the block size by using ADO's CacheSize property. I like to use batch sizes in the range of 100 to 500 rows.
- Consider SQL Server 7.0's cursor performance optimization feature, called Fast_Forward Cursors. SQL Server Books Online (BOL) has a lot of information about Fast_Forward Cursors.
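To show the FAST_FORWARD option in context, here is a minimal T-SQL sketch of a server cursor declaration and fetch loop; the table and column names are hypothetical.

```sql
DECLARE @OrderID int, @OrderDate datetime

-- FAST_FORWARD requests a forward-only, read-only cursor with
-- performance optimizations enabled
DECLARE order_cur CURSOR FAST_FORWARD FOR
    SELECT OrderID, OrderDate
    FROM Orders

OPEN order_cur
FETCH NEXT FROM order_cur INTO @OrderID, @OrderDate
WHILE @@FETCH_STATUS = 0
BEGIN
    -- process the current row here
    FETCH NEXT FROM order_cur INTO @OrderID, @OrderDate
END
CLOSE order_cur
DEALLOCATE order_cur
```

Because FAST_FORWARD implies forward-only and read-only, it isn't an option when you need scrolling or positioned updates.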
Lost in Translation?
Q. I'm a Visual Basic (VB) developer and new to T-SQL programming. Does T-SQL have a function equivalent to VB's InStr() function?
A. Life would be much easier if functions had the same name across different programming languages.
T-SQL's charindex function, for example, does the same thing as VB's InStr(): it returns the starting position of the first occurrence of one string within another string. Charindex's syntax is
charindex ( expression1 , expression2 \[ , start_location \] )
Expression1 is a short character data type expression that contains the sequence of characters you want to find. Expression2 is the character string that you want to search. And start_location is the character position where charindex starts searching for expression1 in expression2. If you don't specify a start_location or if start_location is a negative number or zero, the charindex search starts at the beginning of expression2.
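A few examples, with string literals chosen purely for illustration, show how charindex behaves:

```sql
SELECT CHARINDEX('SQL', 'Transact-SQL')         -- returns 10
SELECT CHARINDEX('SQL', 'SQL and more SQL', 2)  -- search starts at position 2, so this returns 14
SELECT CHARINDEX('xyz', 'Transact-SQL')         -- no match, so this returns 0
```

Like InStr(), charindex uses 1-based character positions and returns 0 when the string isn't found.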
No Datetime Primary Keys, Please
Q. I work on a payroll-management database and suggested to the development team that I use a datetime column as a table's primary key. Team members told me not to do this, but when I pressed for a reason, no one could provide a good answer. Why shouldn't I use a datetime column as a primary key?
A. I agree with your development team. In general, you should avoid using datetime as a primary key. First, datetime is an 8-byte data type, and narrow keys tend to be more efficient and faster than wider keys. If your table is going to be very large, a smaller integer-based data type, such as the 4-byte int or the 2-byte smallint, might be a better fit. Second, and much more important, datetime is accurate only to one three-hundredth of a second, or 3.33 milliseconds (ms). By definition, primary key columns must be unique, and you can't ensure that you'll have unique values in a datetime column. Your business rules might say that entering multiple records within 3.33ms of one another is impossible, but I think that making that assumption is dangerous. Business rules and technical assumptions can always change.
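One common alternative is a narrow integer surrogate key, with the datetime kept as an ordinary column. A minimal sketch, using a hypothetical payroll table, might look like this:

```sql
CREATE TABLE PayrollEntry (
    PayrollEntryID int IDENTITY(1,1) NOT NULL PRIMARY KEY, -- narrow 4-byte surrogate key
    EntryTime      datetime NOT NULL,  -- the timestamp stays as a regular column
    EmployeeID     int NOT NULL,
    Amount         money NOT NULL
)
```

The IDENTITY property guarantees uniqueness regardless of how closely together rows arrive, and the 4-byte key keeps the clustered index and any foreign keys narrow.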
SQL Server 2000 supports variants through the new sql_variant data type. Similar in many ways to the VB Variant data type, sql_variant can store information returned from any SQL Server–supported data types except text, ntext, image, timestamp, and sql_variant. Actually, the sql_variant type isn't a base data type but a way to tell SQL Server that the data type will be set when the data is stored in that column. So the data type for a particular column can be any of the supported sql_variant data types. Although the implementation isn't the same as that of the VB Variant data type, the sql_variant data type is flexible enough to solve the changing-rule problem. Because sql_variant morphs itself into almost any SQL Server data type, SQL Server sets the column's "real" data type based on the data it writes to the column. For example, a sql_variant data type "becomes" an integer when SQL Server writes an integer value to the column. If you use sql_variant in an extended property table, the customer data properties now look slightly different, as Table A shows.
In this case, you can define the table with three columns: one to identify the object, another to name the property being managed, and a third to hold the data for that property. This table is an example of a sparse table, which lets you define as many properties as the current business rules dictate. The sql_variant Info column can hold any non-BLOB data type, so you can easily store almost any type of information, even if during the design phase, you didn't anticipate needing to handle that type of data.
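A sparse extended-property table along these lines might be defined as follows; the table and column names are hypothetical, and SQL Server 2000's SQL_VARIANT_PROPERTY function is used to inspect the underlying type.

```sql
CREATE TABLE CustomerProperty (
    CustomerID   int NOT NULL,          -- identifies the object
    PropertyName varchar(50) NOT NULL,  -- names the property being managed
    Info         sql_variant NULL,      -- holds the data for that property
    CONSTRAINT PK_CustomerProperty PRIMARY KEY (CustomerID, PropertyName)
)

INSERT CustomerProperty VALUES (1, 'CreditLimit', 5000)  -- Info's base type becomes int
INSERT CustomerProperty VALUES (1, 'LastReview',
    CAST('20000601' AS datetime))  -- CAST controls the base type; here it becomes datetime

-- SQL_VARIANT_PROPERTY reports the underlying type SQL Server assigned to each value
SELECT PropertyName, SQL_VARIANT_PROPERTY(Info, 'BaseType') AS BaseType
FROM CustomerProperty
```

Without the explicit CAST, a quoted literal would be stored with a character base type, so casting is how you tell SQL Server which "real" data type the column should take on for that row.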
Templates for Development
I use templates in two ways: to develop objects and to reduce the amount of typing that I have to do when I write a section of code. At the company I work for, I customize the templates that I use for development according to the company's development standard. The templates provide handy development snippets of code that I use when I compile a series of objects on a server. Figure A shows my code template for generic stored procedures. This template consists of four sections: drop, create, and permissions sections and a success-or-failure notification section.
Every stored procedure that I write includes these four sections because they speed development for everyone in the shop. For the DBAs, the stored-procedure template produces a script that runs and reports a success or failure; for the developers, the template produces consistent documentation. In addition, the "drop" section of the template eliminates errors that result when you try to create an object that already exists.
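The four-section template might be sketched as follows; usp_MyProc is a placeholder name, and your own Figure A template would carry your shop's real header comments and standards.

```sql
-- Section 1: drop the object if it already exists, so the script reruns cleanly
IF EXISTS (SELECT * FROM sysobjects
           WHERE name = 'usp_MyProc' AND type = 'P')
    DROP PROCEDURE dbo.usp_MyProc
GO

-- Section 2: create the procedure
CREATE PROCEDURE dbo.usp_MyProc
    @Param1 int
AS
BEGIN
    SELECT @Param1  -- replace with the procedure's real logic
END
GO

-- Section 3: grant permissions per your shop's standard
GRANT EXECUTE ON dbo.usp_MyProc TO public
GO

-- Section 4: report success or failure to whoever runs the script
IF EXISTS (SELECT * FROM sysobjects
           WHERE name = 'usp_MyProc' AND type = 'P')
    PRINT 'usp_MyProc created successfully'
ELSE
    PRINT 'usp_MyProc creation FAILED'
GO
```

Because the drop section runs first, the script never fails with an "object already exists" error, which is exactly the benefit described above.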
You can use the xp_sendmail extended stored procedure in triggers, stored procedures, and batch commands to incorporate SQL Mail functionality into your code. The xp_sendmail extended stored procedure lives in the master database, but with the proper syntax, you can call it from any database: use master.dbo.xp_sendmail followed by any of the input parameters in Table B (a list of these parameters is also available in BOL).
You should be aware of the following rules when using the xp_sendmail extended stored procedure:
- You must configure SQL Mail sessions before you use xp_sendmail.
- Only one user can execute xp_sendmail at a time. If you execute long-running queries, be aware that other connections must wait until the first connection has completed processing.
- Xp_sendmail queries are subject to locks and blocks just like any other query.
- Xp_sendmail's Execute permissions default to members of sysadmin, but you can grant these permissions to other users.
With proper planning about when to send these email messages and who should receive them, using xp_sendmail from SQL Server can simplify administration and troubleshooting.
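As an example, a trigger or batch might notify an operator like this; the recipient address and query are hypothetical, and the full parameter list is in Table B and BOL.

```sql
-- Call xp_sendmail from any database via its three-part name
EXEC master.dbo.xp_sendmail
    @recipients = 'dba@example.com',
    @subject    = 'Nightly load status',
    @query      = 'SELECT COUNT(*) FROM Northwind.dbo.Orders',
    @attach_results = 'TRUE'  -- send the query output as an attachment
```

Remember the rules above: a SQL Mail session must already be configured, and if @query is long-running, other xp_sendmail callers will queue behind it.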
Use Codebehind to Separate ADO.NET from HTML
Active Server Pages (ASP) developers need to be aware of one change that will break some existing ASP pages you migrate: ASP.NET doesn't let you put functions inside the <% %> render blocks. You need to place all server-side functions within a <script runat="server"> block or in the page's codebehind class.