DBAs use database partitioning for two main reasons: to enforce security by limiting access to data and to enhance performance. In "Horizontal and Vertical Partitioning," February 1997, Robert Schneider explains how to combine replication with vertical (column) and horizontal (row) partitioning to deliver only relevant sections of a database to replication subscribers. DBAs can use partitioning to separate current data from historical data, which lets a DBA set up a transactional server for current data and an OLAP or reporting server for historical data. The greatest benefit of separating one database into two is that you can tune each database or server for its primary purpose instead of trying to make one server work well for two different activities.

Usually, you wouldn't consider partitioning a SQL Server 2000 database until it grew to hundreds of gigabytes or even terabytes because SQL Server 2000 has powerful built-in optimization capabilities that, combined with proper indexing, should let you keep all your data on one server. But recently, I worked with a client whose small database was performing poorly. In this situation, circumstances and software constraints led me and the company's IT staff to choose database partitioning to improve performance—with great success.

Analyzing the Problem

Tom is the head of IT for my client—a small company that does human-resources (HR) processing. Several years ago, the company purchased a semi-customizable client-server software solution that has become mission-critical. The software supports the company's main functions and is available 24 x 7. The company uses the software for managing accounts and customer information, processing HR data, reporting and statistical analysis, and invoicing. When Tom called me, his database was outgrowing his server's capacity. Tom's company had just spent a lot of money to upgrade the system's hardware and networking components, and the business was growing—but so was the data.

Tom was in a fix: He'd worked hard to streamline and optimize existing technology to keep costs down so that the company could thrive. Even so, Tom's boss wasn't inclined to spend more money to enhance the technology because this mission-critical system was now an orphan product—one whose vendor had gone out of business, leaving Tom with no product support for the software. Tom planned to eventually replace the entire system, but meanwhile, he had to keep the existing database server functioning while supporting the ever-increasing operational load.

The primary production database contained 5GB of data, and Tom's users were encountering slow response times for complex queries and data updates. For example, the job of printing customer invoices (which happened three times each month) took half a day and further bogged down server response time.

As part of the discovery and systems-analysis process, Tom and I defined several solutions that could relieve the overburdened production server. We could tweak indexes; optimize the embedded business logic, which was partly implemented as stored procedures on the production database; convert the program-enforced relationships to Declarative Referential Integrity (DRI); move data to separate filegroups; partition the data into current and aged data tables on one database; or partition the database itself. All but the last two options would have given incremental relief to the server and the users, and in a less time-critical situation, we might have implemented some or all of these techniques. But at the current growth rate, Tom realized that his server would exceed capacity in only about 6 weeks, so we needed to implement an immediate solution. Tom opted for partitioning the database, figuring that this would give maximum relief and extend the life of his servers for a year or more.

Before choosing partitioning, we considered other, less drastic options. With smaller databases, the most obvious choice for performance tuning is to tweak the database's indexes. Index tuning is typically the best thing you can do to improve performance and the least intrusive change you can make to a production application—as long as it works. But a few factors led us to reject this solution. First, we performed an index analysis, which told us that the tables were already adequately indexed. We experimented with alternative indexing schemes, but nothing improved performance as much as Tom wanted, and in some cases, the changes slowed down data entry. We strongly suspected that logic embedded in the client programs on the user desktop was forcing table scans, but without the source code, we couldn't confirm our theory. In a table scan, SQL Server reads every row in the table to find the data needed to resolve the query instead of using indexes to access the data directly—so table scans could be part of the performance problem.
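
For a query you can run yourself, a plan check shows whether SQL Server resolves it with a table scan or an index seek. The following is only a sketch; the Invoices table, its columns, and the customer ID are hypothetical placeholders, not objects from Tom's database.

  -- Return the textual query plan without executing the query
  SET SHOWPLAN_TEXT ON
  GO
  SELECT InvoiceID, InvoiceDate, InvoiceAmount
  FROM dbo.Invoices
  WHERE CustomerID = 4711
  GO
  SET SHOWPLAN_TEXT OFF
  GO
  -- The plan output shows either a Table Scan operator or an
  -- Index Seek/Index Scan operator for dbo.Invoices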

An option that might have helped more than additional indexing would have been judicious use of filegroups—separating system tables from user tables, clustered indexes from nonclustered indexes, and tabular data from text, ntext, and image data, then spreading the filegroups across separate disk spindles. However, we deduced that the data location was hard-coded into the client programs because the client applications failed when we tried to change the filegroups. So we had to leave all the data in the primary filegroup on the production database.
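
If your client applications tolerate this kind of change, the commands involved look something like the following sketch. The ProductionDB name, the file path, and the filegroup, table, and index names are all hypothetical; in our case, moving objects off the primary filegroup broke the client programs, so test first.

  -- Add a filegroup and a file to hold nonclustered indexes
  ALTER DATABASE ProductionDB
  ADD FILEGROUP FG_Indexes
  GO
  ALTER DATABASE ProductionDB
  ADD FILE
    (NAME = ProdIndexes1,
     FILENAME = 'E:\MSSQL\Data\ProdIndexes1.ndf',
     SIZE = 500MB,
     FILEGROWTH = 50MB)
  TO FILEGROUP FG_Indexes
  GO
  -- Rebuild an existing nonclustered index on the new filegroup
  CREATE INDEX ix_Invoices_CustomerID
  ON dbo.Invoices (CustomerID)
  WITH DROP_EXISTING
  ON FG_Indexes
  GO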

We also considered rewriting the worst-performing parts of the production application, such as the one Tom's company used for invoicing. The algorithms that dynamically computed the invoice amounts gathered data from eight tables, so the volume of data that the application used and the way it was gathering the data (by table scans) was a factor in the slow performance. But one restriction that the existing software package imposed on us was that we couldn't restructure the production tables. The production package was a compiled and encrypted third-party software solution, so we could only infer what the client applications were doing because we couldn't access the code.

Our options were limited; we couldn't change the table architecture, we couldn't change the application, and the indexing seemed fine. The option we had left was partitioning. We decided that our best solution was to get the weight of old, unused data off the production database. This option was in line with Tom's plans as well. Because the production application doesn't use the company's historical data after 90 days, the company had been planning to partition the production database and create a history database in preparation for a move to data warehousing. Database partitioning was a valid solution that also served the company's goals.

Creating the Solution

Scalability was a requirement for any database changes, so rather than partition the user data into separate tables for current and aged data, we decided to partition the data into two databases that we installed on separate servers. By separating the data, we could tune each server for its primary purpose—data entry or reporting. The next step was to determine which of the 65 production tables were candidates for partitioning. The tables in this database were relatively well normalized—the database didn't include declared or triggered relationships. But we had to determine precisely how each table related to the others so that we wouldn't inadvertently remove data that users needed for current production. Examining the tables was useful; during the process, Tom and his staff came to understand their business data better than ever.
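
To see where the bulk of the data lived among the 65 production tables, a query like the following against the SQL Server 2000 system tables is one quick check; the row counts that sysindexes maintains are approximate, but they're good enough to rank the tables.

  -- Rank user tables by approximate row count
  SELECT o.name AS TableName, i.rows AS ApproxRows
  FROM sysobjects o
  JOIN sysindexes i ON i.id = o.id
  WHERE o.type = 'U'
    AND i.indid IN (0, 1)   -- 0 = heap, 1 = clustered index
  ORDER BY i.rows DESC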

Each production table contained data that fell into one of two categories: The data was either constantly in use (e.g., in reference or lookup tables) or aged (e.g., in associative tables that contain transaction facts). The aged data was causing the performance problem, accumulating until the tables contained many millions of rows. A general rule is that on a small table (one containing only a few thousand rows), a table scan might be faster than an index search. But when you've got tens or hundreds of millions of rows in a table, an index search is more efficient than a table scan. SQL Server's optimizer knows this and chooses the appropriate operation for the table size and the query type. In our case, something in the compiled logic was forcing SQL Server to choose table scans instead of index searches, regardless of the number of rows in the table.

Fourteen tables fell into the aged-data category. Tom's users didn't need the aged data for current daily activities. For auditing and historical reasons, Tom couldn't delete the data, but he couldn't leave it in the production database without compromising his production server's performance.

After we identified the tables that we wanted to partition, we developed algorithms to determine which records were aged and which were current. This step wasn't as easy as it sounds. The intertable relationships were enforced by nearly 500 stored procedures, so we had to make some assumptions regarding certain data and how it related to the rest of the data in the database. Not all of the tables had a designated primary key; one table had a 700-byte composite primary key. The UIs that called the code were precompiled, and no source code was available, so we were working blindly.
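
As a greatly simplified illustration of the kind of rule involved, the sketch below flags rows older than the 90-day window as candidates for archiving. The Invoices table and InvoiceDate column are hypothetical, and the real algorithms also had to follow the undeclared intertable relationships before calling a row aged.

  -- Simplified aging rule; names are placeholders
  SELECT InvoiceID
  FROM dbo.Invoices
  WHERE InvoiceDate < DATEADD(day, -90, GETDATE())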

We did all the preliminary work—sorting aged tables and data, partitioning the database, testing the production programs, and setting up the reporting server—in a test environment that we configured to match the production environment as closely as possible. However, there's nothing like a production environment to reveal the flaws in your logic. Our first attempt at partitioning the production database failed, and we had to try again. Because of our painstaking planning, we were able to quickly restore the production database so that no one outside IT realized we'd partitioned the database the night before.

To partition a production database, you need several hours of downtime and lots of disk space on both the local server and a network file server. A high transfer rate on your network (e.g., a gigabit connection) speeds the process. Tom got another server to house the new database (ArchiveDB) and contain the aged data. If a new server isn't in your budget, you can create ArchiveDB as a second database on your production server. To partition your database, you can follow the steps we used:

  1. Create an archive database to hold the aged data. If necessary, create a different filegroup structure, separating system from user data, data tables from nonclustered indexes, and tabular data from text, ntext, and image data. The code in Listing 1 is similar to the code I used to create ArchiveDB and establish filegroups for the new database; a rough sketch of these commands appears after this list.
  2. Take the production database offline, and make a full backup.
  3. Detach the production database, copy it, and store the copy on the local server. Then, make a second copy, and store it on the network file server for safekeeping.
  4. Reattach the production database.
  5. Set logging on both the production database and ArchiveDB to bulk-logged.
  6. Use Data Transformation Services (DTS) to copy data from the production database to ArchiveDB. This method was the fastest way for us to copy data from one filegroup schema to another.
  7. Run scripts to delete the current data from ArchiveDB.
  8. Run scripts to delete aged data from the production database.
  9. Run count scripts to confirm that no records were lost and that no data overlaps between the two databases.
  10. Reset databases to full logging.
  11. Back up both databases, and back up the master database on each server.
  12. Bring both databases back online.
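
To give a rough idea of the commands behind steps 1, 3 through 5, 9, and 10, the following sketch uses placeholder database names, file paths, sizes, and tables; it isn't the actual Listing 1 code, and if ArchiveDB lives on a separate server, run its statements there.

  -- Step 1: create the archive database with its own filegroup layout
  CREATE DATABASE ArchiveDB
  ON PRIMARY
    (NAME = ArchiveSys,  FILENAME = 'E:\MSSQL\Data\ArchiveSys.mdf',  SIZE = 100MB),
  FILEGROUP FG_Data
    (NAME = ArchiveData, FILENAME = 'F:\MSSQL\Data\ArchiveData.ndf', SIZE = 4GB),
  FILEGROUP FG_Indexes
    (NAME = ArchiveIdx,  FILENAME = 'G:\MSSQL\Data\ArchiveIdx.ndf',  SIZE = 1GB)
  LOG ON
    (NAME = ArchiveLog,  FILENAME = 'H:\MSSQL\Log\ArchiveLog.ldf',   SIZE = 500MB)
  GO

  -- Steps 3 and 4: detach the production database so you can copy its files,
  -- then reattach it
  USE master
  GO
  EXEC sp_detach_db 'ProductionDB'
  -- (copy the .mdf and .ldf files to the local server and the network
  --  file server before continuing)
  EXEC sp_attach_db 'ProductionDB',
    'D:\MSSQL\Data\ProductionDB.mdf',
    'D:\MSSQL\Log\ProductionDB.ldf'
  GO

  -- Step 5: switch both databases to bulk-logged recovery before the DTS copy
  ALTER DATABASE ProductionDB SET RECOVERY BULK_LOGGED
  ALTER DATABASE ArchiveDB SET RECOVERY BULK_LOGGED
  GO

  -- Step 9: compare row counts table by table; repeat for each partitioned
  -- table and run the matching count against ArchiveDB on the archive server
  SELECT COUNT(*) AS RowCnt FROM dbo.Invoices
  GO

  -- Step 10: return both databases to full recovery
  ALTER DATABASE ProductionDB SET RECOVERY FULL
  ALTER DATABASE ArchiveDB SET RECOVERY FULL
  GO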

The first time we ran this process, the record counts didn't match, and we realized we'd lost data in the partitioning process. We found an error in one of the algorithms we used to calculate when a record is old enough to archive. We needed more time to test the corrected algorithm, so we decided to fall back to the original production database. In only 1 minute, we detached the newly partitioned production database and attached the original production database from the local-server copy we made at the beginning of the process. The database went back into production almost immediately with no data loss or operational errors.

The next day, we corrected and tested the troublesome algorithm. That night, we went through the partitioning process again, and this time, the record counts were correct. The process ran so smoothly that we had time to modify the reports that used historical data to reflect the new partitioned architecture. We went into production the next morning with no data loss, no data errors, and better performance. The production database, which had previously been nearly 5GB, was now a trim 500MB. The invoicing job, which had previously taken half a day to run, finished in half an hour, limited only by the speed of the printer.

This type of database partitioning is called horizontal partitioning. We used a complex date algorithm to determine how to partition the data. To maintain the division between the current data and the aged data, I created a set of programs that run as a scheduled job. These programs search the tables in the production database, find records to archive, copy the records to ArchiveDB, then delete them from the production database. This scheme ensures a minimal load on the production server because you can schedule the job to run during off-peak times. We didn't implement distributed partitioned views for data entry because data entry happens only in the current database and because identifying and creating partitioning columns would be hard. (For information about distributed partitioned views, see Kalen Delaney and Itzik Ben-Gan's articles "Distributed Partitioned Views," August 2000; "Querying Distributed Partitioned Views," September 2000; and "Modifying Views with INSTEAD OF Triggers," October 2000.)
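
A single-table version of that idea might look like the following sketch. The procedure name, the Invoices table, the plain 90-day cutoff, and the ARCHIVESRV linked server are all assumptions; the real job walks every aged table and applies the full date algorithm.

  -- Simplified sketch of one nightly archive step
  CREATE PROCEDURE dbo.usp_ArchiveInvoices
  AS
  SET NOCOUNT ON

  DECLARE @cutoff datetime
  SET @cutoff = DATEADD(day, -90, GETDATE())

  -- Copy the aged rows to the archive database on the archive server
  INSERT INTO ARCHIVESRV.ArchiveDB.dbo.Invoices
  SELECT *
  FROM dbo.Invoices
  WHERE InvoiceDate < @cutoff

  -- If the copy failed, leave the production rows untouched
  IF @@ERROR <> 0
    RETURN 1

  -- Remove the rows that were just archived from the production table
  DELETE FROM dbo.Invoices
  WHERE InvoiceDate < @cutoff
  GO

Scheduling a procedure like this through SQL Server Agent during off-peak hours keeps the archiving work away from the daytime load.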

A Happy Result

We delivered a scalable solution that gave Tom's production database a new lease on life by trimming it down to a manageable size for his server hardware. Tom's boss was happy, users could work with the new databases efficiently, and Tom has scheduled a replacement for the client interface in the next year. The old data that's housed in ArchiveDB will become the company's data warehouse.

Crafting a good solution for a poorly performing database doesn't always mean modifying table designs, tweaking indexes, upgrading hardware, or enforcing relationships. In a production environment, you might not be able to make these kinds of changes because of the effect they might have on operations. Instead, you might have to explore alternatives such as database partitioning.