
Looking at the examples in Listings 1 and 2, I don't understand why SQL Server rounds certain decimal values. The first number after the decimal data type is the precision setting, which defines the total number of digits the value can hold. The second number is the scale setting, which defines the number of digits to the right of the decimal point. Listing 1 returns the result set

`.00357175   .003571750`

However, if I change the decimal definitions to (19, 9), as Listing 2 shows, SQL Server reduces the scale and rounds the result to six decimal places as follows:

`.00357175   .003572`

Given that the final answer requires eight decimal places and I've set the scale to 9, why does SQL Server round the result?

The SQL Server Books Online (BOL) topic "Precision, Scale, and Length" says, "The result precision and scale have an absolute maximum of 38." Using precision setting 19 and scale setting 9 should keep you well under the maximum, so it's curious that SQL Server rounds the result set of Listing 2 even though the batch works as you expect when you set the scale to 10. BOL also says that "when a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated." That means SQL Server chooses to sacrifice decimal places, rounding the result, rather than lose part of the integer value, which is what you want.
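Listings 1 and 2 aren't reprinted here, but a batch of the following shape (the values are illustrative, not the reader's) reproduces the behavior:

```sql
-- Three DECIMAL(19, 10) operands, as in Listing 1
-- (the specific values here are made up for illustration)
DECLARE @a DECIMAL(19, 10)
DECLARE @b DECIMAL(19, 10)
DECLARE @c DECIMAL(19, 10)

SET @a = 0.5
SET @b = 0.5
SET @c = 0.014287

-- The product's type is DECIMAL(38, 9), so the result
-- displays nine decimal places: 0.003571750
SELECT @a * @b * @c

-- Declaring the same operands as DECIMAL(19, 9), as in Listing 2,
-- yields a DECIMAL(38, 6) result instead: 0.003572
```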

Table 1 shows how SQL Server calculates the result precision and scale for various operations. At first glance, your precision setting is well under the maximum allowed value. However, if you use Table 1's expression for multiplication operations and add 19+19+19+1 (which equals 58), you see that your precision result is 20 digits above the maximum. For multiplication operations, you subtract the scale setting from the precision setting (p-s) to get the number of digits in the integer part of the number. For example, when you multiply three DECIMAL(19, 10) values, you have three sets of precision and scale settings (p1=19, s1=10; p2=19, s2=10; and p3=19, s3=10). You can use the following formula to find out that SQL Server will reserve 29 integer digits:

`((p1-s1)+(p2-s2)+1)+((p3-s3)+1) = 29`

Given a maximum result precision of 38, the maximum number of decimal places you have left is 9, as your first result set shows.
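You can verify the result type directly with SQL_VARIANT_PROPERTY (the variable names and values below are illustrative):

```sql
DECLARE @a DECIMAL(19, 10)
DECLARE @b DECIMAL(19, 10)
DECLARE @c DECIMAL(19, 10)

SET @a = 0.5
SET @b = 0.5
SET @c = 0.014287

-- Precision 38, scale 9: SQL Server reserved 29 integer digits
-- and left 38 - 29 = 9 decimal places for the result
SELECT SQL_VARIANT_PROPERTY(@a * @b * @c, 'Precision') AS ResultPrecision,
       SQL_VARIANT_PROPERTY(@a * @b * @c, 'Scale') AS ResultScale
```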

To see why SQL Server rounds the data differently when you set the scale to 9, simply plug your new numbers into the formula; you end up with 32 integer digits and 6 decimal places, as your second result set shows. (For more information about the decimal data type, see my article "Store Integers as Decimal Data Type" at InstantDoc ID 43114.)
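The same check with the scale set to 9 confirms the arithmetic (again with illustrative values):

```sql
DECLARE @a DECIMAL(19, 9)
DECLARE @b DECIMAL(19, 9)
DECLARE @c DECIMAL(19, 9)

SET @a = 0.5
SET @b = 0.5
SET @c = 0.014287

-- ((19-9)+(19-9)+1)+((19-9)+1) = 32 integer digits,
-- leaving 38 - 32 = 6 decimal places
SELECT SQL_VARIANT_PROPERTY(@a * @b * @c, 'Precision') AS ResultPrecision, -- 38
       SQL_VARIANT_PROPERTY(@a * @b * @c, 'Scale') AS ResultScale          -- 6
```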

You can work around this behavior by explicitly CASTing each step of the operation. Listing 3 shows how you can manage rounding in complex calculations by applying explicit CASTs to the calculation's subexpressions.
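Listing 3 isn't reprinted here, but the technique looks like this sketch; the intermediate DECIMAL(18, 10) type is an illustrative choice, not necessarily the listing's:

```sql
DECLARE @a DECIMAL(19, 10)
DECLARE @b DECIMAL(19, 10)
DECLARE @c DECIMAL(19, 10)

SET @a = 0.5
SET @b = 0.5
SET @c = 0.014287

-- CAST the intermediate product to a smaller type so that the final
-- multiplication's result precision (18 + 19 + 1 = 38) no longer
-- exceeds the maximum, and no decimal places need to be sacrificed
SELECT CAST(@a * @b AS DECIMAL(18, 10)) * @c
```

Because the final result precision stays at 38, SQL Server keeps the full scale of 20, and all eight significant decimal places survive.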