In the last meeting of the Israeli SQL Server Users Group, SQL Server
MVP Ami Levin demonstrated a very interesting technique utilized by the
optimizer to optimize aggregates.
Environment: SQL Server 2005 Developer Edition, Service Pack 1.
Consider the following query:
USE Northwind;

SELECT E.FirstName, E.LastName, COUNT(*) AS NumOrders
FROM dbo.Employees AS E
  JOIN dbo.Orders AS O
    ON E.EmployeeID = O.EmployeeID
GROUP BY E.FirstName, E.LastName;
There are 9 employees in the Northwind database who handled 830 orders.
Northwind is a very small sample database, but the ratio between employees
and orders you find in Northwind is common in production environments in
the sense that each employee handles a large number of orders.
You would probably expect that the execution plan would first perform a join
between Employees and Orders, then group the result of the join by the
employee’s first and last names, and then calculate the aggregate. Thinking in
terms of more realistic table sizes in production environments, this would
mean a join between a small table of employees and a large table of orders.
But apparently the optimizer has a sophisticated trick up its sleeve...
Examine the execution plan for this query. It would probably be most
convenient for you to examine the graphical actual execution plan to analyze
this plan. I’ll explain it through the textual actual execution plan produced by
the SET STATISTICS PROFILE session option. I abbreviated the output:
Rows  Seq  StmtText
----  ---  ------------------------------------------------------------
9     8    SELECT ...
0     7    |--Compute Scalar(DEFINE:([Expr1004]=
               CONVERT_IMPLICIT(int,[globalagg1006],0)))
9     6       |--Stream Aggregate(GROUP BY:([E].[LastName], [E].[FirstName])
                  DEFINE:([globalagg1006]=SUM([partialagg1005])))
9     5          |--Sort(ORDER BY:([E].[LastName] ASC, [E].[FirstName] ASC))
9     3             |--Nested Loops(Inner Join,
                        OUTER REFERENCES:([O].[EmployeeID]))
9     2                |--Stream Aggregate(GROUP BY:([O].[EmployeeID])
                           DEFINE:([partialagg1005]=Count(*)))
830   1                   |--Index Scan(OBJECT:([Orders].[EmployeesOrders]
                              AS [O]), ORDERED FORWARD)
9     4                |--Clustered Index Seek(OBJECT:([Employees].[PK_Employees]
                           AS [E]), SEEK:([E].[EmployeeID]=[Orders].[EmployeeID]))
I added a column called Seq to reflect the processing order of the operators.
The plan first performs an index order scan of the index
Orders.EmployeesOrders (Seq = 1). This is the narrowest index on the
Orders table that contains the EmployeeID column. In fact, the EmployeeID
column is the only column in the index.
The second operator in the plan (Seq = 2, Stream Aggregate) calculates the
count of rows for each EmployeeID, and stores that count for each group as
a computed value called [partialagg1005]. The interesting part here is that the
query asked to aggregate by the employee’s first and last names, but the
optimizer figured that since the EmployeeID column is the primary key in the
Employees table, a given EmployeeID value corresponds to one and only
one combination of FirstName, LastName values (not necessarily the other
way around). The optimizer decided to calculate the count of rows for each
EmployeeID prior to the join, hence the outer input of the join becomes much
smaller (9 rows instead of 830 rows).
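Logically, this pre-join aggregation strategy is equivalent to rewriting the query with a derived table that aggregates the Orders rows first. The rewrite below is only a sketch to illustrate what the plan does; it is not something the optimizer literally generates:

```sql
SELECT E.FirstName, E.LastName, SUM(A.Cnt) AS NumOrders
FROM (SELECT O.EmployeeID, COUNT(*) AS Cnt   -- partial (local) aggregate,
      FROM dbo.Orders AS O                   -- computed before the join
      GROUP BY O.EmployeeID) AS A
  JOIN dbo.Employees AS E
    ON E.EmployeeID = A.EmployeeID
GROUP BY E.FirstName, E.LastName;            -- global aggregate over 9 rows
```

The outer SUM plays the role of the global aggregate that combines the per-employee partial counts.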
The Nested Loops join that shows up in the plan (Seq = 3) operates on
much smaller inputs (9 and 9 rows instead of 830 and 9 rows). For each of
the 9 rows returned from the Stream Aggregate operator, the Nested Loops
operator performs a Clustered Index Seek operation (Seq = 4) in the
clustered index on the Employees.EmployeeID column to retrieve the
corresponding employee row from the Employees table. The row from the
Employees table contains the FirstName and LastName columns.
The rows produced by the join (9 rows in total) are then sorted by
LastName and FirstName (Seq = 5), and then aggregated (Seq = 6). The
last aggregate that you see in the plan (Stream Aggregate, Seq = 6) is needed
because technically there might be more than one employee with the same
first and last names. However, the nice bit about this aggregate is that it
operates on a small set of rows with the partial aggregates that were
precalculated for each EmployeeID value.
Of course, there’s little business logic in aggregating data of multiple employees
just because they happen to have the same first and last name. If you revise
the query slightly and add E.EmployeeID to both the GROUP BY and
SELECT clauses, you will see that the second aggregate is eliminated
altogether. You will only see the aggregate taking place against the input rows
from the Orders table prior to the join, knowing that the relationship between
EmployeeID and (FirstName, LastName) is 1:1.
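The revised query would look like this:

```sql
SELECT E.EmployeeID, E.FirstName, E.LastName, COUNT(*) AS NumOrders
FROM dbo.Employees AS E
  JOIN dbo.Orders AS O
    ON E.EmployeeID = O.EmployeeID
-- EmployeeID is the primary key of Employees, so each group is one employee
GROUP BY E.EmployeeID, E.FirstName, E.LastName;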
I find this to be a cool technique. It eliminates the need for us to complicate
our code, writing an aggregate query ourselves against one of the tables,
storing the result in a temporary table, then joining to the other table. There
are many such tricks that the optimizer hides up its sleeve, allowing us to
focus on the logical aspects of the code, and the optimizer on the
performance aspects.
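For comparison, the manual workaround described above would look something like the following sketch (the temporary table name is illustrative):

```sql
-- Step 1: aggregate the large table ourselves, before the join
SELECT EmployeeID, COUNT(*) AS NumOrders
INTO #OrderCounts
FROM dbo.Orders
GROUP BY EmployeeID;

-- Step 2: join the small aggregated result to the small Employees table
SELECT E.FirstName, E.LastName, OC.NumOrders
FROM #OrderCounts AS OC
  JOIN dbo.Employees AS E
    ON E.EmployeeID = OC.EmployeeID;

DROP TABLE #OrderCounts;
```

The optimizer achieves the same effect in a single statement, with no temporary table to create and clean up.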