If you can't get enough of the types of T-SQL challenges I pose in my articles, you'll find more T-SQL exercises to sharpen your expertise in my Puzzled by T-SQL blog. Here's a recent post that delves into an execution plan for an aggregate query.
In the last meeting of the Israeli SQL Server Users Group, SQL Server MVP Ami Levin demonstrated an interesting technique that the optimizer uses to optimize aggregates. (Ami used SQL Server 2005 Developer Edition, Service Pack 1—SP1.) Consider the following query:
```sql
SELECT E.FirstName, E.LastName,
  COUNT(*) AS NumOrders
FROM dbo.Employees AS E
  JOIN dbo.Orders AS O
    ON E.EmployeeID = O.EmployeeID
GROUP BY E.FirstName, E.LastName;
```
The Northwind database has nine employees, who between them handled 830 orders. Northwind is a small sample database, but the ratio between employees and orders that you find in it is common in production environments in the sense that each employee handles a large number of orders.
You'd probably expect that the execution plan would first perform a join between Employees and Orders, then group the result of the join by the employee's first and last names, and then calculate the aggregate. With more realistic table sizes in production environments, this would mean a join between a small table of employees and a large table of orders. But apparently the optimizer has a sophisticated trick up its sleeve.
Examine the execution plan for this query. Figure A shows the textual actual execution plan produced by the SET STATISTICS PROFILE session option; I've abbreviated the output for clarity. (You'll probably find it more convenient to examine the graphical execution plan yourself, though.) I added a column called Seq to reflect the processing order of the operators.
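If you want to reproduce the textual plan yourself, this is the pattern (a sketch: the query repeats the one shown above, and keep in mind that the Seq column in Figure A is my manual addition, not part of the SET STATISTICS PROFILE output):

```sql
-- Request a textual actual execution plan for the subsequent statements.
SET STATISTICS PROFILE ON;

SELECT E.FirstName, E.LastName,
  COUNT(*) AS NumOrders
FROM dbo.Employees AS E
  JOIN dbo.Orders AS O
    ON E.EmployeeID = O.EmployeeID
GROUP BY E.FirstName, E.LastName;

SET STATISTICS PROFILE OFF;
```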
The plan first performs an index order scan of the index Orders.EmployeesOrders (Seq = 1). This is the narrowest index on the Orders table that contains the EmployeeID column. In fact, the EmployeeID column is the only column in the index.
The second operator in the plan (Seq = 2, Stream Aggregate) calculates the count of rows for each EmployeeID and stores that count for each group as a computed value called [partialagg1005]. The interesting part here is that the query asked to aggregate by the employee's first and last names, but the optimizer figured that since the EmployeeID column is the primary key in the Employees table, a given EmployeeID value corresponds to one and only one combination of FirstName, LastName values (not necessarily the other way around). The optimizer decided to calculate the count of rows for each EmployeeID prior to the join; hence the outer input of the join becomes much smaller (nine rows instead of 830 rows).
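Logically, this rewrite is equivalent to pre-aggregating the orders yourself in a derived table before joining. Here's a hand-written sketch of the equivalent logic (not what the plan literally executes, just a way to see the transformation):

```sql
-- Equivalent hand-written form: aggregate Orders per EmployeeID first,
-- so the join operates on nine pre-aggregated rows instead of 830.
SELECT E.FirstName, E.LastName,
  SUM(A.PartialCount) AS NumOrders
FROM (SELECT EmployeeID, COUNT(*) AS PartialCount
      FROM dbo.Orders
      GROUP BY EmployeeID) AS A
  JOIN dbo.Employees AS E
    ON E.EmployeeID = A.EmployeeID
GROUP BY E.FirstName, E.LastName;
```

The outer SUM and GROUP BY correspond to the final Stream Aggregate in the plan, which sums the partial counts in case two employees share the same first and last names.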
The Nested Loops join that shows up in the plan (Seq = 3) operates on much smaller inputs (nine and nine rows instead of 830 and nine rows). For each of the nine rows returned from the Stream Aggregate operator, the Nested Loops operator performs a Clustered Index Seek operation (Seq = 4) in the clustered index on the Employees.EmployeeID column to retrieve the corresponding employee row from the Employees table. The row from the Employees table contains the FirstName and LastName columns.
The rows produced by the join (nine rows in total) are then sorted by LastName and FirstName (Seq = 5), then aggregated (Seq = 6). The last aggregate that you see in the plan (Stream Aggregate, Seq = 6) is needed because technically there might be more than one employee with the same first and last names. However, the nice thing about this aggregate is that it operates on a small set of rows with the partial aggregates that were precalculated for each EmployeeID value.
Of course, there's little business logic in aggregating data from multiple employees just because they happen to have the same first and last name. If you revise the query slightly and add E.EmployeeID to both the GROUP BY and SELECT clauses, you'll see that the second aggregate is eliminated altogether. You'll see only the aggregate taking place against the input rows from the Orders table prior to the join, knowing that the relationship between EmployeeID and (FirstName, LastName) is 1:1.
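The revised query would look like this; because EmployeeID appears in the GROUP BY list, each group is already unique per employee, so the final aggregate becomes unnecessary:

```sql
-- Adding EmployeeID makes each group unique per employee,
-- eliminating the second Stream Aggregate from the plan.
SELECT E.EmployeeID, E.FirstName, E.LastName,
  COUNT(*) AS NumOrders
FROM dbo.Employees AS E
  JOIN dbo.Orders AS O
    ON E.EmployeeID = O.EmployeeID
GROUP BY E.EmployeeID, E.FirstName, E.LastName;
```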
I find this to be a cool technique. It eliminates the need for us to complicate our code by writing an aggregate query ourselves against one of the tables, storing the result in a temporary table, then joining to the other table. There are many such tricks that the optimizer hides up its sleeve, letting us focus on the logical aspects of the code while the optimizer handles the performance aspects.
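For comparison, here's the kind of manual two-step workaround the optimizer spares us from writing (a sketch; the temporary table name is illustrative):

```sql
-- Step 1: aggregate the large table into a temporary table.
SELECT EmployeeID, COUNT(*) AS NumOrders
INTO #OrderCounts
FROM dbo.Orders
GROUP BY EmployeeID;

-- Step 2: join the small aggregated result to Employees.
SELECT E.FirstName, E.LastName, C.NumOrders
FROM #OrderCounts AS C
  JOIN dbo.Employees AS E
    ON E.EmployeeID = C.EmployeeID;

DROP TABLE #OrderCounts;
```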