An analytic application is an application that lets you monitor key metrics, then analyze those measurements to determine whether your business is moving toward its objectives. For example, you might have a business objective to increase revenue by 30 percent each year. One metric, or key performance indicator (KPI), that supports this objective is the difference between actual revenue and forecasted revenue. An analytic application lets business decision makers monitor this KPI and analyze it when necessary. If this metric is zero or greater, your revenue meets or exceeds your forecast and the company is meeting its objective. But if the KPI becomes negative, you need the ability to determine which factors contributed to the change. Ideally, the analytic application also lets you act on this information (e.g., notify the responsible people).
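To make this KPI concrete, here's a minimal MDX sketch that computes it as a calculated member. The [Finance] cube and its [Actual Revenue] and [Forecast Revenue] measures are hypothetical names I'm using for illustration, not part of any particular schema:

-- Revenue Variance KPI: actual minus forecast
-- (hypothetical cube and measure names)
WITH MEMBER [Measures].[Revenue Variance] AS
  '[Measures].[Actual Revenue] - [Measures].[Forecast Revenue]'
SELECT { [Measures].[Actual Revenue],
  [Measures].[Forecast Revenue],
  [Measures].[Revenue Variance] } ON COLUMNS,
  [Time].[Quarter].Members ON ROWS
FROM [Finance]

A negative [Revenue Variance] in any quarter is the cue to start drilling into the contributing factors.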

Let's illustrate this example with a business objective and the analytic application that my company used to monitor and analyze its progress toward that objective. You can use the basics of this application as a model for designing and implementing analytic applications that support your business objectives.

The Business Objective

The business objective that my company wanted to support was to deliver high-quality products on time--the classic objective for a product-development group and an objective that applies to both commercial product development and internal software development. The software development life-cycle phase that we decided to analyze was the quality-assurance, or stabilization, period--the time between the point when the software is feature-complete and the point when the company releases the software to customers.

Most software-development organizations schedule the amount of time for stabilization based on the software's maturity, the duration of code development, and the software's quality objectives. Usually, the scheduled stabilization time isn't sufficient, and the team needs a heroic effort near the end to finish on time. The difficulty in managing the stabilization part of the schedule lies in assessing exact progress, then making intelligent decisions about reallocating resources and changing the schedule in the middle of the stabilization process. For example, as a development manager, you don't want to delay a release date unless you have concrete evidence that you won't make the currently scheduled date.

The traditional approach for tracking progress during the stabilization period is to track the number of outstanding defects. The number of defects should increase for a while, then peak, decrease, and level off at a point of marginal return, as Graph 1 shows. When you reach the point of marginal return (i.e., when great effort is required for minimal progress), the product is presumably ready for release.

I've been in several development groups that monitored the trend of the total number of defects. But without the ability to easily dig into the trend's contributing factors, charting the trend was only slightly helpful. Herein lies the difference between reporting and analysis: Reporting lets you monitor a KPI; analysis lets you investigate the KPI's contributing factors and act on what you find.

For example, imagine that you're in Week 24 of a project and that the trend line in Graph 1 extends only to Week 24. You examine the defect-count trend and notice a marked change in the slope, suggesting that you're close to the trend's peak. This change in slope could be good news: Perhaps your defect-fix rate is starting to catch up with your defect-find rate. However, the change in slope might simply reflect a decrease in test coverage. Upon further analysis, you might find that some of the people who were testing the product in previous weeks have been reassigned to another project, so the drop in incoming defects really signals a drop in testing, not an improvement in quality. This example shows why analysis is so important: A report usually only raises questions; it doesn't answer them.

An Analysis Solution

How do you determine the cause of a change in trend slope? To answer this question, my company identified the most crucial KPIs for monitoring our product life cycle's stabilization phase (the KPIs we found most important are at callout A in Table 1). A static report on an intranet portal shows some of these KPIs, and a SQL Server 2000 Analysis Services cube supports some of them. We hope to eventually have a cube that supports all KPIs, but the company hasn't finished that project yet. However, the KPIs that the cube supports are available in an interactive view on the same portal. So, when we have a question about a KPI, we can immediately drill down to the contributing factors and determine that KPI's status.

From the basic metrics at callout A in Table 1, you can derive other interesting metrics, such as those that callout B in Table 1 shows. My company uses a commercial defect-tracking product that stores the underlying defect information in a SQL Server 2000 database. We inspected that database's schema and built a Data Transformation Services (DTS) package that extracts the defect data and transforms it into a star schema that supports cubes. The operations staff schedules the DTS package to run nightly to update the star-schema tables and refresh the OLAP cubes.

In gathering requirements for the cubes, we determined that they should answer several types of questions, which fall into two categories: defect-flow questions and defect-state questions. The difference between these two types of questions boils down to the basic entity you want to analyze. Defect-flow questions are about defect transitions (a defect transition occurs when a defect moves from one state to another). After someone finds a defect, it progresses through a series of states: found, assigned to an engineer, fixed but awaiting the next product build, fixed but awaiting quality-assurance verification, and quality-assurance verified and closed.

A few sample questions that understanding defect transitions can answer are

  1. What's the average time that a defect waits for a manager to assign it to an engineer?
  2. How many defects did quality assurance verify and close during the previous week?
  3. How many defects did a particular engineer fix during the previous month?
  4. What percentage of the defects found by a quality-assurance representative led to product fixes?

Questions 2 and 3 might appear to be asking for a defect count, but they're really asking for the number of defect transitions. For example, Question 2 asks, How many transitions occurred from a fixed, awaiting-verification state to a verified-and-closed state?
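In MDX against the defect-transition cube, Question 2 becomes a simple slice on the from-state and to-state. Table 3 shows the cube's actual structure; the dimension, member, and measure names below are stand-ins I've chosen for illustration:

-- Transitions from fixed-awaiting-verification to
-- verified-and-closed during a given week (Question 2)
SELECT { [Measures].[Transition Count] } ON COLUMNS
FROM [Defect Transition]
WHERE ( [From State].[Fixed - Awaiting Verification],
  [To State].[Verified and Closed],
  [Time].[Week].[Week 24] )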

Another category of questions relates to the defect state. These questions require snapshots of all defects and their current state each day. Examples of questions that defect states can answer are

  1. How many defects are waiting for verification?
  2. What's the net number of defects fixed this week?
  3. What's the trend of outstanding defects (e.g., as in Graph 1)?

You can't easily answer defect-state questions by using defect transitions. For example, to determine how many defects are waiting for verification, you can't just count the defects that moved into an awaiting-verification state yesterday: defects that entered that state on earlier days might still be waiting, and some of yesterday's arrivals might have made subsequent transitions the same day. The current count is the cumulative net of every transition into and out of the state since the project began.
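Against the defect-state cube, the waiting-for-verification question reduces to reading the most recent daily snapshot. Again, the names are illustrative (Table 2 shows the actual structure); the Filter()/Tail() idiom locates the last day that contains data:

-- Defects awaiting verification as of the latest daily snapshot
SELECT { [Measures].[Defect Count] } ON COLUMNS,
  Tail( Filter( [Time].[Day].Members,
  Not IsEmpty( [Time].CurrentMember ) ), 1 ) ON ROWS
FROM [Defect State]
WHERE ( [Status].[Fixed - Awaiting Verification] )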

Similarly, you usually can't answer defect-transition questions with defect states. The problem is that defects can make multiple state transitions in a day--and frequently do. Thus, the defect-state cube never reflects some states for a particular defect because defect-state snapshots occur only once daily. Both defect-transition and defect-state cubes are useful; they just have different purposes.

Table 2 shows the structure of the defect-state cube, and Table 3 shows the structure of the defect-transition cube. If you examine the metrics that you want to monitor and analyze, you'll notice that the cubes don't contain all the information you need to determine the KPIs. The missing information includes the amount of test coverage, the time spent testing fixes, and the time spent fixing defects. My company currently tracks test coverage in a relational database and tracks the time spent on testing and fixing defects in a spreadsheet.

Human Aspects of KPIs

KPIs are no excuse for ignoring other management duties. Usually, KPIs just reinforce what you already suspect. They're a great tool for measuring progress, and visible progress is an excellent motivational technique. But to paraphrase an old saying, be careful what you measure because you just might get it.

One of Scott Adams's Dilbert cartoons provides a great illustration of this point:

Boss: Our goal is to write bug-free software. I'll pay a $10 bonus for every bug you find and fix.

Wally: I'm gonna write me a new minivan this afternoon!

The number of bugs that a developer fixes isn't a good overall measure of productivity because the developers with the most fixes might also be the developers who created the most defects.

I have mixed feelings about tying KPIs to a person's compensation or other rewards. If you don't choose your measures carefully, people can lose sight of the project's larger objectives. Since we started tracking some of these measures in our development lab, individual developers have become very conscious of their own contributions to these metrics and how they could make themselves look better by fixing many low-priority defects instead of a few high-priority ones. I'm afraid that in trying to improve his or her individual KPIs, a developer might shortcut a solution to get a better fix-per-hour ratio or ignore an apprentice developer's plea for help.

MDX Formula for Trend Lines

If you implement your own analytic application based on Analysis Services, you'll soon find that one of the most helpful data views is a line chart of a KPI's trend over time. However, creating the MDX formula to chart a trend over time isn't completely intuitive. Using MDX to select a fixed time series for Analysis Services to return on the columns is easy. But creating a time series that automatically adjusts as you load new data into the cube (e.g., always the most recent 3 months) is more difficult.

You often need an expression for the most recent 3 months of loaded data. The FoodMart Sales cube provides a good example: All the months in 1997 contain data, but no months in 1998 do. Using MDX, I can solve the problem in the following way:

WITH SET [NonEmpties] AS 'Filter(
  [Time].[Month].Members, Not IsEmpty(
  [Time].CurrentMember ))'
  SET [Recent Months] AS 'LastPeriods( 3,
  [NonEmpties].Item( Count( [NonEmpties] )
  - 1 ))'
SELECT [Recent Months] ON COLUMNS
FROM [Sales]

This code first builds a set, [NonEmpties], that contains all the months that have data (IsEmpty() tests the cell value of the cube's default measure). Then, the code uses the last item in the [NonEmpties] set (that is, Item( Count( [NonEmpties] ) - 1 )) as the anchor for the LastPeriods() function, which returns the number of trailing periods I want. I use this type of formula to derive most of the views I monitor on defect cubes.
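Pointed at the defect-state cube, the same pattern produces the trend view behind Graph 1. As before, the cube, dimension, and measure names are placeholders for whatever your own schema uses:

-- Outstanding-defect trend for the most recent 12 weeks
-- that contain data
WITH SET [NonEmpties] AS 'Filter(
  [Time].[Week].Members, Not IsEmpty(
  [Time].CurrentMember ))'
  SET [Recent Weeks] AS 'LastPeriods( 12,
  [NonEmpties].Item( Count( [NonEmpties] ) - 1 ))'
SELECT [Recent Weeks] ON COLUMNS,
  { [Measures].[Defect Count] } ON ROWS
FROM [Defect State]
WHERE ( [Status].[Outstanding] )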

Analytic applications are popular in business today. With the current uncertain economy, business efficiency has become more important than ever. One way to improve efficiency is to monitor your progress toward your business objectives, then analyze that progress if you don't meet your objectives. You can create your own analytic application to track relevant data and monitor your business activities. To practice writing MDX formulas that chart trend lines, try your hand at the puzzle in the sidebar "July MDX Puzzle." For the answer to the June MDX puzzle, see the sidebar "June MDX Puzzle Solution Revealed."