Historical Analysis and Trends: The Real Value Metrics

Summary:

In "Practical Test Reporting," David Freeman wrote about how to start a basic test metrics program. In this follow-up article, he shows how to combine the historical information to predict how your future projects may track–kind of like creating your own metric "magic eight ball."

You've created a way to track and report progress for a day or an entire project. You can develop graphical information and begin to quantify the work the test team performs. Your boss is impressed. Imagine if you had several projects worth of data. You now see how your projects tend to perform historically. By transferring the daily report information to the project summary information and then to a testing-history repository, you can detect common trends in your projects and use this information to change those trends and predict future project progress.

Let's take a look at a snippet of summarized historical project information. Figure 1 shows summary information for several projects.

Figure 1
Average summary information is presented at the top of the sheet. Across the top is a set of columns labeled Project Pass %; each column records the value of the measure named in its row at the point the project reached that Pass %. For example, Phone Dialer 1.0 had 6,686 tests when the project was at 10% Pass. The sheet also shows a number of averages derived from the completed projects.

Find the Phone Dialer 1.0 project in the Project column. There are only a few pieces of data that have been transferred from the project summary.

From that data we can calculate four additional measures (see the sketch after this list):

  • The length of the test cycle is determined by calculating the number of days between the start and end dates (in this case, including weekends).
  • The number of days into the project at which a particular Pass % was accomplished is determined by finding the difference between the date the Pass % was attained and the start date of the project. This lets us see how long it takes each project to reach each milestone.
  • The percent of the project time it took to reach each Pass % milestone is calculated by dividing the number of days into the project by the total length of the test cycle. A percent is used to normalize the data.
  • The number of tests that changed between each milestone is determined, along with the percent of change.
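
To make those calculations concrete, here is a minimal Python sketch applied to a single hypothetical project record shaped like one row of the Figure 1 history sheet. The project name and its 6,686 tests at the 10% Pass milestone come from the article; the dates and the other test counts are invented purely for illustration.

    from datetime import date

    # Hypothetical project record shaped like one row of the Figure 1 history sheet.
    # Only the project name and the 6,686-test figure at 10% Pass come from the
    # article; the dates and other test counts are invented for illustration.
    project = {
        "name": "Phone Dialer 1.0",
        "start": date(2003, 10, 1),
        "end": date(2004, 2, 11),
        "milestones": {  # Pass % -> (date attained, number of tests at that point)
            10: (date(2003, 10, 20), 6686),
            20: (date(2003, 11, 3), 6702),
            50: (date(2003, 12, 8), 6650),
            90: (date(2004, 1, 8), 6488),
        },
    }

    # Length of the test cycle: calendar days between start and end (weekends included).
    cycle_length = (project["end"] - project["start"]).days
    print(f"Length of test cycle: {cycle_length} days")

    prev_tests = None
    for pass_pct, (attained, num_tests) in sorted(project["milestones"].items()):
        days_in = (attained - project["start"]).days      # days into the project
        pct_in = days_in / cycle_length * 100             # % into project (normalized)
        if prev_tests is None:
            change, pct_change = 0, 0.0                   # no previous milestone to compare
        else:
            change = num_tests - prev_tests               # tests added (+) or removed (-)
            pct_change = change / prev_tests * 100
        prev_tests = num_tests
        print(f"{pass_pct:>3}% Pass: day {days_in:>3} ({pct_in:4.1f}% into project), "
              f"{num_tests} tests ({change:+d} tests, {pct_change:+.1f}% change)")

In a spreadsheet, each of these values would simply be a formula in the history sheet; the sketch just spells out the arithmetic behind them.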

The average values (noted at the top of the report) of Length of Test Cycle, Days into Project, % into Project, Number of Tests, and % Change of Tests for all the projects that have been measured allow us to look for emerging patterns. (All the data here is from real projects, though the project names have been changed.) Figure 2 is an example of some derived data.

Figure 2
The X-axis of this chart shows the Pass % milestones (100% is not shown, as rarely are all tests executed and passed). The Y-axis shows the percent of test cycle time it takes to reach the milestone. For example, on average it takes about 15% of the testing cycle to reach the 10% Pass milestone. It takes 36% of the project timeline--or an average of thirty-eight days of a 108-day project--to reach 50% Pass. So what is the crux of this data? Here are several ways to interpret what you've collected:

  • On average, projects reach the 90% Pass milestone 53% of the way into the testing cycle.
  • The 90% milestone can be attained in just half the allotted testing time.

Or, an even more disturbing interpretation of this data:

  • On average, it takes the last 47% of the project's testing cycle to pass the last 10% of tests.

What could cause the timeframe discrepancies over the course of the project?

  • Testers are gung ho at the beginning and lazy at the end of the project.
  • Resources are removed from every project.
  • Requirements are continuously changed.
  • Code is still being developed, so bug fixes aren't timely.
  • Code is delivered late.
  • Tests are completed late.
  • The data is no good.

It's difficult to surmise from the data what actually happened, though we can at least rule out tests being completed late: the number of tests decreased as the project progressed, meaning tests were removed from the test base rather than added late.

If we were to look at each project's data at a finer level, I'm sure we would see places where tests were created late in the cycle, but the volume of tests removed outweighed those newly created.

It's important to use this information to improve your organization. Use it to determine why events occur the way they do. Figure out what can be changed to make the organization more effective and improve product quality. Try to determine what caused the issues in your historical data, set goals to correct the issues, and use project data to see if you've accomplished these goals.

It may be useful to code the projects according to type, complexity, size, duration, etc. This allows you to compare similar projects and relevant data.
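
For instance, if the history repository were kept as a list of tagged records, filtering for comparable projects might look something like the following sketch. The tag names and the second and third projects are invented for illustration; only Phone Dialer 1.0 and its 133-day cycle come from the article.

    # Hypothetical tagged history records.
    history = [
        {"name": "Phone Dialer 1.0", "type": "client app", "size": "medium", "cycle_days": 133},
        {"name": "Project B",        "type": "server",     "size": "large",  "cycle_days": 210},
        {"name": "Project C",        "type": "client app", "size": "medium", "cycle_days": 118},
    ]

    def comparable(projects, **criteria):
        # Keep only the projects whose tags match every criterion given.
        return [p for p in projects
                if all(p.get(key) == value for key, value in criteria.items())]

    peers = comparable(history, type="client app", size="medium")
    average_cycle = sum(p["cycle_days"] for p in peers) / len(peers)
    print([p["name"] for p in peers], f"average cycle: {average_cycle:.0f} days")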

Predicting Progress--The Ultimate Goal
Let's examine one last concept. By summarizing all your projects and reviewing the data as a whole, you can see developing historical trends. Assuming future performance can be based on what has occurred in the past, you can now predict what may happen with the next project. Using historical data, Figure 3 shows the projections for our example project, Phone Dialer 1.0.

Figure 3
The Average % of Project/Assumed Duration (the overall average, across all projects, of the percentage of the test cycle it takes to attain a certain Pass % milestone) is used to calculate when a project will reach each milestone. The data required for this calculation are simply the testing start and end dates and the Average % into Project we learned in Figure 1. The total number of days is figured from those dates, and each milestone's projected date is calculated by applying the average percentage to that total.

You'll notice that the Average % of Project is listed on two lines. The first occurrence is the data calculated from the actual project information (Figure 1). The second, labeled "Assumed Duration," is the data that is actually used to calculate the estimates for the project. The data is manually transferred from one line to the other.

The purpose of this is to allow the estimator to modify the Percent into Project if the team has set specific goals. We'll assume that the calculated values are sufficient, with the exception of 100% complete--there are too few data points there to rely upon.

Also worth considering are two different sets of targets: Weighted Targets and Unweighted Targets. Weighted Targets use the Percent into Project to calculate each milestone date; using these values, we should see the front-weighted curve (most of the testing is completed in the first half of the test cycle). Unweighted Targets simply take the total percent of the project to be tested--in most cases 100%--and divide it evenly across the number of days allotted.
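
As a rough illustration of the difference, here is a minimal Python sketch of both target calculations under assumed inputs. The 15%, 36%, and 53% averages echo the figures quoted earlier for the 10%, 50%, and 90% milestones; the 22% value for the 20% milestone and the start and end dates are invented for illustration.

    from datetime import date, timedelta

    # Hypothetical test cycle; the dates are invented. The historical averages
    # for 10%, 50%, and 90% Pass echo the article; the 20% value is assumed.
    start, end = date(2003, 10, 1), date(2004, 2, 11)
    total_days = (end - start).days

    avg_pct_into_project = {10: 0.15, 20: 0.22, 50: 0.36, 90: 0.53}

    for pass_pct, pct_in in avg_pct_into_project.items():
        # Weighted target: apply the historical "% into project" curve.
        weighted = start + timedelta(days=round(total_days * pct_in))
        # Unweighted target: spread the Pass % evenly across the allotted days.
        unweighted = start + timedelta(days=round(total_days * pass_pct / 100))
        print(f"{pass_pct:>3}% Pass: weighted target {weighted}, unweighted target {unweighted}")

Plotted over time, the weighted targets trace the front-weighted historical curve, while the unweighted targets fall on a straight line from the start of the cycle to the end.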

In this example, Phone Dialer 1.0 has 133 days (including holidays) for its test cycle. According to historical data, this project will reach 10% Pass on 10/17, 20% on 10/30, and so on. Figure 4 shows a graph of the real vs. projected data.

Figure 4
Fortunately, at no time was this project behind the targets. You'll also notice that the Pass % line follows a very similar pattern to the historical projection plotted by the Weighted Targets line.

Wrapping Up
There are several considerations that are important for test measurement:

  • Test-result management is not pretty. It may be visually appealing once the summary information has been generated, but getting there is most of the work. The main reason the metrics have been outlined in such detail is that I, as the test manager, am accountable for them. If anyone challenges the data, the source information must be defensible by its developer. Hopefully, you now have a good understanding of how the metrics were developed and have thought about how these measures--or something similar--can benefit you and your organization.
  • You can do whatever you want with numbers. I'm sure we've all seen some voodoo used on weak or undesirable numbers and questionable statistics. Make sure your numbers are reputable and well founded. Don't allow information you've generated to be taken out of context.
  • Document the source and limitations of your measurement efforts. One of the limitations you may have noticed as you look at the sample data is that the data for the 100% milestone was often incomplete. The picture is even worse for the fifteen or so projects for which I have actual data: of the fifteen projects from which I drew my sample, only one actually obtained 100% Pass. Also, if tests were removed, document that along the way. I often use a Changes worksheet in both the daily and project-summary spreadsheets to track such events. Removal of tests, changes in requirements, and back-office issues will likely prevent any test effort from ever reaching 100% Pass. What does this mean for your metrics? Is it important? Can it be controlled? Consider how it affects your organization and its measurement efforts.

See also: Practical Test Reporting--Part 1
