Estimating Testing Time

Summary:

Testers are always facing a time crunch. As part of a recent assessment, a senior manager asked, "How long should the testing really take? It takes our testers from four, five, six, to thirty (insert your number of choice here) weeks, and we need it to take less time. Why can't it take less time, and how can we tell what's going on so we know how much testing we need?" In this column, Johanna Rothman answers with a timeline. By estimating how many testing cycles will be needed, plus how long each will take, she can map out the entire testing process. From this viewpoint, she can pinpoint where the process can be streamlined, thus reducing the time spent on testing.

Partway through an assessment, the senior manager asked me, "How long should the testing take?"

The answer to the senior manager's question is, "It depends."

If you do test-driven development, there is rarely more than an iteration's worth of at-the-end testing. When I coach teams who are transitioning to test-driven development, I recommend they plan for one iteration at the end for final system test and customer acceptance test. Once the team has more experience with test-driven development, they can plan better. I have found there is always a need for some amount of customer acceptance testing at the end. The amount of testing time varies by project and how involved the customers were all along.

If you're using an Agile lifecycle even without test-driven development, I recommend starting with one iteration's length of final system testing. I am assuming that the developers are fixing problems and refactoring as necessary during an iteration, which is real Agile development, not code-and-fix masquerading as Agile.

But I suspect many of you are not yet using Agile lifecycles with short iterations.

If you're using any of the following . . .

  • an incremental lifecycle such as staged delivery where you plan for interim releases (even if the releases aren't to the customers)
  • an iterative lifecycle such as spiral or evolutionary delivery
  • a serial lifecycle such as phase gate or waterfall

. . . then planning for testing is difficult, because it depends so much on what the developers provide, and you won't know that until you start testing.

I'd expect as much testing time as development time, but it doesn't have to come all at the end as final system test, the way it looks in the waterfall pictures. Any proactive work you do—such as reviews, inspections, working in pairs, unit testing, integration testing, building every night with a smoke test, fixing problems as early as they are detected—can decrease the duration of final system test. If you're the project manager, ask the developers what steps they are taking to reduce the number of defects they create. If you're the test manager, work with the project manager and the developers to create a set of proactive defect-prevention practices that make sense for your project.

Wherever you are in the organization, recognize that final system test includes several steps: testing the product, finding defects, and verifying defects. Your first step is to separate these tasks when you estimate the duration of final system test.

One question you should be able to answer is, "How long will one cycle of 'complete' testing take?" We all know we can't do complete testing, so your version of complete is the tests you planned to run, plus any exploratory or other tests you need to run in one cycle of testing to provide enough information about the product under test. I realize that's vague and depends on each project; I don't know how to be more explicit, because this is a project-by-project estimate. If you work with good developers, the cycle time can decrease a bit from the first cycle to the last—because the testers know how to set up the tests better and the product has fewer defects, which allows the testing to proceed faster.

Once you know how long a cycle of testing takes, estimate how long it will take the developers to fix the problems. I use this data: the number of problems found per cycle in the last project, my gut feel for how many more or fewer we should find per cycle in this project, and bug-tracking system data telling me the average cost to fix a defect pre-release. If I know that the first cycle in the last release found 200 problems, that it took the developers half a person-day each to fix the problems, and that I have ten developers, I estimate ten working days to fix problems. That's a long time. And yes, I was on a project where that's what it took. It was agonizing—we thought we'd never finish fixing problems.
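
If you like to see that arithmetic spelled out, here is a minimal sketch in Python. The numbers are the ones from my example above, not constants to reuse; plug in your own project's data.

    # Rough estimate of defect-fixing time between test cycles.
    # These values come from the example above; substitute your own data.
    defects_found = 200           # problems found in the previous test cycle
    fix_effort_per_defect = 0.5   # average person-days to fix one defect
    developers = 10               # developers available to fix problems

    total_fix_effort = defects_found * fix_effort_per_defect   # 100 person-days
    fix_duration_days = total_fix_effort / developers           # 10 working days
    print(f"Plan on about {fix_duration_days:.0f} working days of fixing")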

Now that you know how long a cycle should take, estimate the duration for fixes after the first cycle. How many cycles of testing do you plan? When I set up projects, I tend to use some proactive defect detection and prevention techniques, so I generally plan on three cycles of testing. In a casual conversation with Cem Kaner a number of years ago, he mentioned a product for which he planned on eight cycles of testing. For one project, for which I was a contract test manager, we had almost thirty cycles of testing. I can't tell you how many cycles of testing you'll need because that depends on the product's complexity, how good your tests are, and the initial count of product defects before this project started.

Here's what I have noticed from my work with multiple organizations. The groups that most want to decrease testing time tend to do the least proactive work to reduce the overall number of defects, and they typically perform primarily manual black-box testing at the end of a project. I understand their desire, but they've set up their lifecycle and processes to produce exactly the wrong result.

The best guess I have is to estimate the number of cycles you'll need for testing, the duration of one cycle, and the time it takes the developers to fix problems between cycles. Add the testing and fixing time per cycle, multiply by the number of cycles you think you'll need, and you'll have an estimate of the total testing time. By the way, when I do this, I never give a single number; I always give the time per cycle ("It will take us six days per cycle"), my estimate of the number of cycles ("We'll need four cycles"), and my estimate of defect-fixing and verification time ("Plus three days between test cycles for developers to fix problems"). That way I can show, in a nice way, the costs of not performing proactive defect prevention. And I can show the uncertainty in my estimate ("That's a minimum of thirty-six working days, longer if the defects take longer to fix and verify").
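
Spelled out the same way, the whole estimate fits in a few lines. Again, these are the numbers from my example, not a recommendation; use your own per-project figures.

    # Total testing-time estimate: (test days + fix days) per cycle, times cycles.
    test_days_per_cycle = 6    # one cycle of "complete" testing
    fix_days_per_cycle = 3     # developers fixing and testers verifying between cycles
    planned_cycles = 4

    total_days = (test_days_per_cycle + fix_days_per_cycle) * planned_cycles
    print(f"Minimum of {total_days} working days of final system test")
    # Report the three numbers separately (days per cycle, fix days, cycle count)
    # rather than a single total, so the uncertainty stays visible.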

If you want to reduce testing time and create a low-defect product, test all the way through the project with a variety of test techniques. Use test iterations so you know where you stand with defects at the end of each test cycle. The lower the number of defects and the more sophisticated your tests, the shorter your testing time.
