Planning the Endgame

[article]
Summary:

What can a test manager do when a project manager says, "Test faster!" or tries to cut the amount of testing to meet a project release date? Fiona Charles says that you can argue for the time and resources you need by incorporating the endgame into your estimations. In this week's column, Fiona details how to structure a winning argument by paying close attention to all the activities that occur during testing.

We know that the elapsed time for testing will ultimately be decided not only by the number of hours we schedule for our team but also by the quality of the system we get to test, the development team's turnaround time for bug fixes, and the stakeholders' appetite for risk.

This is the endgame: the interplay of tests, builds, bug fixes, and retests plus regression tests. Unfortunately, project managers, even experienced ones, can fall into the trap of planning only for testing, forgetting to take the whole endgame into account.

As a test manager, you can help the project manager build a credible case for the amount of testing time you need by modeling a plan for the endgame.

I start by coming up with a number representing test cases. That gives me the most important unit of measure, and a starting point for other planning assumptions.

In my planning, a "test case" is just a handy unit of measure, representing one significant thing we're going to do to test some aspect of the system. The definition and size vary according to the system we're testing. Although the actual sizes of test cases for any system will vary widely, it's usually possible to come up with an average unit that will hold up well enough for planning purposes.

During test design, I work with my team to define what we mean by "test case," usually by analogy.

"A test case is a thing like this, about this big."

"We think it will take about this long to develop an average test case."

When we have several developed, we can say,

"We think it will take about this long, on average, to execute an average test case, including setting up the data, taking notes, and entering bugs."

Test cases don't have to be documented. I can use the same unit to allow for a mix of predesigned and exploratory tests in the plan. Having a consistent unit means I can apply some planning assumptions to undocumented tests. I add contingency to the number, and for test execution I also raise it by some percentage for exploratory testing.
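The inflation steps above can be sketched as simple arithmetic. This is a minimal illustration; the specific percentages for contingency and exploratory testing are assumptions, not figures from the article.

```python
# Hedged sketch: inflating a raw test-case count with contingency and an
# exploratory-testing allowance. Percentages are illustrative assumptions.
raw_cases = 1000
contingency = 0.10   # assumed 10% planning contingency
exploratory = 0.20   # assumed 20% extra at execution for exploratory testing

planning_count = round(raw_cases * (1 + contingency))        # 1100
execution_count = round(planning_count * (1 + exploratory))  # 1320
print(planning_count, execution_count)
```

The point is not the exact percentages but that the plan carries one consistent unit through both documented and exploratory work.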

For test execution, I first try to estimate the number of test cases we can attempt in a week. Several assumptions go into that, including average time to execute a test case, productive tester hours per day, and the number of testers. I also add a factor for environment downtime, and account for builds either separately or within that factor, depending on how disruptive routine builds are expected to be.

The calculations look like this:

Let's say we have 1,200 test cases. If we plan only for actual test time, it could appear as if we would complete testing in about three weeks.

In reality, the test team won't reach full productivity in the first couple of weeks. If we estimate a productivity hit of 25 percent in weeks one and two, we will actually execute only about 300 test cases a week in the first two weeks.
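As a sanity check, the ramp-up effect can be worked through as arithmetic. The 1,200 cases and the 25 percent hit come from the example above; the 400-cases-per-week full rate is derived from 1,200 cases over three weeks.

```python
import math

total_cases = 1200
full_rate = 400                  # cases per week at full productivity
ramp_up_hit = 0.25               # productivity loss in weeks one and two

ramped_rate = full_rate * (1 - ramp_up_hit)     # 300 cases per week
executed_weeks_1_2 = ramped_rate * 2            # 600 cases after two weeks
remaining = total_cases - executed_weeks_1_2    # 600 cases left

extra_weeks = math.ceil(remaining / full_rate)  # 2 more weeks at full speed
print(executed_weeks_1_2, remaining, extra_weeks)  # 600.0 600.0 2
```

Even before bugs enter the picture, the "three-week" plan has already stretched to four.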

Regardless, we will be finding bugs, so next I estimate how many bugs we expect to find. Let's say I expect an average of one bug logged for every three test cases executed. A test case that finds a bug won't pass and will have to be re-executed at least once more.
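The bug-load assumption above turns into a rough workload figure like this. The one-bug-per-three-cases rate is from the text; the week's case count is an illustrative assumption.

```python
cases_executed = 300            # assumed ramped-up weekly rate
bugs_per_case = 1 / 3           # one bug logged per three cases executed

bugs_found = round(cases_executed * bugs_per_case)  # ~100 bugs
retests = bugs_found            # each failing case re-executed at least once
effective_workload = cases_executed + retests
print(bugs_found, effective_workload)  # 100 400
```

In other words, a 300-case week actually carries roughly 400 test executions once retests are counted.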

Week one starts to look like this:

Of the bugs we find, some number, say one in three, will be severity one or two, and therefore critical to fix and retest. I also assume that around a third of the lower-severity bugs will be fixed. I ask the development leads for average times to fix bugs.
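The severity split sketches out as follows. The one-in-three ratios come from the text; the 100-bug starting figure is an illustrative assumption.

```python
bugs_found = 100                      # illustrative week's bug count
critical = round(bugs_found / 3)      # severity 1/2: must fix and retest
lower = bugs_found - critical
lower_fixed = round(lower / 3)        # about a third of lower-severity fixed
fixes_to_retest = critical + lower_fixed
print(critical, lower_fixed, fixes_to_retest)  # 33 22 55
```

Multiplying that retest count by the development leads' average fix times gives the fix-and-retest load that the endgame schedule has to absorb.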

About the author


Fiona Charles is a Toronto-based test consultant and manager with thirty years of experience in software development and integration projects. Fiona is the editor of The Gift of Time (http://www.stickyminds.com/s.asp?F=S1149_BOOK_4), featuring essays by consultants and managers of various professions about what they've learned from Gerald M. Weinberg. Through her company, Quality Intelligence, Inc., Fiona works with clients in diverse industries to design and implement pragmatic test and test management practices that match their unique business challenges. Her experiential workshops facilitate tester learning by doing, either on the job or at conferences. Contact Fiona via her Web site at www.quality-intelligence.com.
