I recently got this email from my friend Carol:
I need a fairly scientific way to estimate testing time. Today, I know how long my test cases take to run individually, I know there will be some number of bugs, I know the fixes will take some period of time. I know I will need to rerun tests, etc. Is there a formula that helps with estimating this? I realize it will not be exact, but something that other companies do to make estimating more of a science than a feeling. I hope you have an exact answer for this question. My boss is going to ask me for this information on Monday, so no pressure but HELP!
Carol asks an important question. Management tends to think of software development as an investment, like buying a car or a house. As with those big-ticket purchases, there are plenty of other options to choose from, so management wants to know the benefits, the time to build, and the cost.
Those seem like reasonable requests, at least at first. Then we run into Carol’s questions, which make things more challenging. Sadly, the reality is that our guesses at how long tests take to run are often wrong, we likely don't get to put in forty hours of productive testing in any given week, we are often waiting for new builds and fixes, and the rate of failure means we’ll need to rerun tests, often more than once.
All of this injects a terrible amount of uncertainty into the estimating process. Add a new team, technology, or process, and the uncertainty goes over the edge; coming up with a schedule estimate for testing starts to feel less like science and more like an irresponsible guess.
The following factors significantly influence our ability to estimate testing time well, but with a little effort, you can tighten up the process.
In her email, Carol reveals how little she knows about her own situation: fixes will take “some period of time,” and there will be “some number of bugs.” Like Carol, most organizations lack the historical data needed to build estimates. Without data from past experience, accurate estimates will be hard to come by.
So start gathering data! Over the next two weeks or so, figure out what percentage of your time is spent on rework. If it’s 30 percent, then only 70 percent of your time is productive, so divide the planned test effort by 0.70; in other words, multiply it by about 1.43 to find the real effort.
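Here’s a minimal sketch of that arithmetic in Python; the function name and the sample figures are hypothetical, not from Carol’s project.

```python
def rework_multiplier(rework_fraction: float) -> float:
    """Convert planned test effort into real effort.

    If rework consumes, say, 30% of the team's time, only 70% is
    productive, so real effort = planned effort / 0.70, about 1.43x.
    """
    return 1.0 / (1.0 - rework_fraction)

planned_days = 10  # hypothetical planned test effort
multiplier = rework_multiplier(0.30)
print(f"Multiplier: {multiplier:.2f}")                       # 1.43
print(f"Real effort: {planned_days * multiplier:.1f} days")  # 14.3 days
```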
The next key factor is the test team itself. How large is the team? What is each member’s level of skill and experience? Do they have a well-defined testing process that everyone understands and can select from? How stable is the team? Do members come and go randomly, or do they have a cohesive history? How much time can the team focus on testing tasks without interruption? And what are the individuals’ interaction skills? The answers are all vital to the team’s performance and, thus, to the estimates for testing time, but we have no ways of measuring these vital characteristics. Lacking measurements, ask yourself how much final schedules differ from the planned ones and how that gap is changing, as sketched below. If it’s getting worse, you need more time. If it varies, fall back on the rework multiplier from the previous example.
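One way to make that check concrete is to track the ratio of actual to planned schedule across recent test cycles and watch the trend. This sketch assumes you have those planned and actual figures on hand; the numbers are made up for illustration.

```python
# Hypothetical (planned, actual) durations in weeks for recent
# test cycles, oldest first. A slip ratio above 1.0 means the
# cycle ran over its plan.
history = [(4, 5), (6, 7.5), (5, 7), (4, 6.5)]

slip_ratios = [actual / planned for planned, actual in history]
print("Slip ratios:", [f"{r:.2f}" for r in slip_ratios])

# A crude trend check: is the recent half slipping more than
# the older half?
mid = len(slip_ratios) // 2
older, recent = slip_ratios[:mid], slip_ratios[mid:]
if sum(recent) / len(recent) > sum(older) / len(older):
    print("Slip is getting worse -- budget more time.")
else:
    print("Slip is stable or improving.")
```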
Another factor in good estimates is the stability of the requirements. We no longer “freeze the requirements” the way we used to. In today’s agile world we welcome change, and changes to the requirements bring changes to the testing, and to the estimates. Product owners flex scope to hit dates; consider flexing the test scope to hit deadlines, too.
System size, complexity, and risk are also key factors that influence how much testing “should” be performed, and again, we have no effective ways of measuring them. In his book The Principles of Product Development Flow, Donald Reinertsen observes that larger projects slip not only by larger amounts but also by larger percentages. So when you review how far off your estimates were, compare projects that appeared to be of similar size at the start.
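To look for Reinertsen’s effect in your own history, compare slip percentages across projects grouped by their initial size estimates. A rough sketch, with hypothetical numbers:

```python
# Hypothetical past projects: (estimated test-weeks, actual test-weeks).
projects = [(2, 2.2), (3, 3.6), (8, 11.0), (10, 15.0), (20, 34.0)]

for estimated, actual in projects:
    slip_pct = (actual - estimated) / estimated * 100
    print(f"Estimated {estimated:>2} wk -> actual {actual:>4} wk "
          f"(slipped {slip_pct:.0f}%)")

# If larger projects show larger slip percentages, not just larger
# absolute slips, your schedule buffer needs to grow with project size.
```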