We've all heard the pitch: test automation saves testing time and resources. Test tools can execute tests faster than a person can, and in most cases they can do so in an unattended mode. So test automation should reduce test cycle time or the number of testers needed. Right? Not exactly. But Linda Hayes will tell you what it does save.
Many a test tool investment has been justified by a handy little spreadsheet that calculates the return on investment from a projected reduction in the number of people and the amount of time spent on testing.
But is this true? Do companies really reduce their testing headcount and test cycle time as a result of automation? In my experience, the answer is no. Frankly, I have never seen a company reduce their staff or their schedule because they bought a test tool. Test tools do not save testing time or resources.
But does this mean that test automation has no value? Again, the answer is no. The key is to understand the true purpose—and value—of test automation. I submit that automation can save more time and money than are budgeted for the entire test organization and test cycle. How can that be, you ask?
Investment in Automation
Granted, automation is not cheap. It takes time and effort to evaluate and select a test tool, money to buy it, time to learn to use it, more time to actually implement it, and the most time of all to maintain the test library as the application changes. In fact, experience shows that it takes between five and ten times as long to develop and debug an automated test as it does to execute it manually.
Okay. So you have to make the investment to get the return, and in most cases you will execute the same test at least ten times over the life of the application or even within the same release cycle. So, what's the problem?
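The arithmetic behind that claim can be sketched in a few lines. The numbers below are hypothetical, chosen only to fall within the five-to-ten-times range the article cites; the actual ratios will vary by tool and team.

```python
# Illustrative break-even arithmetic (hypothetical figures, not from the article).
# Assume developing and debugging an automated test costs 8x one manual
# execution (within the 5-10x range), and each automated run still carries
# a small marginal cost for kicking it off and reviewing results.
MANUAL_COST_PER_RUN = 1.0      # one manual execution = 1 unit of effort
AUTOMATION_COST = 8.0          # one-time cost to develop and debug the script
AUTOMATED_COST_PER_RUN = 0.05  # marginal cost per automated run

def total_cost(runs: int, automated: bool) -> float:
    """Cumulative effort after `runs` executions of a single test case."""
    if automated:
        return AUTOMATION_COST + runs * AUTOMATED_COST_PER_RUN
    return runs * MANUAL_COST_PER_RUN

# Find the first run count at which automation becomes cheaper than manual.
break_even = next(n for n in range(1, 100)
                  if total_cost(n, automated=True) < total_cost(n, automated=False))
print(break_even)  # 9: with these numbers, automation pays off by the ninth run
```

With these particular assumptions the crossover lands at nine runs, which is why a test expected to execute at least ten times over the life of the application clears the bar.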
The problem is that the goal of test automation should not be to reduce either test resources or cycle time. The goal should be to reduce the risk and cost of software failure by increasing test coverage.
Think about it. If your management's primary goal is to reduce the cost of testing, then I can propose a foolproof way of cutting 100 percent of your testing budget without even buying a tool. It's simple: stop testing! Just quit it. You can save all that time and cut all the staff. No problem, right?
Well of course this is a problem, you say. Then we would be shipping defective software, and that could lead to production downtime, customer crises, and all manner of development and support costs.
Precisely. It's really all about reducing the risk and cost of software failure.

Return on Investment
So what is the return on an investment in test automation?
Here is an example from my experience. An insurance firm with a highly specialized offering decided to outsource their software development but retain the software test responsibility. They reasoned that while programming was not their core expertise, the expected functionality of the system was.
Due to varying regulations, a steady stream of new offerings, and the constant state of adjustment of actuarial and premium tables, the software development effort was extensive and constant. The manual test effort required four weeks of dedicated work from a full-time team of five people. Yet even with this level of effort production problems were frequent, resulting in anything from downtime to misquoted policies to lost business.
Since a four-week test cycle was the most the business could tolerate given the demand for new functionality, and since her request for additional staff had been turned down, the senior test manager decided automation was her only answer. She bit the bullet and invested six months of effort from two of her key testers, plus an outside consultant to help them select the right tool and set them on the right path for their test library design and management procedures.
At the end of the six months, her team had automated their existing test cases, and the execution could be completed in only one week. It would have taken even less time, but overnight batch processes and time-dependent factors required at least five calendar days of execution.
But here is the beauty: she did not advertise her accomplishments. She did not reduce her test staff, or promise test cycles of one week. Instead, she reinvested the time savings into expanded test analysis and coverage, and also in creating a daily "sanity check" that ran 150 test cases first thing every morning—in production—to check for problems with the infrastructure, data tables, interfaces, and other potential sources of failure. Her logic was simple: if she only did what she was doing before, then she would get the same results—production problems.
The payoff? Post-production incidents went down 80 to 90 percent. The savings to the company, as she was told when she got her raise and bonus, exceeded the entire budget for the test department.
The Real Payoff
If you want to justify test automation, don't look at your test budget, look at the cost of failure for the system you are testing. What does it cost the company or its customers in time, resources, and money if defects escape to production? What is it worth to deliver on time with a quality product?
If you do this analysis, you may find that test automation justifies a bigger test budget, not a smaller one.
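As a worked example of that analysis, consider the back-of-envelope calculation below. Every figure is invented for illustration; only the 80 to 90 percent reduction in post-production incidents echoes the insurance firm's story, and your own numbers for budget, defect counts, and cost per escape will differ.

```python
# Hypothetical cost-of-failure analysis (all dollar figures invented).
annual_test_budget = 500_000       # yearly cost of the test organization
escaped_defects_per_year = 40      # defects reaching production before automation
avg_cost_per_escape = 25_000       # downtime, support, rework per escaped defect
incident_reduction = 0.85          # midpoint of the 80-90% drop in the story

failure_cost_before = escaped_defects_per_year * avg_cost_per_escape
failure_cost_after = failure_cost_before * (1 - incident_reduction)
annual_savings = failure_cost_before - failure_cost_after

print(f"Failure cost before: ${failure_cost_before:,.0f}")
print(f"Annual savings:      ${annual_savings:,.0f}")
print(f"Exceeds test budget: {annual_savings > annual_test_budget}")
```

With these assumed numbers the avoided failure cost ($850,000) exceeds the entire test budget ($500,000), which is exactly the shape of the payoff the article describes: the justification comes from the cost of failure, not from trimming the testing line item.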