Automation Déjà Vu—Again!


A decade's worth of test automation history reveals that not much has changed. Experts still dwell on how much to automate and how to estimate different types of ROIs. Test automation's growth is stunted because it's not revered as a discipline different from manual software testing. In this column, Dion Johnson urges us to correct the situation so that test automation can develop into a more lucrative opportunity.

1997: Cem Kaner's "Improving the Maintainability of Automated Test Suites" white paper:

"When GUI-level regression automation is developed in Release N of the software, most of the benefits are realized during the testing and development of Release N+1."

1999: Bret Pettichord's "Seven Steps to Test Automation Success" white paper:

"We need to run test automation projects just as we do our other software development projects."

1999: Mark Fewster and Dorothy Graham's Software Test Automation book:

"If no thought is given to maintenance when tests are automated, updating an entire automated test suite can cost as much, if not more, than the cost of performing all the tests manually."

2001: Dion Johnson's Designing an Automated Web Test Environment white paper:

"With many, getting an automation tool is like a kid getting a new toy—they jump right in and start playing. And the resulting test suite, much like a child's new toy, just doesn't last."

2008: An unnamed test lead's automation request:

"I know that the application is still changing, but do you think you can start now and automate most of the tests so that they may be used during the acceptance testing in two days?"

Are you kidding me!? More than ten years have passed and we are still at a point where this type of request can be made with a straight face? Why has the industry as a whole not outgrown this? Why do we continue to ask the same questions that we largely asked over a decade ago? For its part, the IT industry and many of us within it have worked to raise the level of automation discourse through the introduction of new techniques, training, and publications. Somehow, this still seems not to have translated to the broader segment of the industry's population.

We are still preoccupied with questions such as:

  • Is record and playback an effective automation approach?
  • Is 100 percent automation possible?
  • How do I calculate return on investment (ROI)?
  • How early can test automation begin?
  • Can test automation replace manual testing?

Don't get me wrong; there's nothing wrong with asking these and other questions, particularly if you are new to IT, software testing, or even test automation. The problem comes when these questions linger, seriously delaying the effective implementation of test automation and, even worse, leading many down the wrong path with regard to test automation. In addition, the preoccupation with these same questions for more than ten years, despite the fact that some relatively widely accepted answers are available, is one of the major symptoms of test automation's stunted growth. This stunted growth is also evident both in the fact that shelfware remains prevalent and in the missed opportunity to address more pressing concerns. Over the years, we have failed to come up with comprehensive solutions to several important automation issues, such as:

  • Detailed Calculations for Framework Selection
  • Detailed Calculations for Automated Test Development and Maintenance Times
  • Making Risk-based (Quality-based) ROI Calculations More Acceptable
  • Moving to a Fourth Generation Automation Framework
  • Devising a Good Answer for an Acceptable Percentage of Tests that are Automated

Detailed Calculations for Framework Selection
Below is a formula that I often use to help define the level of complexity an automated test framework should have:

AF = AN + VN + BN + TN + CN + EN + Ti + P + R

  • AF = Automation Framework Definition
  • AN = Number of applications expected to be tested by your organization
  • VN = Number of versions/releases that each application is expected to have
  • BN = Number of builds, and the nature of the application changes, expected for each release
  • TN = Number of tests that you're expecting to automate for each application
  • CN = Number of configurations that an application may have to test
  • EN = Number of environments in which tests will be executed
  • Ti = Time period over which the tests will need to be maintained
  • P = Organizational process maturity
  • R = Resource technical level

The formula is not meant for literally plugging in numbers to get an answer for AF, but rather simply to illustrate the relationship of the framework choice to the automation scope—a relationship that may be used as guidance for selecting an automation framework. However, the past ten years could've been used to develop consensus within the industry about how this type of formula could be used literally to plug in values for an actual answer to the type of framework that might work best for the identified scope.
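To make the scope-to-framework relationship concrete, here is a minimal sketch of how such a formula might be applied literally. The weights are not specified in the article, so the 1-5 scoring scale and the sample values below are purely hypothetical assumptions for illustration.

```python
# Hypothetical sketch: each scope factor from the AF formula is scored on an
# assumed 1-5 scale; the sum serves as a rough framework-complexity indicator.
# The scale and sample scores are invented for illustration only.

def framework_complexity(an, vn, bn, tn, cn, en, ti, p, r):
    """Sum the scope factors (each scored 1-5) into a rough AF indicator.

    an: applications, vn: versions/releases, bn: builds/changes,
    tn: tests to automate, cn: configurations, en: environments,
    ti: maintenance period, p: process maturity, r: resource skill level.
    """
    return an + vn + bn + tn + cn + en + ti + p + r

# A small scope (one app, one release, few tests) suggests a simpler
# framework generation; a larger total suggests a more advanced one.
small_scope = framework_complexity(1, 1, 1, 2, 1, 1, 1, 2, 2)  # -> 12
large_scope = framework_complexity(4, 5, 5, 5, 4, 3, 5, 3, 3)  # -> 37
print(small_scope, large_scope)
```

An industry consensus would have to define the scoring scale and map score ranges to framework generations; the sketch only shows the shape such a calculation could take.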

Detailed Calculations for Automated Test Development and Maintenance Times
How long should one expect to spend developing and maintaining automated tests? It is widely accepted among industry leaders that automating a test takes three to ten times as long as executing it once manually. That means if a test takes one minute to execute manually, it will probably take somewhere between three and ten minutes to automate. This is still a fairly broad range, and there is no industry standard at all regarding the time it should take to maintain an automated test suite of a given scope. We need to focus on further defining this information.
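The rule of thumb above translates directly into a back-of-the-envelope calculation. The suite size and per-test duration below are invented example numbers, not figures from the article.

```python
# Minimal calculation using the widely cited 3x-10x rule of thumb:
# automating a test takes roughly 3 to 10 times its one-time manual
# execution duration. All input numbers are hypothetical examples.

def automation_effort_range(manual_minutes):
    """Return (low, high) automation-effort estimates in minutes."""
    return manual_minutes * 3, manual_minutes * 10

# E.g., a suite of 50 tests averaging 4 manual minutes each:
suite_manual = 50 * 4  # 200 minutes to run the suite once by hand
low, high = automation_effort_range(suite_manual)
print(f"Estimated automation effort: {low}-{high} minutes")  # 600-2000
```

Note what the range does not cover: ongoing maintenance effort, which is exactly the gap in industry guidance the text identifies.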

Making Risk-based (Quality-based) ROI Calculations More Acceptable
The three basic approaches for measuring ROI are:

  • Simple ROI: Quantifying automation benefits in terms of the monetary savings automated test execution provides over manual test execution of the same set of tests
  • Efficiency ROI: Quantifying automation benefits in terms of the time savings automated test execution provides over manual test execution of the same set of tests
  • Quality ROI: Quantifying automation benefits in terms of the potential monetary savings automated test execution provides via increased test coverage and reduced risk of application failures

Each measure has its pros and cons, but there is one measure that has been largely neglected in practice—Quality ROI. Quality ROI provides a broader view of the benefits of test automation that can get lost if you only focus on the Simple and Efficiency ROI measures. Neglecting the broader view may hurt an automation effort that is already riddled with unrealistic expectations about an immediate, high-monetary ROI. The past decade has seen lost opportunity in effectively moving the industry to a more balanced approach for measuring ROI.
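The three measures can be sketched as simple ratios. The article does not prescribe exact equations, so the formulas below are one plausible reading (savings relative to the automation investment), and every input number is invented for illustration.

```python
# Hedged sketch of the three ROI measures described in the text.
# Formulas and figures are assumptions, not the article's definitions.

def simple_roi(manual_cost, automated_cost, automation_investment):
    """Monetary savings of automated vs. manual execution of the same
    tests, relative to the cost of building the automation."""
    return (manual_cost - automated_cost) / automation_investment

def efficiency_roi(manual_hours, automated_hours):
    """Time saved by automated execution of the same tests, as a
    fraction of the manual execution time."""
    return (manual_hours - automated_hours) / manual_hours

def quality_roi(avoided_failure_cost, automation_investment):
    """Potential savings from increased coverage and reduced risk of
    application failures, relative to the automation investment."""
    return avoided_failure_cost / automation_investment

print(simple_roi(10000, 2000, 5000))  # 1.6
print(efficiency_roi(100, 15))        # 0.85
print(quality_roi(20000, 5000))       # 4.0
```

The quality_roi input, the cost of the failures automation helps avoid, is the hardest to estimate, which may be one reason the measure is so often neglected in practice.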

Moving to a Fourth Generation Automation Framework
Automation frameworks may be discussed in terms of three generations:

  • Generation 1 (Linear): Framework composed of automated tests in which most of the components executed by a given test are contained within that automated test
  • Generation 2 (Data-driven, Functional Decomposition): Framework that introduces more modularization, reuse of common components among several scripts, and greater separation of data from automation code
  • Generation 3 (Keyword, Model Based): Framework that adds a new level of abstraction that separates code logic from test logic, allowing automated tests to be built and maintained in a less technical way

Each generation has its pros and cons and should be evaluated for use in terms of the automation scope of the organization. There is an opening, however, for a new generation to be introduced. I've dabbled with the concept of building detailed manual test procedures that could be interpreted by an engine and therefore double as automated tests. In addition, there have been some tools that have sought to offer this capability through business components that may be used to build manual test procedures that may also double as automated tests. So while there has been some movement in this direction, it could and should be more pronounced.

Devising a Good Answer for an Acceptable Percentage of Tests that Are Automated
The question has been posed for years, "What percentage of manual tests should be automated?" Most people who have been involved in test automation for a number of years can provide some guidance for estimating an acceptable percentage, but the reason this is so difficult to answer is that every project builds manual tests at a different level of detail and deals with applications of varying complexity. Therefore, 75 percent automation may be feasible for one organization, while only 15 percent is feasible for another. We could, as an industry, evaluate these different conditions and devise a standardized approach for calculating an acceptable percentage.

The lack of growth in test automation is largely due to the fact that test automation has not been treated as a discipline separate from manual software testing. There hasn't been a separate body of knowledge or a comprehensive resource that focuses on automation processes and procedures, as opposed to just automation tools. We must correct this so that we can move the discipline of test automation forward and stop the automation déjà vu that we've been stuck in.

For more information, visit Automated Testing Institute.

