To Automate or Not?

Summary:

Getting the most out of automation is a process of evaluating the test goals and matching the right tool for the job. Sometimes test automation solves problems, but often it creates more work than it alleviates. When does automation help, and when does it hurt? Here are a few ideas for evaluating your test goals and making sound decisions on whether or not to employ automation to solve them.

Test automation is a complex topic that has generated volumes of literature. Web searches turn up a myriad of information: commercial tools, theoretical design techniques, methodologies and best practices, automation consultants, and more. Suffice it to say, with so much information available, the decision of whether to automate can be difficult.

Why is there so much involved with automation? The answer is straightforward: because it's difficult. Decisions about which tool(s) to use, how to architect and implement a test suite, and who will do the work are complicated. Then there are software design considerations, the supporting scripts needed to run the automation, source control, and library construction. Add to that the complexity of managing a large automation suite, the continued maintenance of scripts, and extending the suite to cover new functionality, and one can easily be left with the looming question, "Is this really going to be worth it?"

Here are three things to think about when making the decision to automate.

Unfortunately, test automation is not a magic bullet for achieving great test results. Software vendors will try to convince you that you can automate any and all testing your group does; this is not true. Remember, automation does not actually do the testing. It is a tool to help your test engineers test better, and the time it saves is easily reinvested in maintaining tests, adding new test cases, removing obsolete ones, and improving the test architecture.

What does this mean for your automation effort? It means you should automate only the things that need to be automated. You can probably think of numerous candidates for test automation; of those, select the best fit and start there. If this is your first automation effort and you shoot for the moon, the project can easily backfire in terms of effort and cost. Going for the low-hanging fruit first lets you maximize the short-term return, realizing important gains soon after you have spent resources and money on tools and people.

Consider all the output of the development group in our company: 1) core API-driven peer-to-peer technology, 2) a showcase Web site to demonstrate the technology, 3) a corporate site, and 4) some internal tool development. We decided to automate the core technology first. Not only does it lend itself to reliable test automation (an API is almost always easier and more reliable to automate than a GUI), it delivers the best bang for the buck for the department. Sure, it would be nice to automate the corporate Web site so that we could regression test it on those small, weekly pushes of new content, but why spend time and money automating something that takes two testers about an hour to test? It's not worth it.
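To make that contrast concrete, here is a minimal sketch of what an API-level regression check might look like, assuming a hypothetical p2p_client wrapper library; the module, class, and method names are illustrative, not an actual product API. The point is that an API call returns data you can assert on directly, with none of the timing, rendering, or locator problems a GUI script has to work around.

```python
# Minimal sketch of an API-level regression test (hypothetical client library).
import unittest

from p2p_client import Client  # hypothetical wrapper around the core API


class TransferRegressionTest(unittest.TestCase):
    def setUp(self):
        # Each test gets a fresh session against a known test server.
        self.client = Client(server="test-server.example.com")
        self.session = self.client.create_session(user="qa_user")

    def test_small_file_transfer_round_trip(self):
        payload = b"regression payload"
        transfer_id = self.session.send(peer="qa_peer", data=payload)

        result = self.session.wait_for_completion(transfer_id, timeout=30)

        # Deterministic, machine-checkable results; no screen scraping needed.
        self.assertEqual(result.status, "COMPLETE")
        self.assertEqual(result.bytes_sent, len(payload))

    def tearDown(self):
        self.session.close()


if __name__ == "__main__":
    unittest.main()
```

A GUI script exercising the same scenario would need to locate controls, wait for screens to render, and scrape results from the display, and any cosmetic change to the interface could break it.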

Test automation is software development, nothing less. We wouldn't expect code written by the development group to be sloppy or undocumented; ideally, we want to see code reviews, participation, and reliability, even if it takes a little longer. Our test automation projects should be held to the same high standards. Taking even a day or two at the outset of an automation project to plan and scope the effort will pay off in the building phase. Not only will it keep you focused as you proceed, it will produce a well-organized suite that is accessible to other engineers and testers, and maintainable as well.

If you do not design for maintainability, you will spend so much time trying to keep your scripts running against a changing product that the automation may never pay for itself.
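One common way to design for maintainability is to keep the details that change most often, such as server names, endpoints, or credentials, behind a single helper module so that a product change touches one file rather than every test. The sketch below assumes a Python suite built on the requests library; the server name, endpoint, and log_in helper are hypothetical.

```python
"""helpers.py: one place for the details that change most often (hypothetical names)."""
import requests

# Single point of change when tests need to target a different environment.
TEST_SERVER = "test-server.example.com"
LOGIN_ENDPOINT = f"https://{TEST_SERVER}/api/v1/login"


def log_in(user: str, password: str) -> str:
    """Wrap the login call so every test case shares one implementation."""
    response = requests.post(
        LOGIN_ENDPOINT,
        json={"user": user, "password": password},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["token"]
```

Test cases then call log_in("qa_user", "secret") and never mention the endpoint; when the login URL or its parameters change, you edit one module and the rest of the suite keeps running.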

About the author


Andrew Lance (andrew@centerspan.com) is a senior quality assurance engineer, technical lead for CenterSpan Communications, a company developing cutting-edge content delivery solutions based in Hillsboro, Oregon. Andrew has worked with test automation technologies for more than five years and has participated in every major phase of automated testing, from design and implementation to maintenance and support.
