Tips to Overcome Test Automation Challenges

In my previous column, I complained that most testing remains manual despite billions invested in test automation tools, largely because of the unrealistic expectations and unfortunate reality arising from the record/replay approach. Many readers subsequently took me to task for pointing out what went wrong without offering any solutions. Fair enough.

In my experience, there are four key elements to making test automation successful:

  1. Educating development to produce testable applications
  2. Understanding why yet more scripting won't work
  3. Learning to transition from a test tool to an automation application
  4. Setting realistic expectations

Each of these is summarized below with links to more detailed discussions.

Educating Development
Automation changes the relationship between development and test forever. Manual testers only worry about what they can see, but automation is all about what's under the covers. To build a reliable, maintainable automated test, the tester must be able to identify application objects using identifiers or names that are understandable, persistent, and unique. Further, these objects must be accessible. Understandable names make tests readable and therefore maintainable, persistent names make them stable and repeatable, and unique names make them reliable. Accessibility means your tool can interact with the object.

Unfortunately, modern applications often comprise code that is generated on the fly by servers, using gibberish identifiers that change from moment to moment. Flashy interfaces are also often built with opaque objects that don't reveal their methods or properties. If this is happening to you, there is nothing you, as a tester, can do about it. Only development can fix this problem, but they first have to be educated. For more on this subject, see "Automated Testability Tips."
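
To make the difference concrete, here is a minimal sketch in Python with Selenium; the page, URL, and attribute names are my own hypothetical examples, not from any particular application. A locator built on a generated id breaks with every build, while one built on a deliberate identifier added by development stays understandable, persistent, and unique.

```python
# Minimal sketch, assuming Selenium WebDriver and a hypothetical login page.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical URL

# Fragile: this id is generated by the server and changes between builds.
# user_field = driver.find_element(By.ID, "ctl00_x7f3a9_txt1")

# Stable: development added a deliberate, persistent, unique identifier.
user_field = driver.find_element(By.CSS_SELECTOR, "[data-testid='username']")
user_field.send_keys("test_user")

driver.quit()
```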

Understanding Scripting
As I pointed out in "Fool Me Once," record/replay was seductive because it looked easy, but it actually was deceptive because the recorded scripts weren't reliable or maintainable. The typical response was to add yet more code to handle timing issues, externalize data, and insert logic to handle unexpected responses. While these techniques may have addressed the immediate issues, they created even more. Programming skills were required, thus excluding the application experts who typically performed the manual tests. The resulting tests contained more and more code, requiring more time to develop and even more to maintain. In the end, the tests were too complex, costly, and time-consuming.
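
Here is a hedged sketch of what that looks like in practice; Selenium is assumed, and the page, field names, and data file are hypothetical. The raw recording was essentially one click, but surviving timing issues, changing data, and unexpected responses turns it into real programming work.

```python
# Sketch of a "simple" recorded step after real-world hardening;
# Selenium assumed, page and field names hypothetical.
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical URL
wait = WebDriverWait(driver, timeout=15)

with open("orders.csv", newline="") as f:  # data externalized from the script
    for row in csv.DictReader(f):
        # Explicit wait added to handle timing issues.
        field = wait.until(
            EC.visibility_of_element_located((By.NAME, "order_id")))
        field.clear()
        field.send_keys(row["order_id"])
        driver.find_element(By.NAME, "submit").click()
        # Logic inserted to handle unexpected responses.
        if driver.find_elements(By.CSS_SELECTOR, ".error"):
            print(f"Order {row['order_id']} was rejected")

driver.quit()
```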

A detailed discussion of how this problem played itself out can be found in the Better Software magazine article "The Demise of Record/Script/Play."

Test Tool to Automation Application
What we learned from this history is that code is the culprit: the more you develop, the more time it takes, the more it costs, and the longer it takes to maintain. And if you actually code every single test, you will end up with more code than the application itself. Unless you have more time and people for testing than you do for development (and if you do, call the Guinness Book of World Records), this approach is doomed.

It turns out that what you really need is not a tool but an application. The difference is code versus data. Using a tool results in code; using an application results in data. Why does this matter? Well, for one thing, you don't have to be a programmer to use an application, and that means your subject matter experts can participate. Second, data is easier to manage than code and can be maintained en masse, often automatically. And third, you may be able to buy an application rather than build one. For more on this distinction, see the WorkSoft blog.
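
As a rough sketch of the code-versus-data distinction (the actions, targets, and values below are invented for illustration, not any vendor's format), here is what a test looks like when it is just rows of data, and how a change can be applied to all of it at once:

```python
# Minimal sketch of tests as data; step names are hypothetical.
tests = [
    {"action": "enter", "target": "login",    "value": "test_user"},
    {"action": "enter", "target": "password", "value": "secret"},
    {"action": "click", "target": "submit",   "value": ""},
]

# Maintenance en masse: a field renamed from "login" to "user_id" is one
# bulk update over the data, not an edit to every script.
for step in tests:
    if step["target"] == "login":
        step["target"] = "user_id"

print(tests)
```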

There are many different ways to migrate from a tool to an application. For those who have already invested in a scripting tool, consider developing a framework that limits the amount of code that has to be developed and enables non-programmers to build tests as data. For those of you who haven't committed yet, or who have shelved one or more tools, consider buying a commercial framework or adopting an open source approach. My favorite is the class/action architecture, sometimes called table-driven, because it results in the least code, and what code it does need is portable across tools and applications.
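
The sketch below illustrates the class/action idea under simplified assumptions (Selenium, hypothetical element names). The only reusable code is a small dispatcher; each individual test is just a table of rows that a non-programmer can read and maintain.

```python
# Sketch of a class/action (table-driven) runner; Selenium and the element
# names are assumptions, not a specific product's implementation.
from selenium import webdriver
from selenium.webdriver.common.by import By

def enter(driver, target, value):
    driver.find_element(By.NAME, target).send_keys(value)

def click(driver, target, value):
    driver.find_element(By.NAME, target).click()

def verify_title(driver, target, value):
    assert value in driver.title, f"expected '{value}' in page title"

ACTIONS = {"enter": enter, "click": click, "verify_title": verify_title}

def run_test(driver, rows):
    """Each row names an action, a target object, and a value."""
    for action, target, value in rows:
        ACTIONS[action](driver, target, value)

login_test = [
    ("enter", "username", "test_user"),
    ("enter", "password", "secret"),
    ("click", "submit", ""),
    ("verify_title", "", "Welcome"),
]

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical URL
run_test(driver, login_test)
driver.quit()
```

Because the dispatcher knows nothing about any specific test, porting to a different tool means rewriting only the handful of action functions; the test tables carry over unchanged.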

Setting Expectations
The most important element of all is to set realistic expectations about the cost, time, and resources it will take to implement and maintain automated tests. Automation is a discipline apart from manual testing and introduces new dynamics. Not only does the application have to be developed properly, but the test data environment has to be stable and repeatable (a significant challenge in itself), and test cases have to be explicitly documented and reusable.

Although you won't ever automate everything (manual testing will always be needed for ad hoc, "what if" scenarios) and you won't be finished after the first project, automation is not only worth it but inevitable. There is simply no other way to keep pace with snap-together, composite applications that allow rich, complex functionality to be developed in less and less time, with more and more risk.

That's what I think works. What about you?
