People-driven Test Automation


So much of test automation focuses on getting those dirty humans out of the process, but the reality is that humans have to write and maintain software test infrastructure. In this article, Markus Gärtner covers some common pitfalls and how to avoid them.

Successful test automation depends on a variety of factors. The technical aspects involved have been well understood for more than a decade [1, 2, 3], but the human aspects have not been discussed as much. Changing an approach that testers working on automation may have relied on for decades can be a difficult task.

Technical Aspects
In order to understand the human factors in test automation, we have to revisit the technical factors. The most basic insight is that software test automation is, in fact, software development. In order to automate the steps in a test, software must be developed and then maintained alongside the production code that it is testing.

In general, automated tests consist of two parts: the test data and the code that drives the application under test. Test data is usually maintained in a separate format or even a separate repository. The automation code may make use of some public framework like FitNesse or RobotFramework, or it might be based on a company-grown testing framework.

Test Data
The maintenance of test data is a critical part of software test automation. In the worst case, the test data as initially written must be adapted to every change in the software, so that the automated tests effectively become a second system to build and maintain. This result is called the second system effect in software test automation [4]. The most common cause is test data written in terms of how the system achieves a particular functionality. For example, a test for a login page on a website may be expressed in test data by “open browser FireFox,” “load login page,” and “enter in the first text field username.” This ties the test data to the implementation details of the UI, with the effect that whenever the user interface changes, the tests need to change as well.
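To make the brittleness concrete, here is a minimal Python sketch of such UI-coupled test data. The step names and the helper function are hypothetical, invented for illustration; the point is that steps referencing positional widgets (“first text field,” “first button”) must all be edited whenever the page layout changes.

```python
# Hypothetical UI-level test data: each step encodes *how* the UI works,
# not *what* the user wants to achieve.
login_test_steps = [
    ("open_browser", "Firefox"),
    ("load_page", "/login"),
    ("enter_text", "first_text_field", "username"),   # tied to field position
    ("enter_text", "second_text_field", "very secret"),
    ("click", "first_button"),
]

def count_ui_coupled_steps(steps):
    """Count steps that reference positional widgets; every one of these
    would need editing if the login page's layout changed."""
    return sum(
        1 for step in steps
        if any("first" in str(part) or "second" in str(part) for part in step)
    )
```

In this sketch, three of the five steps would break on a simple rearrangement of the login form.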

Use case writers and requirements analysts know a simple technique to avoid the second system effect. Use case descriptions and requirements documents focus on a user goal or functional requirement in terms of what will be achieved, rather than prescribing a particular implementation [5]. The solution to the second system effect, therefore, is to write down the test data in terms of the user goal that is exercised. In the above example, this could be noted as “login as user ‘user’ with password ‘very secret.’” The test then becomes independent of changes in the user interface. The dependency is moved into the executable code that knows how to exercise the application under test.
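One common way to move that dependency into the executable code is a page abstraction, sketched below in Python. The class names and the `FakeDriver` stand-in are assumptions for illustration (a real implementation would wrap a browser driver such as Selenium); what matters is that only `LoginPage` knows the UI details, while the test data stays at the goal level.

```python
class FakeDriver:
    """Stand-in for a real browser driver (e.g., Selenium); illustration only.
    Records the actions performed so the sketch is runnable without a browser."""
    def __init__(self):
        self.actions = []

    def fill(self, field, value):
        self.actions.append(("fill", field, value))

    def click(self, target):
        self.actions.append(("click", target))


class LoginPage:
    """Hypothetical page abstraction: the only place that knows the UI details."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        # If the login screen changes, only this method needs updating; the
        # goal-level test data ("login as user ... with password ...") is untouched.
        self.driver.fill("username", username)
        self.driver.fill("password", password)
        self.driver.click("submit")


# The test expresses the user goal; the "how" lives inside LoginPage.
driver = FakeDriver()
LoginPage(driver).login("user", "very secret")
```

A keyword in FitNesse or RobotFramework plays the same role as `LoginPage.login` here: a single named entry point behind which the UI mechanics can change freely.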

Automation Code
Automation code is code that brings together the test data and the application under test. This code may be built with the help of a public framework or by growing your own. Most available frameworks additionally run the tests and report the results.

As noted above, the automation code is heavily dependent on the application it is testing. Therefore, it should be developed using the same development methods as the application itself. Ideally, the application under test and the automation code will share the same source code repository so that changes to application classes are also reflected in the automation code base. Because the automation code may become rather complex over time, it should be documented and tested.
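Testing the automation code can be as lightweight as a unit test for a glue function. The sketch below assumes a hypothetical helper that turns a FitNesse-style table row into keyword arguments; the row format and function name are invented for illustration, but the principle is from the text: automation code deserves tests of its own.

```python
def parse_login_row(row):
    """Hypothetical glue code: turn a FitNesse-style table row such as
    'login as user | user | with password | very secret'
    into keyword arguments for the automation code."""
    cells = [cell.strip() for cell in row.split("|")]
    return {"username": cells[1], "password": cells[3]}


# A unit test for the automation code itself, so that a bug in the glue
# is not mistaken for a bug in the application under test.
assert parse_login_row("login as user | user | with password | very secret") == {
    "username": "user",
    "password": "very secret",
}
```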

