People-driven Test Automation

Summary:
So much of test automation focuses on getting those dirty humans out of the process, but the reality is that humans have to write and maintain software test infrastructure. In this article, Markus Gärtner covers some common pitfalls and how to avoid them.

Successful test automation depends on a variety of factors. The technical aspects involved have been well understood for more than a decade [1, 2, 3], but the human aspects have not received the same attention. Getting testers to abandon an approach to automation they may have used for decades can be a difficult task.

Technical Aspects
In order to understand the human factors in test automation, we have to revisit the technical factors. The most basic insight is that software test automation is, in fact, software development. To automate the steps in a test, software is developed, and that software needs to be maintained alongside the production code it is testing.

In general, automated tests consist of two parts: the test data and the code that drives the application under test. Test data is usually maintained in a separate format or even a separate repository. The automation code may make use of a public framework like FitNesse or Robot Framework, or it might be based on a company-grown testing framework.

Test Data
The maintenance of test data is a critical part of software test automation. In the worst case, the test data initially written down needs to be adapted to every change in the software, so that a second system, the automated test suite, is built and maintained alongside the first. The result is called the second system effect in software test automation [4]. The most common cause is test data written in terms of how the system achieves a particular functionality. For example, a test for a login page on a website may be expressed in test data as “open browser Firefox,” “load login page,” and “enter ‘username’ in the first text field.” This ties the test data to the implementation details of the UI, with the effect that whenever the user interface changes, the tests need to change as well.
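The brittle style can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not a real framework API; the point is that each step in the test names a UI detail, so a redesigned login form breaks every such test:

```python
class FakeBrowser:
    """Stand-in for a real browser driver, used only for illustration."""

    def __init__(self):
        self.page = None
        self.fields = {}

    def load(self, url):
        self.page = url

    def type_into_first_text_field(self, text):
        # The step is tied to widget *position*, not to its meaning.
        self.fields["first_text_field"] = text


def brittle_login_test(browser):
    """Implementation-level test data: knows the URL, the widget order,
    even which field comes first.  Any UI change breaks it."""
    browser.load("https://example.test/login")
    browser.type_into_first_text_field("user")
    return browser.fields["first_text_field"] == "user"
```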

Use case writers and requirements analysts know a simple technique to avoid the second system effect. Use case descriptions and requirements documents focus on a user goal or functional requirement in terms of what will be achieved, rather than prescribing a particular implementation [5]. The solution to the second system effect, therefore, is to write down the test data in terms of the user goal being exercised. In the above example, this could be noted as “login as user ‘user’ with password ‘very secret.’” The test is then independent of changes to the user interface; that dependency moves into the executable code, which knows how to exercise the application under test.
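The goal-oriented style can be sketched the same way. This is a minimal illustration with a hypothetical keyword and a fake session object, not any particular framework's API; the test data states only the user's intent, and all UI knowledge lives in the keyword's implementation:

```python
class FakeSession:
    """Stand-in for the application under test."""

    users = {"user": "very secret"}

    def __init__(self):
        self.logged_in_as = None


def login_as(session, username, password):
    """Keyword encapsulating *how* login happens; the test data above
    only says *what* the user wants.  A UI redesign changes this one
    function, not every test that uses it."""
    if FakeSession.users.get(username) == password:
        session.logged_in_as = username
    return session.logged_in_as == username


# Test data, expressed in terms of the user goal:
session = FakeSession()
assert login_as(session, "user", "very secret")
```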

Automation Code
Automation code is code that brings together the test data and the application under test. This code may be built with the help of a public framework or by growing your own. Most available frameworks additionally run the tests and report the results.

As noted above, the automation code depends heavily on the application it is testing. It should therefore be developed with the same development practices as the application itself. Ideally, the application under test and the automation code share the same source code repository, so that changes to application classes are also reflected in the automation code base. Because the automation code can become rather complex over time, it should be documented and tested.
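Testing the automation code itself can look like ordinary unit testing. As a sketch, suppose a hypothetical helper parses one pipe-delimited row of tabular test data (the style FitNesse and Robot Framework users will recognize) into a keyword and its arguments; it can be tested directly, with no application or browser involved:

```python
def parse_step(row):
    """Split a pipe-delimited test-data row into (keyword, arguments).

    Hypothetical example of automation code that deserves its own
    unit tests, since the rest of the suite depends on it.
    """
    cells = [cell.strip() for cell in row.split("|") if cell.strip()]
    return cells[0], cells[1:]


# Unit test for the automation code itself:
assert parse_step("| login as user | user | very secret |") == (
    "login as user", ["user", "very secret"])
```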

Human Aspects
Elfriede Dustin described in a 2005 interview that “50 percent of my audience had test automation tools that have become ‘shelfware’” [6]. In order to prevent this shelfware effect, software test automation needs to deal with the human aspects as well as the technical ones.

Knowledge and Skill
Because automation code may shift the work of


About the author

Markus Gärtner

Markus Gärtner studied computer science until 2005. He published his diploma thesis on hand-gesture detection in 2007 as a book. In 2010, he joined it-agile GmbH (Hamburg, Germany) after having been a testing group leader for three years at Orga Systems GmbH. Markus is the co-founder of the European chapter of Weekend Testing, a black-belt instructor in the Miagi-Do school of Software Testing, and a contributor to the ATDD-Patterns writing community as well as the Software Craftsmanship movement. Markus regularly presents at agile and testing conferences and dedicates himself to writing about testing, foremost in an agile context.
