5 Pillars of a Successful Test Automation Implementation

[article]
Summary:
Discussions on what constitutes a “proper implementation” of test automation often focus on what tool you should use, but that is only one part of the equation. Bas Dijkstra details four other things you should consider, how they contribute to the success of your test automation, and what risks are associated with failing to pay proper attention to each of them.

For organizations looking to deliver quality at speed, running automated tests is an important part of the software development lifecycle. Test automation, however, can only be successful if implemented properly. Discussions on what constitutes a “proper implementation” of test automation often focus on what tool should be used for the job, or on the best (if there even is such a thing) or most efficient way to use a specific tool for a given task.

In my opinion, though, the tool that is used is only one part of the total test automation equation. Any successful test automation implementation is constructed from five distinct parts.

In this article, we'll take a look at each of these parts, how they contribute to the success of your test automation implementation, and what risks are associated with failing to pay proper attention to each of them.

1. The Test Automation Tool

While not the only factor playing a role in successful test automation implementation, the tool obviously does have an impact on the overall outcome of your automation efforts. Choosing a tool that is insufficiently compatible with your application under test, or one that does not fit the skill set of your automation team, will likely lead to less-than-optimal results.

Even more important than the choice of a tool, however, is asking yourself exactly what you want to cover with your automated tests, and then deciding on the most efficient way of getting to that result. A prime example of a question that needs to be asked is at what level a certain piece of functionality or business logic needs to be verified.

Do you want to make sure that your customers can open your web shop, search for a specific product, and subsequently place and pay for an order? You will probably want to check this using an end-to-end user interface-driven test. If you're verifying the correctness of a piece of logic that determines whether or not a customer is allowed to purchase a given object (for example, due to regulations on the state or country level), then you will likely be able to write tests that hook into your application under test at a lower level, such as an API or even a single class of code. This constitutes a different scope and approach for the test and, as a result, requires a different tool.
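
To make the difference in scope concrete, here is a minimal sketch of what a lower-level check of such purchasing logic could look like, written as plain Python tests in the pytest style. The is_purchase_allowed function and the restriction rule are hypothetical stand-ins for your own business logic, not part of any real application.

```python
# Hypothetical business rule: some products may not be sold to customers
# in certain countries. Logic like this can be verified well below the UI.
RESTRICTED_COMBINATIONS = {("fireworks", "NL")}


def is_purchase_allowed(product: str, country: str) -> bool:
    return (product, country) not in RESTRICTED_COMBINATIONS


def test_restricted_product_is_blocked():
    assert not is_purchase_allowed("fireworks", "NL")


def test_unrestricted_product_is_allowed():
    assert is_purchase_allowed("books", "NL")
```

Tests like these run in milliseconds and need no browser or test environment; the end-to-end checkout scenario described above, on the other hand, would be driven through the user interface and requires a very different kind of tool.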

In short, make sure that you first know what your automated tests need to verify before spending time on how to achieve the desired result. Remember that there's a significant risk in forcing your tool to do things it is not designed to do.

2. Test Data

Another important factor of any serious test automation solution is the approach taken to managing test data. The broader the scope of the tests, the more important, but also the more demanding, test data management becomes.

While in unit testing you can get away with mocking all data your tests depend on, when you start working on integration or end-to-end tests, you will need specific data to be present in your application under test. And, to make matters even more complex, you will often need the data in other systems that your application under test interacts with to be in a certain state as well.

There are several ways to deal with test data in these types of tests:

  • Creating the required test data in the setup phase of the test
  • Querying the system for existing test data before starting the test
  • Initializing the database of your application under test before the start of a test run

Each of these approaches has its potential pitfalls:

  • Creating test data in the setup phase of a test increases test execution time, increases the risk of failure before the test itself is even started, and leads to a lot of useless test data if there is no proper data cleanup procedure
  • When you query the system for existing test data before starting a test, you run the risks of accidentally using invalid test data, or of no test data with the right properties being present in the system
  • Initializing the database before a test run leaves you with database snapshots to manage and keep up to date—that is, if you're even allowed to perform a database restore procedure in the first place

Note that there is no one right way of dealing with test data for integration and end-to-end tests. However, choosing the wrong procedure, or failing to address the test data question at all, will likely lead to a test automation solution that is not reusable, maintainable, or scalable.
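
As an illustration of the first approach, creating the required test data in the setup phase of the test, here is a minimal sketch using a pytest fixture and the requests library. The base URL and the /customers and /orders endpoints are hypothetical; the point is the create-use-clean-up pattern, not the specific API.

```python
import pytest
import requests

BASE_URL = "https://webshop.example.com/api"  # hypothetical application under test


@pytest.fixture
def test_customer():
    # Setup: create the customer this test depends on through the application's API.
    response = requests.post(
        f"{BASE_URL}/customers", json={"name": "Jane Doe", "country": "NL"}, timeout=10
    )
    customer = response.json()
    yield customer
    # Teardown: remove the customer again so repeated runs don't pile up useless data.
    requests.delete(f"{BASE_URL}/customers/{customer['id']}", timeout=10)


def test_customer_can_place_an_order(test_customer):
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"customerId": test_customer["id"], "productId": 42},
        timeout=10,
    )
    assert response.status_code == 201
```

Note that this pattern adds the creation and deletion calls to every test's execution time, which is exactly the trade-off described in the first pitfall above.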

3. The Test Environment

Monoliths are rapidly going the way of the dinosaur. Modern IT systems consist of a number of interconnected components, services, and applications that work together to deliver business value. For testing purposes, however, this is not always good news: Having to manage and rely on the availability of dependencies, especially those outside your circle of control, for your integration and end-to-end tests can cause a lot of overhead, frustration, and delays in testing. Still, reliable and manageable test environments are key when you want to create and use automated tests as part of your testing approach.

One way to mitigate the risk of failing or nonexistent test environments is the use of simulation techniques such as stubbing, mocking, and service virtualization to replicate the behavior of critical yet hard-to-access dependencies in your test environment. Having simulations that mimic the actual dependencies' behavior enough to complete the test cases you want to execute can greatly speed up your automated testing—and, therefore, your development efforts.
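
As a simple example of this idea, the sketch below stubs out an HTTP dependency using the Python responses library. The tax service URL and its payload are made up; in practice you would replicate just enough of the real dependency's behavior to complete the test cases you want to run.

```python
import requests
import responses

# Hypothetical downstream dependency that is hard to reach from the test environment.
TAX_SERVICE_URL = "https://tax-service.internal/api/rates/NL"


@responses.activate
def test_tax_rate_lookup_against_a_stubbed_service():
    # Replace the real tax service with a canned response, so the test no longer
    # depends on that service's availability or on the data it happens to contain.
    responses.add(responses.GET, TAX_SERVICE_URL, json={"country": "NL", "rate": 0.21}, status=200)

    # In a real test, the application under test would make this call internally.
    rate = requests.get(TAX_SERVICE_URL, timeout=5).json()["rate"]

    assert rate == 0.21
```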

Furthermore, when virtual environments are set up properly (for example, by leveraging containerization), you can recreate an instance of the same test environment, complete with the same test data and other characteristics, on demand. This makes it possible to move from automated to truly continuous testing, which in turn is a prerequisite if you're looking to adopt continuous delivery to become more flexible and respond better to increasing market demands.
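
As a sketch of what this can look like in practice, assuming the testcontainers Python package and Docker are available on the machine running the tests, every run can spin up a fresh, identically configured database instance and throw it away afterward:

```python
from testcontainers.postgres import PostgresContainer


def test_against_a_fresh_database():
    # Every run gets a brand-new PostgreSQL instance with the same configuration,
    # so the test environment is fully reproducible and disposable.
    with PostgresContainer("postgres:16") as pg:
        db_url = pg.get_connection_url()
        # Here you would run schema migrations and seed scripts against db_url,
        # then point the application under test at this database.
        assert db_url.startswith("postgresql")
    # The container, and all data in it, is discarded when the block ends.
```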

4. Reporting

The reporting generated by an automated test run should be a crucial part of any solid test automation approach. Creating good test result reports is often overlooked, yet it is a potentially time- (and life-) saving task in any test automation project. Good reporting goes beyond displaying the number of tests run, passed, and failed, although having just that is better than nothing.

For a test run report to be truly valuable, it needs to make visible which tests were run (note that naming your tests in a clear and unambiguous manner is the basis for any good report!) and not only what the result was (pass or fail), but also, in case of a test failure, where something went wrong, detailed as precisely as possible.

This is different from providing an information overload by simply copying anything and everything into your test report; that would only delay getting to the root cause of a test failure. Good reporting shows that something went wrong, where in the test the error occurred (at which step), what the error message was (depending on the audience of your report, this can be as simple as a stack trace, but in other cases you might need to provide error messages that nontechies can read too), and what the state of the application under test was at the moment of the failure (for example, a screenshot for user interface-driven tests).
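
Capturing that application state can be automated. The sketch below is a pytest hook for a conftest.py file that saves a screenshot whenever a test fails; it assumes a Selenium WebDriver fixture named driver, which is an assumption of this example rather than a standard name.

```python
# conftest.py
import os

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Runs around report creation for each test phase; if the test body failed,
    # save a screenshot so the report shows the application state at failure time.
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # assumes a fixture named "driver"
        if driver is not None:
            os.makedirs("screenshots", exist_ok=True)
            driver.save_screenshot(os.path.join("screenshots", f"{item.name}.png"))
```

The saved image can then be referenced from whatever human-readable report you generate for the run.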

Note that a good reporting strategy might involve creating more than one report per test run. If your tests are part of a continuous delivery build pipeline, you might want to create low-level reporting that can be interpreted by your build engine to determine whether the build can be continued. But you might also want to create a readable report in HTML format, complete with a textual description of the purpose of the tests, as well as human-readable messages and screenshots in case of a failed test. It all depends on the audience.

5. Craftsmanship

The final, yet arguably most important, piece of the puzzle for creating a powerful and efficient test automation solution is the people who are responsible for implementing it. Without skilled automation consultants, architects, engineers, and developers paying attention to all the other aspects of test automation mentioned in this article, you'll likely get nowhere fast.

Ideally, your test automation team should be skilled both in the testing field, so they can answer why test automation would be a suitable solution in the first place and which tests should be automated, and in software development, meaning they know how to create a test automation implementation that is both powerful and maintainable. This does not mean that every member of your test automation team needs to be skilled in both areas, but as a whole, your team should possess a healthy balance of the two in order to deliver.

Putting It All Together

A good test automation solution needs to take more into account than just the tool that drives the tests. For automation to be truly successful, you need to give thought to your test data strategy, to how you manage your test environment, and to the way you inform your audience about the results of your automated test run. Most of all, however, it is about building a team of people who know how to do all of the above.

User Comments

Simon Rigler:

Excellent article! I've just put together seven key factors for a test automation presentation at my place of work, and I'm pleased to see your five pillars are all in there. I'm especially pleased to see 'data' in there, as I think it is often overlooked or massively underestimated.

October 31, 2017 - 5:34pm
Bas Dijkstra:

Thank you for the feedback, Simon! Out of curiosity, which two other factors did you mention?

November 1, 2017 - 6:31am
