Test Automation Stumbling Blocks: Foundational Problems


Oftentimes, test managers and automation engineers begin a test automation effort by picking an automation tool, installing it, learning about the application under test, and jumping directly into writing automation scripts. Sometimes, they take time to develop a reusable automation framework containing shared components and function libraries, which allows future automation to be built faster and more efficiently. But even when these initial steps are done correctly, managers, automation engineers, and testers still find themselves frustrated by test automation assets that don't live up to their expectations. Test results are inconsistent, and test maintenance, if done at all, takes much more effort than planned, making customers wary of the results.

When your customers lose confidence in the test results, the automation withers on the vine and becomes another example of failed expectations, leaving test managers to wonder why. When test automation fails to live up to expectations, it may be due to foundational problems not directly related to the automation itself. In fact, a test team can build automation that is efficient and spot-on with respect to application requirements and still suffer the fate of so many prior efforts. The key to overcoming these foundational problems is to work toward gaining predictability by removing as many extraneous variables from the effort as possible. These uncontrolled variables will quickly end the life of any test automation effort.

Before an automation tool is chosen and before a team talks about an efficient and effective automation framework, there are a few foundational items that must be addressed. From my experience, attempting to build test automation without ensuring these items are first in place is much like attempting to plant a garden in shallow soil. Whatever does take root and grow will never reach its full potential and will be much more susceptible to premature death.

Predictable Test Environment
Nothing is more frustrating to a tester than to get well into a passing test only to have the environment begin to fail around them. But where manual testers may be able to resume a test after the environment recovers, a test automation run generally cannot. While test environments will almost always be inherently less stable than their production counterparts, a measure of predictability is a must if test automation is to produce trustworthy results.

If software and infrastructure changes often bring the AUT down, those changes cannot overlap with automation runs without causing false negatives. In this case, scheduling recurring, non-overlapping release and test automation windows may be necessary. Additionally, requiring basic smoke tests to pass before allowing a build to remain in the test environment ensures at least the basic level of stability necessary for the test automation to run with predictable results.

The key here is being able to predict when the environment is likely to be down or unstable and avoiding automation execution during those times. More important is being able to predict when the environment is most likely to be up and scheduling the automation to run during those periods. We are not attempting to have our automated tests pass all the time; rather, we are simply attempting to remove as many instances of false, extraneous failures as possible.
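One simple way to avoid kicking off a run into a down environment is to probe the environment's health first and only start the suite once the probe succeeds. The sketch below assumes you can supply some zero-argument health check (for example, an HTTP GET against a health endpoint); the retry counts are illustrative defaults, not recommendations.

```python
import time

def wait_for_environment(probe, attempts=3, delay_seconds=5):
    """Return True as soon as probe() reports the environment healthy.

    `probe` is any zero-argument callable returning True/False, such as a
    request against the AUT's health endpoint. Retrying a few times before
    giving up filters out brief blips without masking a genuinely down
    environment.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            time.sleep(delay_seconds)
    return False
```

If `wait_for_environment` returns False, the scheduler can skip or defer the run and report "environment unavailable" instead of logging a wall of false negatives.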

Predictable Test Data
Predictability in test data ensures that the automated tests have correct, consistent records against which expected results can be determined. For systems in which the data can be re-created each time the test is executed, this may be trivial. However, for systems with more complex data structures, particularly those with data that is not easily re-created on the fly, keeping consistent, predictable test data in place is paramount.

If, for example, your application requires a user to log on with a username and password, something as small as an unexpected password change can stop your test automation in its tracks. Before a single line of test automation is written, the automation engineer and manager must ensure that they have a way to keep the data constant, run after run, month after month. Alternatively, if keeping the same data in place for an extended period of time is not feasible for your applications, the automation framework should be able to gracefully adapt to new test data with little manual intervention.
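Keeping credentials and other key records constant usually comes down to re-seeding a known baseline before every run. The sketch below uses a plain dictionary as a stand-in for the real data-access layer, and the account names and passwords are purely illustrative assumptions.

```python
# Baseline accounts the automation depends on; names and passwords
# are illustrative assumptions, not real credentials.
KNOWN_USERS = {
    "auto_user_01": "KnownPassword!1",
    "auto_user_02": "KnownPassword!2",
}

def reset_test_data(store):
    """Restore the known baseline before every automation run.

    `store` stands in for your real data layer (here, a simple
    username -> password mapping). Re-seeding before each run means an
    unexpected password change cannot stop the automation in its tracks.
    """
    for username, password in KNOWN_USERS.items():
        store[username] = password
    return store
```

For data that cannot be recreated this cheaply, the same idea still applies: the framework should verify or restore its expected records at startup rather than assume they survived since the last run.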

Predictable Automation Infrastructure
The automation infrastructure itself is often overlooked, but in today's world of virtual machines, multiple browsers, and centralized security controls, it is very easy to lose control over whether a host provides a predictable execution environment for your test automation. As an example, consider the consequences of an unexpected browser version upgrade on a virtual machine on which your automation is running.

If this new browser version displays message boxes or security warnings even slightly differently than the previous version and your automation is not expecting it, those tests will likely produce false negative results. The more control your engineers have over their automation hosts, the fewer unexpected variables will creep in and ruin an otherwise good test run.
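One lightweight defense is to pin the expected browser version and verify it before the run starts, failing fast with a clear message instead of letting the drift surface as scattered false negatives. The pinned version string below is an illustrative assumption; how you obtain the actual version (from WebDriver capabilities, the host image, etc.) depends on your stack.

```python
# Pinned browser version for the automation hosts; the value is an
# illustrative assumption, not a recommendation.
EXPECTED_BROWSER_VERSION = "115.0"

def verify_browser_version(actual_version, expected=EXPECTED_BROWSER_VERSION):
    """Fail fast if the host has drifted from the pinned browser version.

    One clear error at startup is far easier to diagnose than dozens of
    false negatives scattered through the test results.
    """
    if not actual_version.startswith(expected):
        raise RuntimeError(
            f"Automation host drifted: expected browser {expected}, "
            f"found {actual_version}. Re-pin the host before running."
        )
```

The same pattern extends to OS patch levels, driver versions, and security-policy changes: check anything that can silently change out from under the automation.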

While these three foundational items may seem obvious at first glance, they are often overlooked because it is assumed that if manual testers are not being impacted by them, neither will test automation. But unlike test automation, humans have a built-in ability to react and adjust to some level of unpredictability. Building that level of adaptability into test automation is, at best, extremely difficult. It is usually far easier to control these variables and set your automation up for success right from the beginning.

User Comments

1 comment
Sanat Sharma

Recently, I started automation testing for one of my projects, but the framework I received from the customer was written entirely in C. My test team took two weeks to understand the framework, but after that, all of my testers were quite productive at writing test cases.

- Sanat Sharma

August 5, 2013 - 2:52am

StickyMinds is a TechWell community.
