Transforming a Test Automation Maintenance Nightmare into Success

[article]
Summary:
Best practices for test automation emphasize reliability, portability, reusability, readability, maintainability, and more. But how can your existing automated test suite adopt these qualities? Should you address these issues with your current tests, or create an entirely new set of tests? Here are some questions that will help you determine if your test automation maintenance program is operating as it should be.

“Automation” is not a new buzzword in the industry. With the evolution of e-commerce and rapid access to mobile technology, delivering software applications as quickly as possible has been a trend for some time. But it’s difficult to appreciate the solution without truly understanding the problem. One size doesn't fit all, and there is not one perfect "best practice" solution that applies to all automation problems. We must weigh the cost, effort, and risk against potential benefits.

There are tons of online resources about best practices for test automation that emphasize reliability, portability, reusability, readability, maintainability, and more. When I first started creating automated tests, I found this information both helpful and stressful. How could it be practical to adopt all these practices for your tests from the get-go? If you are a test automation engineer, I’m sure you have faced some of these challenges at some point in your career as well.

Let me start with my journey of writing browser automation tests, then get into what I learned from my mistakes and how I overcame challenges.

Writing tests was initially time-consuming, and I was always trying to improve them as I cycled through them during maintenance. Just like any other development task, creating tests comes with deadlines and management expectations, and balancing these factors is crucial to the success of a test automation project.

To meet the schedule on my first project, I rushed to create the tests and didn't consider some of the best practices mentioned earlier. My tests were stable and passed 100% of the time—until the application under test (AUT) started changing a few months later. That's when the real quality of my tests came to the surface, and they became a maintenance nightmare.

Whenever a test failed, we spent lots of time trying to understand the cause of the failure so we could determine whether it was due to a regression, an expected change in the AUT, or environmental issues such as a new browser or system updates. After weeks of troubleshooting and frustration, we sat down to identify the issues with our tests.

Here is what we discovered:

  • Most of our tests were too big, consisted of too many steps, and tried to validate too many different functionalities. Many of these steps also depended on the successful execution of previous steps, and many inefficient wait conditions, such as static delays, had been added, which made execution unnecessarily long.
  • From a test report standpoint, test failures were often hard to understand. We weren't able to quickly determine the cause of a failure without spending a lot of time on the report. Many test steps had generic names or descriptions based on the element locator—for example, clickButton - //div[2]/button—and when such a test failed, it wasn't clear which button on the page it was referencing.
  • These tests weren't very portable. If someone wanted to execute tests from their own workstation, they had to set up a lot of additional environment data locally, because there were preconditions that needed to be established as part of the build process outside the test. Another issue was that tests had many hardcoded references, such as resource names, which made it inconvenient for anyone to run these tests outside their specific test environment.
  • Tests lacked execution repeatability. Rerunning a test in the same environment was challenging and often didn't work because people were not resetting the environment after the initial execution. This prevented quickly rerunning tests after updating them or after a new AUT build without manually setting up all preconditions outside the test.

Once we identified the issues, next we had to decide how to tackle them. Our choice was either to address those issues with our existing tests or to create an entirely new set of tests.

Since we had a large number of complex tests, we had to consider the time it would take to recreate them and assure management that the new tests would not have maintenance issues moving forward. After assessing effort vs. risk factors with the team, we unanimously decided that recreating all tests would not be a viable option. We didn’t want to accidentally miss any existing functionality covered by these tests.

That left us with the option of fixing existing tests using best practices that would address most of our maintenance challenges. Here is how we mitigated our identified problems.

First, we had to break down our existing, poorly crafted tests into smaller modules that could run independently and test specific functionality. Note that this may not be feasible for every organization, depending on the volume of tests and the amount of time it would take to refactor them. In such cases, the best thing to do is to leave old, stable tests as is and only refactor those that need immediate attention. Sometimes recreating tests may not be a bad option, considering the amount of time it would take to fix all existing tests.
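Our tests were browser tests, so here is a minimal sketch of that breakdown, assuming Selenium WebDriver with Python and a pytest-style layout purely for illustration (the URLs, locators, and test names below are hypothetical):

```python
# Hypothetical example: one focused, independently runnable test per functionality,
# instead of a single long test that validates search, cart, and checkout in sequence.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_search_returns_results():
    # Validates only the search functionality; no other test has to run first.
    driver = webdriver.Chrome()
    try:
        driver.get("https://aut.example.com/search")
        driver.find_element(By.NAME, "q").send_keys("laptop")
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        assert driver.find_elements(By.CSS_SELECTOR, ".result-item")
    finally:
        driver.quit()


def test_add_item_to_cart():
    # Validates only add-to-cart, with its own browser session and navigation.
    driver = webdriver.Chrome()
    try:
        driver.get("https://aut.example.com/items/123")
        driver.find_element(By.ID, "add-to-cart").click()
        assert "1" in driver.find_element(By.ID, "cart-count").text
    finally:
        driver.quit()
```

Each test drives only one piece of functionality and owns its own browser session, so a failure in one no longer cascades into the others.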

Then we separated out any commonly used test code and procedures into their own modules. Instead of copying and pasting these pieces of code all over, we simply started referencing them.
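As a rough illustration of that refactoring (again assuming Python and Selenium; the module and function names are invented), a repeated sequence such as logging in can live in one shared module that every test imports:

```python
# common/auth.py -- hypothetical shared module; a change to the login flow
# is now fixed here once instead of in every test that logs in.
from selenium.webdriver.common.by import By


def login(driver, base_url, username, password):
    """Log the given WebDriver session into the AUT."""
    driver.get(f"{base_url}/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-submit").click()


# tests/test_profile.py -- tests reference the shared helper instead of
# repeating the same copy-pasted steps:
#
#   from common.auth import login
#
#   def test_profile_page(driver):
#       login(driver, "https://aut.example.com", "qa_user", "secret")
#       ...
```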

Then we created setup (precondition) and teardown (resetting any changes the test introduced in the AUT) conditions as part of each test. Overall, this makes test execution a bit longer, but it’s worth it to have standalone tests that can execute quickly without any manual steps or running a lengthy build process.
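In pytest terms, one way to express this (the fixture names and the api_client helper below are assumptions for illustration, not our actual code) is to attach setup and teardown directly to the test:

```python
# Hypothetical pytest fixtures: each test gets its preconditions created before it
# runs and cleaned up afterward, so it can be rerun back to back in the same environment.
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Setup: start a fresh browser session for every test.
    drv = webdriver.Chrome()
    yield drv
    # Teardown: always close the browser, even if the test failed.
    drv.quit()


@pytest.fixture
def sample_order(api_client):  # api_client is an assumed helper for the AUT's API
    # Setup: create the data the test needs, instead of relying on a build step.
    order = api_client.create_order(items=["sku-123"])
    yield order
    # Teardown: undo whatever the test introduced so the environment is reset.
    api_client.delete_order(order["id"])
```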

We removed hardcoded references (host name, port, http, https, etc.), parameterized them, and made them configurable.
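A minimal sketch of that change, with invented variable names and defaults: environment-specific values come from configuration instead of being baked into the tests.

```python
# config.py -- hypothetical configuration module: nothing environment-specific
# is hardcoded in the tests themselves.
import os

BASE_URL = os.environ.get("AUT_BASE_URL", "https://qa.example.com:8443")
BROWSER = os.environ.get("AUT_BROWSER", "chrome")

# A test then builds its URLs from configuration:
#
#   from config import BASE_URL
#   driver.get(f"{BASE_URL}/login")
#
# Running against another environment only requires, for example:
#   AUT_BASE_URL=http://localhost:3000 pytest
```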

Finally, we optimized test execution time and remediated fixed delays by applying smart wait conditions, such as waiting until an element is visible or clickable. This helped us eliminate unnecessarily long delays during test execution.
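With Selenium WebDriver, for example (the locator and timeout below are illustrative), the difference between a static delay and a smart wait looks like this:

```python
# Replacing a static delay with an explicit "smart" wait.
# Assumes `driver` is an active WebDriver session created elsewhere in the test.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Before: always burns the full 10 seconds, even if the button is ready immediately.
#   time.sleep(10)
#   driver.find_element(By.ID, "checkout-button").click()

# After: polls until the button is clickable and proceeds as soon as it is,
# failing with a clear TimeoutException if it never becomes clickable.
WebDriverWait(driver, timeout=10).until(
    EC.element_to_be_clickable((By.ID, "checkout-button"))
).click()
```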

To make tests consistent, we defined a guideline and review process for everyone who would maintain these tests or create new tests in our infrastructure. The guidelines covered test and element naming, element locator strategy (when to use XPath vs. CSS), and documentation and comments. This consistency exercise made it easy for anyone on the team to read and maintain our tests.
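To make the guideline concrete, here is the flavor of rule we mean, with hypothetical locators and names rather than our real ones:

```python
# Assumes `driver` is an active Selenium WebDriver session.
from selenium.webdriver.common.by import By

# Discouraged: a generic step name tied to a brittle, positional XPath.
# When this fails, the report only says something like "clickButton //div[2]/button".
#   driver.find_element(By.XPATH, "//div[2]/button").click()

# Preferred: a stable attribute-based locator and a name that reads well in a report.
checkout_button = driver.find_element(By.CSS_SELECTOR, "[data-testid='checkout']")
checkout_button.click()  # a failure here points clearly at the checkout button
```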

In addition, we set a team goal of maintaining 100% passing tests, addressing any failing test as quickly as possible. A test could be failing or unstable due to environmental issues, poor test or wait conditions, a constantly changing AUT, etc., so it’s always best to identify such failures in a timely manner and attempt to address them immediately. This will help retain the team's confidence in your automated tests.

These are the issues and solutions for our suite of automated tests, but again, there is no one correct formula that will work for everyone. How do you know if you have adopted a best-practice approach with your set of automated tests?

Asking these questions will help you determine if your test automation maintenance program is operating as it should be:

  • Are the tests easily maintainable by your team?
  • Are they stable and repeatable?
  • Are there any intermittent failures without any changes to the AUT?
  • Can others run these tests in their environment without performing a bunch of setup work? (With browser tests, as long as the AUT is set up, anyone should be able to run these tests from any machine with a supported browser.)
  • Do tests take a long time to execute?
  • Can someone understand a possible failure based on the test report?
  • Do you have to apply the same fix in multiple places? Can you refactor the common piece of code into one module and reference it?

Don’t be afraid to ask for feedback from developers and other team members about your tests. There is always room for improvement, and as a test automation engineer, you should always strive for it.

User Comments

Akshaya Choudhary (January 25, 2020):

Excellent points, Vinay. And yes, it certainly is challenging to incorporate all the best practices from the very beginning. Test data management and test environment management gets tricky.
