Although there’s no shortage of test automation success stories floating around software testing conferences, webinars, and publications, they primarily feature developers and technical testers who 1) are focused on testing simple web UIs, and 2) have had the luxury of building their applications and testing processes from the ground up within the past few years. Their stories are compelling—but not entirely relevant for the typical company with heterogeneous architectures, compliance requirements, and quality processes that have evolved slowly over decades.
How can mature companies with complex systems achieve the level of test automation that modern delivery schedules and processes demand? The fast answer is: It depends.
Let’s look at the top four strategies that have helped many organizations finally break through the test automation barrier after many years of trying:
- Simplify automation across the technology stack
- End the test maintenance nightmare
- Shift to API testing whenever feasible
- Choose the right tools for your needs
As you read through them, it’s critical to recognize that there is no single right approach that suits every department in every organization. For each of the top strategies, I’ll point out some key considerations that could impact its importance in your organization.
Simplify Automation across the Technology Stack
Traditional approaches to test automation rely on script-based technologies. Before automation can begin, a test automation framework must be developed. Once the framework is finally implemented, tested, and debugged, test scripts can be added to leverage that framework. As the application evolves, these test scripts—and the test automation framework itself—also need to be reviewed, potentially updated, and debugged.
Often, significant resources are required to ramp up test automation for just a single technology (e.g., a web UI or mobile interface). This could include training existing testers on the specific scripting approach you’ve selected, reallocating development resources to testing, or hiring new resources who have already mastered that specific approach to script-based test automation.
Even testers who are well-versed in scripting find that building, scaling, and maintaining test automation is a tedious, time-consuming task. It’s often a distraction from testers’ core competency: applying their domain expertise to identify issues that compromise the user experience and introduce business risks.
If you have a heterogeneous application stack to test (for example, packaged applications such as SAP, Salesforce, ServiceNow, or Oracle EBS plus APIs, ESBs, mainframes, databases, and web and mobile front ends), multiple frameworks will need to be learned, built, and linked in order to automate an end-to-end test case. Selenium—by far the most popular of all modern test automation frameworks—focuses exclusively on automating web UIs. For mobile UIs, you need Appium, a similar framework. Also testing APIs, data, packaged applications, and so forth? That means that even more tools and frameworks need to be acquired, configured, learned, and linked together.
Now, let’s take a step back and remember the ultimate goal of automation: speeding up your testing so that it can be performed as rapidly and frequently as needed. To achieve this, you need a test automation approach that enables your testing team to rapidly build end-to-end test automation for your applications.
If your testing team is made up of scripting experts and your application is a simple web app, Selenium or free Selenium-based tools might be a good fit for you. If your team is dominated by business domain experts and your applications rely on a broader mix of technologies, you’re probably going to need a test automation approach that simplifies the complexity of testing enterprise apps and enables the typical enterprise user to be productive with a minimal learning curve.
You might find that different parts of your organization prefer different approaches (e.g., the teams working on customer-facing interfaces such as mobile apps might not want to use the same testing approach as the teams working on back-end processing systems). That’s fine—just ensure that all approaches and technologies are connected in a way that fosters collaboration and reuse while providing centralized visibility.
This is most important for testing in complex enterprise environments that involve multiple technologies—for example, packaged apps plus APIs, ESBs, web, and mobile. The more distinct interfaces you are testing, the more you should prioritize this strategy. If you are a small team testing a single interface, this probably is not an issue for you.
End the Test Maintenance Nightmare
If your tests are difficult to maintain, your test automation initiative will fail. If you’re truly committed to keeping brittle scripts in check, you’ll sink a tremendous amount of time and resources into test maintenance—eroding the time savings promised by test automation and making testing (once again) a process bottleneck.
If you’re not 100% committed to maintaining tests, your results will be riddled with false positives (and false negatives) to the point that they are no longer trusted.
Maintenance issues stem from two core problems:
- Tests that are unstable
- Tests that are difficult to update
If your automated test starts failing when your application hasn’t changed, you’ve got a stability problem on your hands. The key to resolving it is to find a more robust way of expressing the test.
There are a number of technical solutions for addressing this when it occurs (e.g., using more stable identifiers). These strategies are important to master. However, it’s also essential to consider test stability from the very start of your test automation initiative. When you’re evaluating test automation solutions, pay close attention to how the tool responds to acceptable and expected variations and how much work is required to keep the tool in sync with the evolving application. Also, recognize that even the most stable tests can encounter issues if they’re being run with inappropriate test data or in unstable or incomplete test environments.
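To make the “more stable identifiers” point concrete, here is a deliberately simplified sketch that uses Python’s standard-library XML parser to stand in for a browser DOM. The page markup, the `submit` id, and the layout change are all hypothetical; in a real Selenium suite, the same contrast applies to a positional `By.XPATH` locator versus a `By.ID` locator.

```python
import xml.etree.ElementTree as ET

# Two versions of the same (hypothetical) page: v2 adds a banner <div>
# above the form, which shifts the positional layout but changes no ids.
PAGE_V1 = ("<html><body><div><form>"
           "<input id='user'/><button id='submit'>Go</button>"
           "</form></div></body></html>")
PAGE_V2 = ("<html><body><div class='banner'/><div><form>"
           "<input id='user'/><button id='submit'>Go</button>"
           "</form></div></body></html>")

POSITIONAL = "./body/div[1]/form/button"   # brittle: encodes the layout
STABLE = ".//button[@id='submit']"         # robust: relies on a stable id

def find(page, xpath):
    return ET.fromstring(page).find(xpath)

# The positional locator works on v1 but silently misses on v2...
assert find(PAGE_V1, POSITIONAL) is not None
assert find(PAGE_V2, POSITIONAL) is None
# ...while the id-based locator survives the layout change.
assert find(PAGE_V1, STABLE) is not None
assert find(PAGE_V2, STABLE) is not None
```

The test that uses the positional locator would start failing (or, worse, checking the wrong element) even though the functionality under test never changed—exactly the instability described above.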
To address the updating issue, modularity and reuse are key. You can’t afford to update every impacted test every time that the development team improves or extends existing functionality (which can now be daily, hourly, or even more frequently). For the efficiency and “leanness” required to keep testing in sync with development, tests should be built from easily updatable modules that are reused across the test suite. When business processes change, you want to be able to update a single module and have impacted tests automatically synchronized.
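A minimal sketch of that modularity principle: a hypothetical `login` step shared by two end-to-end tests. The field names and the `"secret"` check are placeholders rather than a real authentication flow; the point is that when the login process changes, only the one module is edited and every test that reuses it is updated automatically.

```python
# Hypothetical reusable module: every end-to-end test goes through this
# one function, so a change to the login flow is made in exactly one place.
def login(session, username, password):
    session["user"] = username
    session["authenticated"] = (password == "secret")  # placeholder check
    return session

def test_checkout_flow():
    session = login({}, "alice", "secret")  # reused step, not re-scripted
    assert session["authenticated"]

def test_profile_update_flow():
    session = login({}, "bob", "secret")    # reused step, not re-scripted
    assert session["authenticated"]

test_checkout_flow()
test_profile_update_flow()
```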
This strategy is most important for teams hoping to achieve high levels of automation and teams working with actively evolving applications. If you’re trying to automate a few basic tests for a relatively static application, you might have sufficient time and resources to address the required maintenance. However, the more test automation you build or the more frequently the application is changing, the sooner test maintenance will become a prohibitive nightmare.
Also, fast-growing and high-turnover teams are more vulnerable to “test bloat”: an accumulation of redundant tests that add no value in terms of risk coverage but still require resources to execute, review, and update. Focusing on reuse and applying good test design strategies will keep bloat to a minimum.
Shift to API Testing
Today, UI testing accounts for the vast majority of functional test automation, with only a small fraction of testing being conducted at the API level. However, a second look at the Continuous Testing Rainbow shows that we need to reach a state that’s essentially reversed: the bulk of automated tests exercising APIs, with UI tests reduced to a small fraction.
Why? API testing is widely recognized as being much more suitable for modern development processes because:
- Since APIs (the "transaction layer") are considered the most stable interface to the system under test, API tests are less brittle and easier to maintain than UI tests
- API tests can be implemented and executed earlier in each sprint than UI tests (and moreover, with service virtualization simulating APIs that are not yet completed, you can shift testing even further left with a TDD approach)
- API tests can often verify detailed “under-the-hood” functionality that lies beyond the scope of UI tests
- API tests are much faster to execute and are thus suitable for checking whether each new build breaks existing functionality
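To make the contrast concrete, here is a minimal, self-contained sketch of an API test in Python. The endpoint, path, and JSON payload are invented, and the service is stubbed with a local standard-library HTTP server purely so the example runs anywhere; in practice the test would target the application’s real (or virtualized) API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stub of a hypothetical orders API, so the sketch is self-contained.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "total": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The API test itself: no browser, no rendering—just a fast, stable
# assertion on the transaction layer.
resp = urlopen(f"http://127.0.0.1:{server.server_port}/orders/1")
data = json.loads(resp.read())
assert resp.status == 200
assert data["total"] == 42

server.shutdown()
```

Because the whole check is a request and a couple of assertions, tests like this can run on every build—which is what makes the API layer suited to carrying the bulk of the automation load.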
In fact, recent studies by Tricentis have quantified some of the key advantages of API testing over UI test automation.
This leads to my recommended take on the test pyramid:
The red tip of the pyramid indicates the role that manual testing (typically via exploratory testing) is best suited to play in modern development processes. The green band represents what we’ve found to be the “sweet spot” for UI test automation. The vast majority of the triangle is covered by API testing, which builds upon development-level unit testing.
On a bit of a side note, it’s important to recognize that over time, the test pyramid actually erodes into a diamond. The bottom falls out, making the pyramid unstable—but there are things you can do to prevent that.
From a practical standpoint, how do you determine what should be tested at the API layer and which tests should remain at the UI layer? The general rule of thumb is that you want to be as close to the business logic as possible. If the business logic is exposed via an API, use API tests to validate that logic. Then, reserve UI testing for situations when you want to validate the presence and location of UI elements or functionality that are expected to vary across devices, browsers, etc. In parallel, developers should be testing the API’s underlying code at the unit level to expose implementation errors as soon as they are introduced.
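The layering rule above can be sketched with a hypothetical discount calculation; the `apply_discount` function, the `SAVE10` code, and the figures are all invented for illustration. The business logic is asserted directly—as a unit or API test would—leaving the UI suite to verify only where and how the result is rendered.

```python
# Hypothetical business logic that, in this sketch, would also be exposed
# through a pricing API endpoint.
def apply_discount(total, code):
    return round(total * 0.9, 2) if code == "SAVE10" else total

# Unit/API level: validate the logic itself, as close to it as possible.
assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(100.0, "BOGUS") == 100.0

# UI level would then be reserved for presentation concerns, e.g. that the
# discounted price appears in the expected element across browsers/devices.
```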
Obviously, if the functionality you’re tasked with testing is not exposed via APIs, this is not a viable strategy for you. For example, if you’re testing an SAP application that’s not leveraging APIs, API testing simply isn’t an option. You need to ensure test repeatability and stability in another way.
Choose the Right Tools for Your Needs
There’s no shortage of open source and free test automation tools on the market. If you’re introducing test automation into a small team testing a single web or mobile interface or isolated APIs, you can likely find a free tool that will help you get started and achieve some impressive test automation gains.
On the other hand, if you’re a large organization testing business transactions that pass through SAP, APIs, mainframes, web, mobile, and more, you need a test automation tool that will simplify testing across all these technologies—in a way that enables team members to efficiently reuse and build upon each other’s work.
However, before you focus on selecting a tool, consider this: The greatest mistake organizations make with test automation initiatives is thinking that acquiring a test automation tool is the most important step in adopting test automation. Unfortunately, it’s not that easy. No matter which tool you select, it’s essential that you regard it as just one component of a much broader transformation that touches process, people, and technologies.
Cost is undeniably a factor in every tool acquisition decision. Be sure to consider the total cost of ownership—including what’s required to train and ramp up your existing resources (or hire additional ones), build test frameworks, and build and maintain tests.
Also, recognize that it’s fully feasible (and often valuable) to have different teams using different tools. A small team creating a mobile app for your annual corporate event does not need to use the same tool as the team testing how your SAP-based business critical transactions are impacted by frequent upgrades. “Single pane of glass” reporting provides centralized visibility while allowing each team and division to choose the best tool for their needs.