Automated testing is as old as software programming itself, yet a lot of companies and teams struggle to adopt it in their work. If you have ever heard people say that automation testing did not work for them, or that they can’t rely on their automated tests, or—my favorite—that automation has made QA roles irrelevant, then maybe they just aren’t thinking about automation the right way.
If you type anything about automation testing into a search engine, you will get flooded with results about numerous tools that help you write automated tests. Similarly, if you search for automation frameworks, almost every link talks about their main components, such as the driver, utilities, database component, results storage, test case management, and test data generation. But few sites talk about automation strategy and the role it plays in the success of automation efforts.
Automation strategy is one of the most important aspects of any testing framework, because everything else depends on it. I like to use the 5W1H questions to outline automation plans: who, where, when, what, why, and how. The five W’s make the automation strategy clear (arguably in ascending order of importance), while the “how” deals with the tooling.
A few years back I was leading a kickoff meeting for a new project. I was informed that the client had approached our company with a long version of “Automation isn’t really working for us.” I went into the kickoff meeting thinking it would be a typical “write or tweak framework and tests” kind of assignment, but to my surprise, this was different.
The client already had a lot of UI tests. The tests were written in QTP and ran in parallel on a good setup of VMs. Most tests were valid, reliable, and passed with no issues. Honestly, I initially had no clue what we were even supposed to do, as almost everything we were seeing looked good.
I started asking a lot of questions about their development process and how many unit tests and component or API-level tests they had. At first, I could see they were resisting all those questions because they wanted me to focus on their UI tests, as that’s what they had contacted our company about. But we pushed through, and after a lot of Q&A, we had an idea of what they were struggling with.
They had a lot of tests that they had written over a period of four years on a legacy product. In terms of test organization, it almost seemed like marbles thrown on a floor. Everyone knew what a test did, but no one had any idea what the suite of those tests meant for the product.
Their tests passed but had a lot of white-box issues, including poorly defined test objectives, test redundancy, and poorly defined or poorly coded exit criteria. And to top it all off, they had no execution strategy for those tests. A deeper look revealed that their tests had not only false failures but also false passes, which are much more dangerous than false failures. Over the next four months, our team worked tirelessly to clean up their tests.
We deleted a ton of UI tests in areas where API coverage existed and wrote new UI tests for areas where APIs did not exist or were not consumed by the app. Then we took all the UI tests and divided them into four categories: sanity, which checked that all major modules of the app were up and running; critical, which checked data creation for main functionality; major, which performed CRUD (create, read, update, delete) database testing for the main functionality; and regression, which was the full suite of tests. In the end, their regression completed in one-tenth the time. Plus, they were not running the same tests all the time, which further optimized their workflows.
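To make the categorization concrete, here is a minimal sketch of how such a split might look using pytest markers. The OrderService class, the fixture, and the individual tests are hypothetical stand-ins for whatever the application under test actually exposes; the client’s own suite was written in QTP, so this is only an illustration of the idea, not their implementation.

```python
# A minimal sketch of the sanity/critical/major categories as pytest markers.
#
# pytest.ini (so pytest does not warn about unknown markers):
# [pytest]
# markers =
#     sanity: major modules of the app are up and running
#     critical: data creation for the main functionality
#     major: CRUD coverage for the main functionality
import pytest


class OrderService:
    """Tiny in-memory stand-in for the application under test."""

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def ping(self):
        return "ok"

    def create(self, item):
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = item
        return order_id

    def read(self, order_id):
        return self._orders[order_id]

    def update(self, order_id, item):
        self._orders[order_id] = item

    def delete(self, order_id):
        del self._orders[order_id]


@pytest.fixture
def service():
    return OrderService()


@pytest.mark.sanity
def test_service_is_up(service):
    # Sanity: the major module is up and responding.
    assert service.ping() == "ok"


@pytest.mark.critical
def test_order_creation(service):
    # Critical: data creation for the main functionality works.
    assert service.create("book") == 1


@pytest.mark.major
def test_order_crud_round_trip(service):
    # Major: full create/read/update/delete cycle for the main functionality.
    order_id = service.create("book")
    assert service.read(order_id) == "book"
    service.update(order_id, "e-book")
    assert service.read(order_id) == "e-book"
    service.delete(order_id)
    with pytest.raises(KeyError):
        service.read(order_id)
```

With markers like these in place, suite selection becomes a one-line expression such as `pytest -m sanity` or `pytest -m "sanity or critical"`; the regression category in the story is simply the whole suite, so it does not need a marker of its own.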
What they were essentially lacking in their efforts was an appropriate test automation strategy. They had limited their viewpoint of automation testing to just the UI, so when they gave the UI their complete attention, they unknowingly set the wrong expectations for their automation, consequently also setting counterproductive goals for their teams. All we did was rearrange their tests to be used more efficiently in their workflows.
Most quality engineers and SDETs, when they think of automated testing, immediately think of automating the UI, and if there are not enough tests to be written on the UI for whatever reason, they start getting restless. Instead, I often push engineers to evaluate the state of their unit, component, and integration tests before bothering with UI tests. Of course, no one likes this at first, but it’s the only practical way to determine which layer of the test pyramid to automate first (unit, service, or UI), which tests should be written first, how the tests will be categorized, and what the execution strategy will be.
Once you challenge engineers to answer these questions, the need to review the existing state of their automation at the unit, component and API, and UI levels slowly starts making sense. You do this exercise once, and you have your first experience with automation strategy.
When people do not have good luck with automation, it is hardly ever because of the tool being used, but almost always because of the wrong automation strategy, wrong expectations, and wrong adoption of automation. Automation tools only answer the “how” of automation, while having an automation strategy gives answers to who, where, when, what, and why.
From determining what tests to write, to determining where to place them, to deciding which tests to execute at which phase of development, every decision is driven by the automation strategy that you put in place. Simply put, having a lot of stable automated tests without any strategy is like running on a treadmill: while you do a lot of work, it doesn’t really take you anywhere.
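As a small, hedged illustration of that last point, here is what an execution strategy might look like on top of the markers from the earlier sketch. The phase names and the mapping are assumptions of mine for illustration, not the client’s actual pipeline; the point is that the strategy, not the tool, decides what runs when.

```python
# A minimal sketch of an execution strategy: map each development phase to a
# pytest marker expression, so the plan (not the tool) owns what gets run.
import subprocess

EXECUTION_PLAN = {
    "pre-merge": "sanity",                     # quick health check on every pull request
    "post-deploy": "sanity or critical",       # verify main functionality after a deployment
    "nightly": "sanity or critical or major",  # deeper CRUD coverage once a day
    "release": "",                             # empty expression = run the full regression suite
}


def run_suite(phase: str) -> int:
    """Build and run the pytest command selected for a development phase."""
    expression = EXECUTION_PLAN[phase]
    command = ["pytest"]
    if expression:
        command += ["-m", expression]
    return subprocess.call(command)


if __name__ == "__main__":
    raise SystemExit(run_suite("pre-merge"))
```

Writing the plan down like this is what keeps teams from running the same tests all the time; the script itself is incidental.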
A good test strategy can cover the constraints of a tool, but even the best tool cannot cover the constraints of a bad test strategy.
User Comments
I actually think, based on experience, that most automation testing engineers think first of automating the service layer, and the UI is the last thing they think about. Anyway, I'm interested in your opinion about the relationship between the automation framework selected and the maturity of the software to be tested. Looking forward to your opinion.
Hi Agustin, wish you a happy new year, and thanks for reading through. I apologize for the late reply. I am glad that you have had a good experience with automating the service layer first. On your question: if I correctly understand what you mean by 'maturity' of the application under test, I am going to answer assuming you mean how well the application is engineered and/or the state of quality of the application. I believe the choice of framework should, in most cases, be agnostic to the maturity of the application, because if your application is not mature enough now, the hope is that some day it will be, and it would not be wise to switch frameworks based on that. However, how you architect the solution with a particular framework should, I believe, always be in accordance with the maturity of the application under test. For example, if you are following a Page Object Model and your application is not mature enough, that might drive decisions like how well your page objects are abstracted and how your components are designed, but it should not stop you from using POM. I suggest that on such occasions you choose the right framework design but then architect your framework alongside the application, scaling it and maturing it as your application matures.
Hi Akash,
Thank you for sharing your experience. I completely agree that setting the right expectations and building a well-thought-out strategy is essential to the success of test automation.
This article makes even more sense today, with all kinds of tools appearing on the market and a lot of hype around becoming an automation test engineer without understanding the basics of software testing and the layers of the test pyramid.