Five Ways to Make Test Automation Fail

[article]
Summary:

Test automation promises much, but it can deliver disastrous results if implemented poorly. Laura Salazar takes a look at five common practices that can cause automation projects to fail.

Test automation is undeniably one of the key strategies for any QA manager, and for good reason: Automation promises faster regressions, higher productivity, better quality, and reduced costs. However, many QA managers fail to reach those results. Instead, they face late deliveries, expensive tool purchases, and a lot of frustration. But what are the causes? In this article, I will review five of the most common practices that cause automation projects to fail.

1. We do not need an automation plan.
“Let’s start scripting ASAP,” says the QA manager.

Imagine this: You and your coworker just arrived in the US to perform a knowledge transfer. At the airport, you suggest looking for a road map, but your coworker says, “That’s not necessary. I’ve been here before and can find our way around.” You decide to put your concerns aside, get into the car, and start driving. After two hours on the road, you’re hungry, angry, and your gas tank is running on empty. You stop at a gas station to buy a map and ask a clerk for directions.

Sound familiar? I can assure you the same happens when you don’t have an automation plan—and you are not going to find one in a convenience store. An automation plan gives you a clear view of your scope, the resources you need, and the roadmap you have to follow to achieve your objectives; it also gives you a way to measure how far you are from your goal, provides focus, and identifies risks.

2. Let’s use the best, most powerful automation tool in the market.
“This guarantees our success,” says the QA manager.

Now, think of your dream car. For example, a Ferrari Testarossa—a high-quality piece of engineering with almost 300 horsepower and luxurious interiors that costs half a million dollars. Now, picture that fine machine parked in front of an elementary school, 300 HP on hold. The mom in the driver’s seat is waiting for her children, trying to put on some makeup, and singing to the baby in the back. The baby is spilling milk and running his sticky fingers over the luxurious seats. The sweaty boys outside in their soccer gear are scratching the car and damaging the paint job.

An SUV would be a far better fit for that job. I do not have anything against SUVs; I just think they are called “mom-mobiles” for a good reason. They are perfect for that kind of use. The same applies to automation tools: the most complete, powerful tool is not necessarily the best tool for your project.

To select a tool, you will need to consider factors such as budget, the application’s technology, your engineers’ skills, the training curve, and vendor support. You would never buy a car without a test drive, would you? The same applies to your automation tool. Once you have made your selection, perform a proof of concept (POC) to verify that the tool satisfies your expectations. If your client or QA organization already has a tool, do a POC anyway, as it will give you information about potential issues with the application technology and help you fine-tune your estimations.
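One lightweight way to make that comparison explicit is a simple weighted scoring matrix. The sketch below is only an illustration; the factor weights, tool names, and POC scores are made-up placeholders, so substitute your own evaluation.

# A minimal sketch of a weighted scoring matrix for tool selection.
# The factors come from the list above; the weights, tool names, and
# scores are hypothetical placeholders, not recommendations.

FACTORS = {                 # relative importance of each factor (sums to 1.0)
    "budget_fit": 0.25,
    "technology_support": 0.30,
    "team_skills_match": 0.20,
    "training_curve": 0.10,
    "vendor_support": 0.15,
}

# Scores on a 1-5 scale, gathered during the proof of concept.
candidates = {
    "Tool A": {"budget_fit": 4, "technology_support": 3, "team_skills_match": 5,
               "training_curve": 4, "vendor_support": 3},
    "Tool B": {"budget_fit": 2, "technology_support": 5, "team_skills_match": 2,
               "training_curve": 2, "vendor_support": 5},
}

def weighted_score(scores):
    """Combine per-factor POC scores into a single comparable number."""
    return sum(FACTORS[factor] * scores[factor] for factor in FACTORS)

for name in sorted(candidates, key=lambda n: -weighted_score(candidates[n])):
    print(f"{name}: {weighted_score(candidates[name]):.2f}")

The point is not the arithmetic but the discipline: writing the factors down forces you to compare tools against your project, not against each other’s feature lists.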

3. We do not need standards or a framework.
“That requires time that we do not have. We need to start scripting right away, and I’ve got experienced people who know what they’re doing,” says the QA manager.

The recently released movie The Avengers teaches us that having a team of superheroes in a room doesn’t guarantee anything. You may have five automation superheroes, but in the best-case scenario you are going to end up with five perfect, innovative super-scripts that all do the same thing. In other words, you will have spent effort and time that probably did not get you closer to your goal.

On the other hand, let’s say that you have a smart, energetic, and proactive team with no experience in automation. How will they know the best way to do it? And how can they guarantee that their scripts have the same level of quality?

One of the biggest automation problems is not figuring out how to finish the scripts but figuring out how to maintain them. To keep the maintenance effort manageable, your scripts should be standardized, well documented, reusable, and easy to modify. That is what a framework—even a simple one—can give you: standards that show your junior team members what the expected result looks like, and a mechanism for focusing your senior team members on the highly complex or reusable parts of the automation scripts.
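To make that concrete, here is a minimal sketch of the kind of structure even a simple framework provides. The page-object pattern and every name in it (LoginPage, FakeDriver, the locators) are assumptions for illustration, not the author’s framework.

# A minimal sketch: seniors own the reusable, documented building blocks;
# juniors compose tests from them instead of re-inventing the steps.

class LoginPage:
    """Reusable building block: the one place that knows how login works."""

    def __init__(self, driver):
        self.driver = driver                      # any UI driver exposing type/click

    def login(self, user, password):
        self.driver.type("username", user)        # fill in credentials
        self.driver.type("password", password)
        self.driver.click("submit")
        return self.driver.is_visible("welcome-banner")


def test_valid_user_can_log_in(driver):
    # A junior engineer's script only expresses intent; the "how" lives
    # in the reusable page object maintained by the senior engineers.
    assert LoginPage(driver).login("qa_user", "secret")


class FakeDriver:
    """Stand-in driver so the sketch runs without a real UI tool."""
    def type(self, locator, text): print(f"type {text!r} into {locator}")
    def click(self, locator): print(f"click {locator}")
    def is_visible(self, locator): return True


if __name__ == "__main__":
    test_valid_user_can_log_in(FakeDriver())

When the login screen changes, only LoginPage changes; every script built on top of it keeps working, and that is where the maintenance effort is won.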

4. Automation and manual test engineers are independent teams.
“They have different skills and different objectives, so let’s keep them apart,” says the QA manager.

Let me describe, from my perspective, how an offensive team in American football is built. It has two types of players: the lean, muscular, agile, and fast players who can cover a great number of yards by air or ground, and the aggressive, motivated, larger players who move only a few yards each play and protect the first group. What happens when they do not work as a team with a common strategy? The opposing team painfully crushes them.

Automation testing can cover a great amount of functionality faster and with consistent quality, and it also helps you verify that what has already been tested has not changed. Manual effort, on the other hand, tests what is new and what has changed, what is too complex to implement with automation, or what requires human skill and experience. They are complementary tools, and you are going to get the best ROI when they work together under a global strategy. If you are making the automation team drag along behind the manual team, using the automation team only to execute and debug, trying to automate everything new and old, or trying to make your automation team experts in the application domain, then you do not have a global strategy. You probably also don’t have an effective and productive team.

5. We are going to automate 100 percent of our regressions.
“We have the team and the tool, so let’s get the best out of them,” says the QA manager.

This is the most common problem in automation testing—trying to automate what is not automatable. Have you seen those lengthy cycling races, such as the Tour de France? They are planned in stages, and every stage has its winner. However, winning a stage does not guarantee winning the whole race; in fact, the 2011 overall winner, Cadel Evans, won only one stage (stage 4).

Automation is also a long-term race in which you need to analyze how to use your resources to achieve your final goals and in which, often, you may need to lose a battle to win the war. Not all test cases can be automated, and not everything that can be automated is a good candidate. To automate a test case, it needs to be tool-friendly from the technology perspective, have the correct data, be free of human interaction, and be stable. A test case that is going to be automated should be expected to run at least three times, and it should be automatable within your timeframe (i.e., if it takes too long to automate, it may not be a good candidate). Usually, even in the best of scenarios, only 75 to 85 percent of regressions can be automated.
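Expressed as code, those criteria amount to a simple filter over the regression suite. The sketch below is only an illustration; the field names, the effort threshold, and the sample test cases are assumptions, not data from a real project.

# A minimal sketch of the selection criteria above expressed as a filter.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    tool_friendly: bool      # technology is supported by the chosen tool
    data_available: bool     # correct test data exists or can be generated
    needs_human: bool        # requires human judgment or interaction
    stable: bool             # feature is not changing every sprint
    expected_runs: int       # how many times it will actually be executed
    automation_hours: float  # estimated effort to automate it

def is_automation_candidate(tc, max_hours=16):
    """Return True only when every criterion listed above is met."""
    return (tc.tool_friendly and tc.data_available and not tc.needs_human
            and tc.stable and tc.expected_runs >= 3
            and tc.automation_hours <= max_hours)

regression_suite = [
    TestCase("login smoke", True, True, False, True, 12, 4),
    TestCase("usability review", True, True, True, True, 12, 2),   # needs a human
    TestCase("new beta flow", True, False, False, False, 1, 40),   # unstable, runs once
]

candidates = [tc.name for tc in regression_suite if is_automation_candidate(tc)]
print(candidates)   # -> ['login smoke']

Whatever form your checklist takes, applying it before scripting starts is what keeps the team from chasing the last, unautomatable 15 to 25 percent.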

Are those five the only causes of failure in automation projects? Absolutely not. I could give you ten more examples, and you have likely experienced even more than that. But I would like to offer one more piece of advice: If you are going to start an automation project, put your shoulders back, sit up straight, and expect the worst. Remember Murphy’s Law: Anything that can go wrong will go wrong.

User Comments

2 comments
Jim Hazen

Yep, it is all about misconceptions and false perceptions of what test automation and the tools that implement it are about.

The fallacy that getting a tool will solve all your problems is still as much an issue today as it was 20 years ago when the GUI-based tools appeared.

My two favorite sayings apply here: "It's Automation, Not Automagic!" & "A fool with a tool is still a fool."

July 30, 2012 - 11:20am
Juergen Brueckler

I see a software development project including test automation as a truck with trailer running down the road.

The truck is being steered by the project lead, software architects - whoever.

Inside the truck there's the development team piling up boxes.

Inside the trailer there's the test (automation) team also piling up boxes.

Very often the project lead makes the test automation team believe that they are driving down a highway without any curves. So the automation team does the best it can and builds a beautiful, small but tall pile of boxes.

Unfortunately, the QA manager was not invited to the meeting where the project roadmap was discussed, so he does not know that the road will make some 90-degree curves within the next few versions.

The developers are informed and start re-piling their boxes.

The test automation engineers (in the trailer) do not see what's going on in front of them and are happily continuing to pile up boxes.

Then comes the curve - the next version - and BANG, all the piles tumble around the trailer. Test execution for this version: only 20 percent; all other tests are broken. Maintenance effort to get it fixed again: not planned in the project.

I've been inside the trailer a few times before.

My lessons learned: If you feel that information flows poorly within the project, keep the pile low and wait for the first curves to come. They will definitely come!

Regards,

Juergen

July 31, 2012 - 8:28am
