Your Test Automation Strategy Needs to Be Optimized: An Interview with Mary Thorn

[interview]
Summary:

In this interview, Mary Thorn of Ipreo explains how to get the most out of your team's test automation strategy. She explains why you need to implement automation, the balance between manual and automated testing, and what to do when a team's automation completely lacks a strategy.

Josiah Renaudin: Welcome back to another TechWell interview today. I'm joined by Mary Thorn, the director of software test engineering and agile practices at Ipreo and a keynote speaker at this year's STARWEST conference. Mary, thank you very much for joining us today.

Mary Thorn: Thanks, Josiah. I really appreciate it. I'm looking forward to having a conversation.

Josiah Renaudin: Absolutely. Before we dig into that conversation about your keynote, could you tell us a bit about your experience in the industry?

Mary Thorn: Yeah, I've been doing software test engineering for about twenty years. I've done agile software test engineering for about the past ten. I've crossed multiple domains, primarily HR as well as financial domains, and that's what Ipreo is: a financial technology company. A lot of work at startups, so I have a real passion for building software test practices from scratch.

Over the years, I've been in a position where I've come in for startups, built their teams from scratch, and then, a lot of times, they move from a waterfall environment to an agile environment. Because of that, I've seen a lot of different agile implementations and many different tooling options, and I've gained a lot of experience in transformation and agile testing. That's really what inspired the keynote we're talking about today.

Josiah Renaudin: When I first started doing these interviews, agile was always a point of conversation, about, like, "Does everyone need to do agile?" Automation has also been one. Automation at the start, when I first started talking to people, was more like, "Do you need automation, and where do you need automation?" But are we at a point where test automation isn't optional but actually required within a modern software team?

Mary Thorn: In a modern software team, whether agile or waterfall, it's essential. There's so much return on investment that you get from having things automated: quick feedback on broken code; letting your manual testers, or maybe your more domain-focused testers, do really good, in-depth testing rather than just making sure the happy path works; helping with the build and deploy process; and being able to quickly verify that what you're getting into your environments is in a good state because it passed the automation. There are just so many benefits, plus risk reduction. I always ask people, "Give me a couple of ways to reduce risk in agile," and the first thing they always say is "Automation," and that is primarily the goal. So, to your question: in my opinion, and everywhere I will be in the future, automation is required to take software practices to the next level.

Josiah Renaudin: I often hear about the balance between automated and manual testing and how you can't fully rely on one. You can't only be manual, and you can't only be automated. In your mind, is there a universal ratio that works for the majority of teams, or is it one of those situations where it just depends on the makeup of the organizations and the projects being worked on when it comes to your balance between automated and manual testing?

Mary Thorn: That's a million-dollar question. It's really interesting. This market keeps changing. Five or six years ago, you had the whole "Software testing's dead" thing that went through, which was like, "Manual testing's gone. Everybody has to be automated, so we're just going to hire developers to do all the work." Over my past twenty years, I've come to believe in the craft of software testing, and I do believe there is value that manual testers bring, for me, from a domain and user experience perspective, versus somebody who's just a straight automation person. Best case, you really want a person who can do both. There's this concept of full-stack developers: they can do front-end, back-end, or middle-tier work. With testers on a Scrum team, somebody who can write test cases, who understands the domain and use cases, and who can do the automation and code in C# or Java, that's maybe the unicorn.

That's what you want, but when I walk into organizations, it's rare that I find those people immediately. What I do is look at a Scrum team, look at the skill sets of those testers, and say, "This person's a twenty-year product expert who maybe is now being given to me from a testing perspective. Is it imperative that I upskill that person to learn C#? Is the value they bring by learning C# really going to help this Scrum team?" Sometimes it does. Sometimes it doesn't.

Most of the time, in that case, the domain knowledge, the value that person brings, is way more important. So I'll pair that person up with a heavy automation person on that Scrum team. Then we also have to make sure we get the developers helping with automation, because, typically, one automation person on a team is not enough to keep up with the volume, at least until you get to the point where you have a lot already automated. It's awesome when you find somebody who can do both, but a lot of times you just can't.

So it comes down to high-level C# training workshops, and then pairing testers with developers. You get to the point where the team understands the skill sets and the skill set gaps, and the team, that same Scrum team, learns where, "I don't want this product person automating," so the developers, knowing how important it is, will pick up those tasks. It's a million-dollar question. It'd be great to have somebody who can do all of it, but finding that person is rare these days. It's getting better. It's a lot easier than it was ten years ago, but there's still a lot we have to do from a leadership perspective to upskill these people.

One of my biggest frustrations when you use vendors is, they come in, and they just want to automate. They just want to be fed test cases. To me, that's not much value. I want those people to have thoughts and ideas on how to test. I want my automation people to be able to write their own test cases. I need them to understand the domain to make sure what they're automating is the right thing. What's weird on the flip side is, when you do get really good automation people or you hire a vendor, they don't want to do the manual testing part or even create their own test cases. To me, it's like, "How do you even automate something if you don't know what it's doing?" That's been an interesting dynamic over the past years, because people are heavily moving into that realm. Ideally, you want somebody who can do both, but again, you don't always get that opportunity.

Josiah Renaudin: Sticking with automation, how often do you go into a team, and you see a team with plenty of automation but no real strategy for how to actually make it all work?

Mary Thorn: Every client I've ever been to. It's one of those things that's interesting to me. I've come from little startups, and then I've worked at bigger companies. I worked at a large bank over the past couple of years, and they basically had the mandate, "You need to automate." That's the mandate: "Go automate." We had people standing up automation machines on their laptops, with some computer they could grab over in the corner, whatever we could do to get a grid together, and there was no CI/CD process behind it. Nobody else on the team knew the status of the tests. They just knew that Joe Smith over there had the whole test suite automated, but they never knew the results or what was broken, and nobody would consistently write up bugs. It was crazy to me that there was no long-term strategy other than "You're mandated to go automate," and that's what they did, with no real leadership around it.

It's been interesting. I come in, and my goal is what I call the peel-back-the-onion strategy. There are some basics that always make for good automation, and that means hooking into continuous integration and continuous deployment. If your tests aren't being run, either as a smoke test after a deployment or on a schedule where everybody can see the results, they have no value. That's been a small thing of interest, like, "Why are these not hooked up to Jenkins or TeamCity, or run through a grid? Why does it take five days to run a smoke test?" People hit that roadblock, and they just don't know what to do, so they just keep doing what they're doing.
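
To make that CI hookup concrete, here is a minimal sketch of the kind of test tagging that lets a Jenkins or TeamCity job run a fast smoke suite after every deployment. It assumes a Java/JUnit 5 stack (Thorn mentions C# and Java but names no framework), and the class and its check are hypothetical.

```java
// Hypothetical smoke check, tagged so CI can select and run it on every
// deployment. Assumes JUnit 5 (org.junit.jupiter) on the classpath.
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

@Tag("smoke")
class DeploymentSmokeTest {

    @Test
    void applicationRespondsToHealthCheck() {
        // Placeholder: a real check would ping the deployed environment.
        assertTrue(true, "replace with an HTTP health-check call");
    }
}
```

A CI job can then run only the tagged tests, for example `mvn test -Dgroups=smoke` with Maven Surefire, and publish the results where the whole team can see them.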

Josiah Renaudin: Speaking of things you do see often when you go into teams, when you see a team with unwieldy test suites and tests that are constantly failing, what are some of your first few steps to actually make sure things get back on track? Again, this can be case by case, but do you have a certain number of steps in your mind that work the majority of the time?

Mary Thorn: Yeah, that happens all the time. The first thing I do is talk to the business: "We're going to have to stop automation right now, because what we have automated isn't passing. It's providing us no value." Then, if I get them to the idea that "Yes, we're going to stop," we have to analyze what's wrong with these tests. When you think about sprints in agile, I need five points of testing technical debt to sit there and analyze, "Why do these tests fail all the time?" And I'm probably going to tell the business I need those five points for the next six sprints, or maybe twelve, depending on how bad it is and the number of tests, because from my prior experience, it's going to take us some time to stabilize these existing tests.

A lot of times, that's a hard sell. The definition of done says it's automated, but we've done something wrong. These tests have no value. Nobody even trusts the tests. What's interesting is that the first story in my keynote is all around this topic, from one of the companies I worked with previously. For me, once you get them to stop and realize the tests they've built are not that safety net and are not de-risking anything, then you can start to stabilize them, and you can get the team to start to care about the tests and keeping them green.

One of the techniques I typically use is what I call a green cop. Somebody every day is looking at the tests and looking at the failures: Why did they fail? Categorize each failure. It might be an environment issue; if you didn't categorize it, you wouldn't know that the environment was the problem and not the automation. It might be a coding issue, maybe from a certain developer who's always breaking things. Or is it a test issue? Maybe the test issue is that every third time we lose access to some environment. It's interesting. Categorize your failures. Have somebody monitor them daily.
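
As a rough illustration of the green-cop idea, here is a minimal sketch of that daily triage in Java. The three categories come straight from Thorn's description; the keyword rules and sample failure messages are hypothetical stand-ins for whatever signals a real team would use.

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Minimal "green cop" sketch: bucket each day's failures so patterns
// (environment vs. code vs. test) become visible over time.
public class GreenCop {

    enum Category { ENVIRONMENT, CODE, TEST }

    // Hypothetical keyword rules; real triage would use richer signals.
    static Category categorize(String failureMessage) {
        String m = failureMessage.toLowerCase();
        if (m.contains("timeout") || m.contains("connection refused")) {
            return Category.ENVIRONMENT;
        }
        if (m.contains("assertion")) {
            return Category.CODE;      // app behavior changed or broke
        }
        return Category.TEST;          // flaky locator, bad test data, etc.
    }

    public static void main(String[] args) {
        List<String> todaysFailures = List.of(   // hypothetical log lines
                "Connection refused: db-server:5432",
                "AssertionError: expected 200 but was 500",
                "Element #submit not found");

        Map<Category, Integer> tally = new EnumMap<>(Category.class);
        for (String failure : todaysFailures) {
            tally.merge(categorize(failure), 1, Integer::sum);
        }
        System.out.println(tally);   // {ENVIRONMENT=1, CODE=1, TEST=1}
    }
}
```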

Put up a heat map. One of the interesting things I've always implemented is this: a lot of times, tests seem to fail in random ways. You think it's random, but once you do a heat map, you realize there is a pattern to it. So another technique, alongside creating the time and effort to do the analysis work, is creating heat maps, analyzing what kinds of failures you have, and, over time, using the time you've asked the business for to address those failing tests and get them to a point of green.
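
A heat map here can be as simple as a test-by-run grid of results; the pattern Thorn describes, such as a test failing every third run, jumps out visually. This sketch prints such a grid from made-up data; the test names and results are purely illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal text "heat map": rows are tests, columns are daily runs.
// An 'X' marks a failure; the made-up data shows two distinct patterns.
public class FailureHeatMap {
    public static void main(String[] args) {
        Map<String, boolean[]> failures = new LinkedHashMap<>();
        failures.put("login_test   ", new boolean[]{false, false, true, false, false, true});
        failures.put("search_test  ", new boolean[]{false, false, false, false, false, false});
        failures.put("checkout_test", new boolean[]{true, true, true, true, true, true});

        for (Map.Entry<String, boolean[]> row : failures.entrySet()) {
            StringBuilder line = new StringBuilder(row.getKey()).append(" | ");
            for (boolean failed : row.getValue()) {
                line.append(failed ? "X " : ". ");
            }
            System.out.println(line);
        }
        // login_test fails every third run (suggesting a cyclic cause);
        // checkout_test always fails (broken test, or a broken feature).
    }
}
```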

Now, what's always interesting is, once you've spent that six to twelve sprints, there are always those three or four tests that are driving you crazy, that you just can't get green. Typically, what I do with those is take them and put them in another CI run. I call it the stable regression suite and the unstable regression suite. They're still running. We still know they're failing, but we can start to really look at them. Most people's reaction is to delete the tests. People say, "Mary, why don't you just delete the tests?" I'm like, "The test had value. If the test still has value, and we can't get it to pass more than every third run, we might have a bigger issue."
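
Here is a minimal sketch of how that split might look with JUnit 5 tags driving two separate CI runs. The tag names mirror Thorn's suite names; the test classes themselves are hypothetical.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tests that reliably pass stay in the stable suite, which acts as the
// trustworthy gate for builds and deployments.
@Tag("stable")
class InvoiceRegressionTest {
    @Test
    void totalsAreComputedCorrectly() { /* ... */ }
}

// The few intermittent failers move to a separate run: still executed,
// still watched, rather than deleted or @Disabled.
@Tag("unstable")
class ExportRegressionTest {
    @Test
    void largeExportCompletes() { /* ... */ }
}
```

Two CI jobs then run `mvn test -Dgroups=stable` and `mvn test -Dgroups=unstable`, so the stable suite stays green and trusted while the unstable one keeps generating evidence about what is really wrong.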

I learned that the hard way at a previous company. We had a set of tests that we ignored, and we ignored them for a while, and then all of a sudden the issue started happening in production. It ended up being a performance problem that we were hitting in our automation that nobody would look into. Everybody was like, "No, it's just your tests," and the developers would say, "It's your tests." It got to the point where, when it happened in production, somebody said, "We've never seen that." I'm like, "No, we have seen this issue, but everybody said it was our tests. Nobody would dig really deep into it. We could have found this six months ago." So I don't give up on the failing tests. A lot of people just give up. I've learned the hard way not to give up on these tests and, if they have value, to make them work.

Josiah Renaudin: Absolutely. We've established here that you need a certain level of automation. You need to balance it with the manual testing, and you also need some sort of strategy, but, in your experience, and this is, I guess, the biggest question, what type of value can proper test automation strategies actually bring to a team? What can that do to transform them or get them on the right track?

Mary Thorn: Let's start with the overall problem, and this is the return on investment part. Let's look at it at a high level. Whenever I walk into companies, two things potentially happen. One is they have what's called a hardening or regression sprint, and/or one is where the automation happens a sprint behind, okay? Let's go with the case where you're trying to automate within the sprint, but you can't, so they've created this week to two weeks of what they call hardening or regression that has to happen manually, because there is no automation before they get there. I often come in and say, "Guys, there's a ton of value here. If we can actually put some time and effort into automating those critical and high-risk areas, we can reduce that time."

My sell to the business, and a lot of times the sell comes from the additional-resources angle, is, "We spend, let's say, fifty-two weeks a year across ten Scrum teams doing manual regression. Would you not like fifty-two weeks a year back to do actual development?" The second part of that is, fifty-two weeks for one Scrum team is a million dollars; typically, a Scrum team runs a million dollars a year. "Would you like to save a million dollars?" Typically, that sell works, if you can bring it back to that point, if you can set it up that way: "You have two options, business. You either give me more resources, or you set aside 10 to 25 percent of each sprint to burn this down."
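
The arithmetic behind that pitch is simple enough to sketch. The figures below are the interview's round numbers (ten Scrum teams, about fifty-two team-weeks of manual regression per year, roughly a million dollars per Scrum team per year); the per-team average is a hypothetical fill-in, and none of it is real data.

```java
// Back-of-the-envelope version of the ROI pitch, for illustration only.
public class RegressionCost {
    public static void main(String[] args) {
        int teams = 10;
        double regressionWeeksPerTeamPerYear = 5.2; // hypothetical average
        double costPerTeamYear = 1_000_000.0;       // "a Scrum team runs a million dollars"

        double teamWeeksSpent = teams * regressionWeeksPerTeamPerYear; // 52 team-weeks
        double teamYearsSpent = teamWeeksSpent / 52.0;                 // = 1 full team-year
        double dollarsSpent = teamYearsSpent * costPerTeamYear;        // = $1,000,000

        System.out.printf("Manual regression: %.0f team-weeks/year = $%,.0f/year%n",
                teamWeeksSpent, dollarsSpent);
    }
}
```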

The value that brings back to you is that you deliver higher-quality software, because you are getting constant feedback. You're fixing bugs earlier. You'll have higher velocities, and you get back those fifty-two weeks and that million dollars. From a team perspective, they get confidence in being able to make changes and get feedback. They're less risk averse. They'll take more risks, because they know they have that safety net. That's a great feeling for a developer, to be able to refactor code knowing they have tests that will catch whatever issues come up.

Josiah Renaudin: You convinced me at the million-dollars part. Hopefully, that's enough that people are like, "Yeah, deal, you got me." Again, Mary, thank you very much. I really appreciate you taking the time. I'm looking forward to actually hearing your full keynote at STARWEST in Anaheim.

Mary Thorn: I really appreciate it. I look forward to it. It's going to be a great topic. I do a lot of storytelling from actual experiences, and hopefully people have shared a lot of those same experiences and can relate to it.

About the author

Chief Storyteller of The Three Pillars of Agile Testing and Quality, Mary Thorn is Director of Software Test Engineering at Ipreo in Raleigh, NC. Mary has a broad testing background that spans automation, data warehouses, and web-based systems in a wide variety of technologies and testing techniques. During her more than nineteen years of experience in healthcare, HR, financial, and SaaS-based products, Mary has held manager- and contributor-level positions in software development organizations. A strong leader in agile testing methodologies, Mary has direct experience leading teams through agile adoption and beyond.
