How to Avoid Mistakes in Test Automation: An Interview with Dorothy Graham

[interview]
Summary:

In this interview, Dorothy Graham, a software test consultant and speaker at STAREAST, covers her upcoming keynote. She goes into detail about the differences between manual and automated testing and explains some of the key blunders testers often run into.

Josiah Renaudin: Today I'm joined by Dorothy Graham, a software test consultant and speaker at STAREAST. Dorothy, thank you very much for joining us.

Dorothy Graham: Thank you for the interview.

Josiah Renaudin: No problem at all. First, could you tell us just a bit about your experience in the industry?

Dorothy Graham: Well, how long have you got? I've been in the industry for quite a long time. I got into testing because in my first job, after I left university with a master's degree in math, I was put into a test group and my job was to write two testing tools. This was at Bell Labs in the States; I'm originally from Michigan.

In the meantime, I had met my husband, who's British, so we moved to the UK and I worked as a developer for seven years on police command and control systems for a company called Ferranti over here. Then I decided to get into training and later went independent, and I have always specialized in testing, I think because my first job was in testing; even when I worked as a developer and then team leader, testing was important to me, much to the surprise of some of my colleagues at the time.

That's basically how I got into testing and test automation. As an independent, then I worked for the National Computing Center, which was a government body to promote the use of computers way back in the '70s and '80s, and I wrote for some publications to do with testing tools back then. I wrote the first testing tools guides that were published in the UK, and then I found out about one that SQE published and it was very interesting getting together with the author of that one.

Josiah Renaudin: I'm a big Ohio State fan, so I'll overlook the Michigan thing, but I do have a question relating to ... You said that you've done a lot of testing in the UK. Have you noticed a big difference between testing in the UK and testing in the US? Is it a different market or are they very similar?

Dorothy Graham: There are more similarities than differences. I mean, when I first started out, I remember vividly the first testing conference I ever went to, which was before SQE started the STAR Conferences, by the way. It was in 1991, and we had recently formed a group in the UK called the Special Interest Group in Software Testing, where people from the UK got together about four times a year to talk about testing. It was wonderful to find other people who liked testing; there was no social media, of course, at that time.

Josiah Renaudin: Yes.

Dorothy Graham: Coming to the conference—which was run by Dave and Bill from SQE, but was promoted by the US Professional Development Institute, and then the STAR Conferences started the year after that—it was just a revelation. The thing that struck me the most was that, you know, they were suffering from the same problems that we were; we aren't that different. I expected, I think, things to be a lot more advanced over in the States, but it was very much the same.

Josiah Renaudin: Your upcoming keynote focuses on the blunders often found within test automation. Is it fair to say that this topic was born out of experience? How often have you seen people make very bad moves when it comes to test automation?

Dorothy Graham: I'll confess to you that one of my frustrations is to see people making the same mistakes with test automation; new people are making the same mistakes as other people did. I mean, my colleague Mark Fewster and I wrote a book called Software Test Automation, published more than ten years ago, which gives people advice that, if they follow it, will help them do automation much better. Many people have told us that it's helped them a lot, and yet people don't seem to have heard of it.

Perhaps it's partly because it was published more than ten years ago, but people don't seem to realize that automation is a skill in itself. I have seen people go badly wrong with automation. You often see it in discussions on LinkedIn; there are sometimes some very good discussions on the LinkedIn discussion boards, but a lot of the time you have people coming in and making the same mistakes that have been made so many times before.

Josiah Renaudin: Can you talk about some of the core differences between manual testing and automated testing?

Dorothy Graham: Oh yes, that would be an interesting one to talk about. In fact, the very first blunder that I'm going to cover is the one I call "testing tools test," and the blunder is thinking that tools actually do testing. Manual testing and automated testing are very different. For example, manual testing obviously takes a lot of human time. With automated testing, some aspects of the testing can be done much faster.

When I say test automation, by the way, I'm focusing mainly on functional test automation; system test automation. The human time element, of course, is one of the great benefits of automation, but manual testing involves thinking. Michael Bolton makes a distinction between testing and checking: anything that can be automated he calls checking, but if you're testing, you're actually thinking. You may well go off piste; you may well have some intuition and follow a lead somewhere. You're going to be creative about trying to explore the product, trying to find defects.

A testing tool doesn't do any of that. All a test execution tool does is run stuff; that's all it does. A test execution tool has zero intelligence. The least intelligent tester you'll ever have on your team is the tool, so that's a big difference. I think this misunderstanding is the most important blunder, which is why I start with it, and it's the one that underlies some of the others as well.

Then just a couple of other differences between manual and automated testing: One is that when you're testing manually, as you go along, you're working your way through things and something goes wrong and you suddenly say, "Oh! I think I found a bug here." Now at that point in time, you have in your head exactly what's happened over the last, say, ten minutes. You know where you've been. You know the context of that bug. But if that test has been run by a tool, what happens? You get "Ping!" An email comes in: Oh! Something failed. Test thirty-seven failed. Okay, what was test thirty-seven doing? You have no idea what's going on, and you have to build that context before you can start looking at it and seeing whether it actually is a bug or isn't.

This is something a lot of people don't take into account when they're thinking about automation: failure analysis can take a lot longer with automated testing than with manual testing. The other thing is that with manual testing, if the tests are changing, whether you're working from a script or just doing exploratory testing, the maintenance of the tests is minimal and sometimes almost unconscious, whereas automated tests, because they are fixed, do have to be maintained.

This is one of the things that often puts the tools back on the shelf. A good testware architecture, a good framework, is critical to getting easy-to-maintain automation.
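To make the idea of a layered testware architecture concrete, here is a minimal sketch; it is not taken from Graham's book, the page, locator IDs, and the driver fixture are hypothetical, and a Selenium-style WebDriver is assumed. The tests express intent only, while a single action layer owns every locator, so a GUI change is repaired in one place.

```python
from selenium.webdriver.common.by import By

class LoginActions:
    """Action layer: the only code that knows locators and tool calls."""

    def __init__(self, driver):
        self.driver = driver  # assumed: a Selenium-style WebDriver fixture

    def log_in(self, username, password):
        # If the GUI changes, only these locators need updating.
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

    def welcome_text(self):
        return self.driver.find_element(By.ID, "welcome-banner").text

def test_valid_login(driver):
    """Test layer: business intent only; no locators or tool details."""
    login = LoginActions(driver)
    login.log_in("alice", "right-password")
    assert "Welcome" in login.welcome_text()
```

The maintenance point is the payoff: when the login screen changes, the fix lives in one action class rather than in every test that logs in.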

Josiah Renaudin: Along with the blunders, you list a handful of key myths during the presentation. I'd like to cover a few of them during this interview. You mentioned the stable application myth. Can you talk about the meaning of this blunder and why it's so prevalent?

Dorothy Graham: Yes, there seems to be a feeling that you can't actually start doing any automation until everything has settled down and you know exactly what the GUI is going to be and so on. This myth comes up most often when you're testing from the GUI level. By the way, that's the area where you should probably have the least amount of automated testing; it's very good to do the testing at the lower levels: component level, unit level, and also the API level. But at the GUI level, people say, "Oh, no, we can't start the automation because the GUI might change."

Well, that's like taking us back thirty years in time, when people used to say, "We can't start testing until the code has been delivered," and that's nonsense. We now have approaches that say, look, we think about the testing early and that makes our development better: test-driven design.

We should maybe have something like automation-driven testing. You can do a lot of thinking about what tests you want to automate, about structuring them, about getting the architecture, before you have to fill in all the tiny details about where something is going to be on the screen.

In fact, you should use this as an opportunity to make the automation resilient against the changes that will happen most frequently. Any automation which doesn't cope well with instability in this day and age is going to fail.
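As a rough sketch of what structuring the tests before the GUI settles might look like (this illustrates the general keyword-driven idea with invented keyword names; it is not a prescription from the keynote), tests can be written as high-level steps whose bindings to the GUI are filled in later:

```python
# Tests expressed as high-level keywords long before the GUI is stable.
# Each keyword is bound to an implementation later; until then the runner
# simply reports the step as not yet automatable.

TEST_OPEN_ACCOUNT = [
    ("create_customer", {"name": "Alice"}),
    ("open_account",    {"account_type": "savings", "deposit": 100}),
    ("check_balance",   {"expected": 100}),
]

KEYWORDS = {}  # bindings added once the GUI (or API) details are known

def run(test):
    for keyword, args in test:
        action = KEYWORDS.get(keyword)
        if action is None:
            print(f"SKIP: no binding yet for '{keyword}'")
            continue
        action(**args)

if __name__ == "__main__":
    run(TEST_OPEN_ACCOUNT)
```

The structural and architectural work happens early; the brittle screen details arrive last and stay isolated behind the keyword bindings.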

Josiah Renaudin: Next is inside-the-box thinking, where you automate only the obvious test execution. Branching off of that, what are some nonobvious tests that people should consider automating but really often don't?

Dorothy Graham: Yes, a lot of the time when people have a test execution tool, they use it only to do the execution of tests, which is understandable because that's what the tools are called. But in fact, if you have a tool that you're using only to execute your tests and compare your results, what about setting up the prerequisites for the test? What about populating a database? What about the things that have to be done before the tests can start? What about cleaning up the things that have to be done after the test is finished?

If you don't automate these other things around the test execution itself, then you're going to have automated tests embedded in a manual process. You don't want to have automated tests, you want to have automated testing, and the more things around the execution of tests that you can automate, the better. In fact, if you automate some of these things around the execution, you can sometimes use that support with manual testing in the middle, so you have automated support for manual testing.
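Here is a small sketch of what automating around the execution can look like, using pytest fixtures; the in-memory database helpers are hypothetical stand-ins for real environment setup. Preparation and cleanup become part of the automated run rather than a manual chore, and the same fixture can prepare an environment for manual exploratory testing too.

```python
import pytest

# Hypothetical in-memory stand-ins for real environment setup; the point is
# that preparation and cleanup are automated along with the execution.
def create_test_database():
    return {"customers": []}

def load_customers(db):
    db["customers"].append({"name": "Alice", "city": "Ann Arbor"})

def drop_test_database(db):
    db.clear()

@pytest.fixture
def populated_db():
    """Automate what happens before and after the test, not just the test."""
    db = create_test_database()
    load_customers(db)       # prerequisites: seed the data the test needs
    yield db                 # the test itself runs here
    drop_test_database(db)   # cleanup runs even if the test fails

def test_search_by_city(populated_db):
    matches = [c for c in populated_db["customers"] if c["city"] == "Ann Arbor"]
    assert matches, "expected at least one customer in the seeded data"
```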

Josiah Renaudin: Now, I don't want to spoil all of the blunders and different myths that you're going to be talking about during your keynote, but of the other blunders on the list, what do you find to be not only the most common, but the most damaging to the process?

Dorothy Graham: Well, I think the one that's most damaging is thinking that the tools actually do testing, which is the first one. The tools don't test; they just do what they're told, basically. Another one that I think is very important is one I've called "Who Needs GPS?" What I mean here is, if you want to go somewhere and you've got a GPS system, you have to tell it where you want to go; otherwise, if you don't know where you're going, how can you use a GPS?

It's important in automation that you know where you're going, that you know what the objectives for the automation are, and a lot of people confuse the objectives for test automation with the objectives for testing. The most common mistake I see is that people say, "We're going to automate our regression tests and that's going to enable us to find lots more bugs."

Now, there are two things wrong with that. First of all, how likely is it for regression tests to find lots of bugs? Regression tests are tests that have been run before and passed, so unless something has changed, nothing's going to change; those tests will still pass. Why would you expect to find more bugs when you haven't changed anything? Automating regression tests is the least likely way of finding bugs.

If people say we want to find lots of bugs and that's the reason for automating, you could well be jeopardizing the success of your automation, because really the purpose of automating is to make testing more efficient. Finding lots of bugs is about making testing more effective; finding bugs is a goal for testing, not for automation, and confusing the two often leads to problems.

Josiah Renaudin: More than anything, what message do you want to leave with your audience at STAREAST?

Dorothy Graham: I think, just coming back to your previous question, one other thing I'd like to touch on in the blunders is why considering automation as a project is not right, but not considering automation as a project is also not right, and I'll expand on that in the talk.

I think probably the most important thought is that tools don't replace testers; they support them. The testing tools don't test, they help testers. Get the computer to do what computers do best and get people to do what people do best.

Josiah Renaudin: Fantastic. Well, I very much appreciate your time today, Dorothy. It was very interesting talking to you and I'm looking forward to hearing more from you at STAREAST.

Dorothy Graham: Thank you very much! I'm looking forward to being there. I'm also doing two tutorials: one on test automation from a management perspective, and the other one, "On Testing," where we're going to look at the past, present, and future of testing. In that one, I'm telling people a bit about where I came from in my testing journey and what I learned along the way, which I hope will be helpful to people.

Josiah Renaudin: Then you're going to be having a busy week, so I'll make sure everyone is nice to you while you're there.

Dorothy Graham: Thank you very much.

Josiah Renaudin: Have a great day.

Dorothy Graham: See you there.

Dorothy Graham

In software testing for over forty years, Dorothy Graham is coauthor of four books—Software Inspection, Software Test Automation, Foundations of Software Testing, and Experiences of Test Automation—and is currently working with Seretta Gamba on a new book on a test automation patterns wiki. A popular and entertaining speaker at conferences and seminars worldwide, Dot has attended STAR conferences since the first one in 1992. She was a founding member of the ISEB Software Testing Board and a member of the working party that developed the ISTQB Foundation Syllabus. Dot was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012. Learn more about Dot at DorothyGraham.co.uk.
