Five Patterns for Test Automation: STARWEST 2015 Interview with Matt Griscom

[interview]
Summary:

In this interview, TechWell speaks with Matt Griscom, a software professional with twenty years of experience creating software, including innovative test automation. At STARWEST 2015, he gave the presentation "MetaAutomation: Five Patterns for Test Automation."

Jennifer Bonine: Hello, I can't believe it's already time for our virtual conference, and we're back at STARWEST. We're here with our first virtual interview, so I've got Matt Griscom with me. Hi, Matt.

Matt Griscom: Hey, it's good to meet you, Jennifer.

Jennifer Bonine: Thanks for being here. Why don't you tell us, Matt, for those out there watching who maybe haven't had a chance to meet you or attend one of your talks, just a little bit about yourself, your background and how you got into testing, and how you ended up here at this conference?

Matt Griscom: Okay, oh, I could go on for hours. Hello, my name is Matt Griscom, and basically, I have two degrees in physics. I've done some teaching, I've done software development, and I ended up in testing, especially test automation, automating verifications. I became frustrated by the state of the art and what people were asking me to do, so ultimately I ended up inventing MetaAutomation, which is a pattern language.

Jennifer Bonine: Yup. Now, so do you have books on this MetaAutomation framework that you built?

Matt Griscom: I do.

Jennifer Bonine: Okay.

Matt Griscom: I have a book out, published in December, and it's also in the bookstore here at the conference.

Jennifer Bonine: Okay, so it's fairly new that the book's been available to the public, but you have a blog on MetaAutomation, correct?

Matt Griscom: I do, yeah.

Jennifer Bonine: If people wanted to learn more about MetaAutomation, maybe just give us a high-level—I know it's a pattern—maybe some of the differences of that to what they might be more familiar with, or some of the traditional uses or frameworks that we have for automation, just to give them an idea of what that is.

Matt Griscom: MetaAutomation is about making people more productive and helping them achieve greater goals with automated verifications around their software. Part of that is the first realization, which is a little bit hard-hitting: test automation is actually a contradiction in terms, because it's not the test that you're automating, and you're not doing industrial automation. You don't care about the product output; you're just making the product do stuff. What you do care about is the quality measurements, and that's what MetaAutomation has you focus on. That's the topic of the first pattern in the language, atomic check. It tells you how to focus on the business requirements of the software product and get a measurement of that, some information about the product, and put that in pure data, but maybe that comes a little bit later. There's a lot to it.

Some of the things I advocate, I have noticed that people are starting to do on their own, for example, having their checks be short and simple. A check is simply a type of test, one where all the verifications are pre-programmed. We're not doing exploratory testing, and checks aren't smart like a human tester is smart, so human testers are always indispensable. Calling them checks makes it clear that we are measuring something different about the quality.
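To make "atomic check" concrete, here is a minimal sketch in Python. It is not Griscom's actual sample code; the function name, artifact structure, and example steps are invented for illustration. The idea it shows is a short, self-contained check that verifies one business requirement and records every step as pure data:

```python
import json
import time

def run_atomic_check(name, steps):
    """Run one short, self-contained check and emit the result as pure data.

    'steps' is an ordered list of (label, action) pairs that together
    verify a single business requirement. Hypothetical structure, for
    illustration only.
    """
    artifact = {"check": name, "steps": []}
    failed = False
    for label, action in steps:
        if failed:
            # A step after a failure never ran: blocked, not failed.
            artifact["steps"].append({"step": label, "status": "blocked"})
            continue
        record = {"step": label}
        start = time.monotonic()
        try:
            action()
            record["status"] = "passed"
        except Exception as exc:
            record["status"] = "failed"
            record["detail"] = str(exc)
            failed = True
        record["ms"] = round((time.monotonic() - start) * 1000, 1)
        artifact["steps"].append(record)
    return artifact

# Hypothetical no-op actions stand in for real product-driver calls.
result = run_atomic_check("discount applied at checkout", [
    ("add item to cart", lambda: None),
    ("apply discount code", lambda: None),
    ("verify total is discounted", lambda: None),
])
print(json.dumps(result, indent=2))
```

Because the result is data rather than a log to read, later analysis (performance, retry decisions) can consume it without a person re-running anything.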

Jennifer Bonine: What have you seen in terms of ... Obviously, you mentioned a lot of people out there, when they hear automation, have an opinion, right? Either they love it and think it's a great idea, or, "You know what, I'm not so sure. I think manual testing, you can't get away from it and we need to have it." I heard you say something really interesting, which was that human testers are invaluable; you need that skill set still. How do you see MetaAutomation and that traditional human testing pairing together for people and working well?

Matt Griscom: Right, well, that's a good question. It's not just a pairing, though; there are many other types of testing as well, but the manual tester will always be indispensable, because a human has the smarts to notice stuff and characterize what's going on. You always need a human to look at the layout of a webpage. Did this vocalization work correctly? Is that timing really acceptable? Have we covered all the bases in terms of the UX? Or, on a webpage, is the control displayed at all? Automated verification might not even notice stuff like that. You always need manual testing.

The automated testing, in the simplest form, is what I focus on. It does verification of your business requirements. And performance comes for free in my implementation, which is nice, because when you're verifying the business behaviors of the product and seeing, okay, step by step, does this work, that's basically your measurement. Simple as possible, straight to the point, and fast. Now, there are lots of these checks. Because they're atomic, they are as short as they can be. Although there may be functional dependencies between them, they can still operate independently on different machines, and so the whole thing scales.

Jennifer Bonine: You could run those independently on single workstations, or you could have them in a build where they're running in a nightly cycle or a daily cycle.

Matt Griscom: Yes, even on a bunch of virtual machines in the cloud. You can allocate more resources and your checks will run faster, and that enables you to run more of them. That goes along with the manual testing, and the other types of testing are important, too. Of course, you have perf, which in my case with MetaAutomation you handle by simply doing another round of analysis on the artifacts, because all the data is there waiting for you, to measure in terms of how long things took.
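Because atomic checks are independent, fanning them out is straightforward. Here is a hedged sketch reusing the hypothetical run_atomic_check from the earlier example; it uses threads for simplicity, where a real system might use separate processes, machines, or cloud VMs:

```python
from concurrent.futures import ThreadPoolExecutor

def run_all_checks(all_checks, workers=8):
    """Run independent atomic checks concurrently and collect their artifacts.

    'all_checks' is a hypothetical list of (name, steps) pairs; adding
    workers (or machines) makes the whole run finish sooner.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_atomic_check, name, steps)
                   for name, steps in all_checks]
        return [future.result() for future in futures]
```

And since each step record already carries timing data (the "ms" field in the sketch), a performance pass is just another query over stored artifacts, which is the sense in which performance measurement comes along for free.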

Then there's stress testing and load testing; that is a different discipline and another important part of testing. Then there is security testing, and there are people who are experts in that stuff. I think it's very important and great that they do that. That's another area, and there are many others as well.

Jennifer Bonine: If someone was interested and said, "You know, this sounds interesting to me, and I think I need to look into it, in how to do the MetaAutomation ..." So obviously you have a blog that they can go to in order to learn more, and what is the name of the blog, just so they know?

Matt Griscom: The blog is MetaAutomation, and you can get to it from the website metaautomation.net; there's a link to the blog there, which is on BlogSpot. The site also has resources like downloads for samples that will run. There are two samples, actually. One of them is delivered as a zip file with all the source there; it's platform-independent. The second sample is much more complex and powerful in its implementation, and mostly platform-independent.

I used to use WCF services, but the cool thing about the samples is that they show what can be done. And actually, there are two resources that I would recommend for people who want to learn more. One is my book, which you can find on Amazon; I link to that from the metaautomation.net website.

Jennifer Bonine: Dot net, and they can get to the link on Amazon for the book.

Matt Griscom: The link on Amazon, yes, and that's out.

Jennifer Bonine: Perfect.

Matt Griscom: There's my blog. And the people who are the early adopters of this technology will be very tactical: if you're really interested in it, you'll want to get the samples and try running those. Sample two comes with a whole bunch of examples of what it does, including real-world (well, fake, but realistic) tests, and they're distributed across processes. You can distribute them to any number of machines. The check steps in the artifacts are self-documenting. You get an artifact of pure data, which tells you which steps passed, failed, or were blocked.

The steps are hierarchical in nature, which solves one of the problems of keyword-driven testing, which is that you don't quite know exactly what the keyword does. I've been there. I've been in the trenches; part of the reason I can do this is that I struggled with these problems and came up with some very innovative solutions, for example, getting Robot Framework to scale, which I did years ago, and also just trying to make a failure of an automated check actionable.
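To picture what hierarchical, self-documenting steps buy you over a flat keyword, here is a hypothetical artifact (the structure is invented for illustration; the real samples may differ). Each parent step carries its child steps, so a reader can see exactly what a high-level step such as "log in" actually did:

```python
# A hypothetical hierarchical step artifact, as pure data. Unlike an
# opaque keyword, each parent step documents its child steps down to
# the leaf actions, along with status and timing.
artifact = {
    "check": "user can log in",
    "steps": [
        {"step": "log in", "status": "passed", "ms": 812.4, "substeps": [
            {"step": "open login page", "status": "passed", "ms": 310.2},
            {"step": "enter credentials", "status": "passed", "ms": 95.1},
            {"step": "submit and wait for home page", "status": "passed", "ms": 407.1},
        ]},
        {"step": "verify username shown in header", "status": "passed", "ms": 44.0},
    ],
}
```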

Imagine you're running automation and you have a failure. Well, the first thing you do, typically, is bounce it back to the QA team; this is a typical pattern. Then they have to reproduce the problem, and then they have to figure out what's going on. But imagine that the reproduce step did not have to happen at all, because all the information is there for you. Everything you need to know, in theory, is there; you don't have to reproduce the failure. I don't think you can get to 100 percent, but you can get much closer, with a much more detailed way of reporting the data of what happened for that check.

Jennifer Bonine: Yeah, absolutely.

Matt Griscom: We're still just talking about atomic check, which is the least dependent pattern. With that data you can have the parallel run: you can write a whole bunch of checks. Actually, that pattern already exists, but you need short, fast checks to run it well. Then you have the precondition pool, which takes what setup it can and moves it out of line, to run the checks faster again. And smart retry: if you run a check and it fails, it will automatically be run again, and no people are involved at this point. Nobody needs to have their workflow interrupted. Just run the check again.

Is it the same thing, the same failure at the same point for the same reason? Oh, then we've just reproduced the failure. We report that, and it gets kicked up, probably to a person at this point, as in, "Here's a reproduced failure." That's highly actionable. Or, if it works the second time, then, well, we just call it a pass, but all the data is there.
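That smart retry flow can be sketched roughly as follows, building on the hypothetical run_atomic_check from earlier. These helpers are illustrative assumptions, not the actual MetaAutomation implementation:

```python
def failure_signature(artifact):
    """Where and why the check failed, or None if it passed (hypothetical helper)."""
    for record in artifact["steps"]:
        if record["status"] == "failed":
            return (record["step"], record.get("detail"))
    return None

def smart_retry(name, steps):
    """Retry a failed check once, with no human in the loop.

    Same failure at the same step for the same reason: report a
    reproduced, highly actionable failure. Pass on retry: report a
    pass, but keep both artifacts so no data is lost.
    """
    first = run_atomic_check(name, steps)
    signature = failure_signature(first)
    if signature is None:
        return {"verdict": "pass", "artifacts": [first]}
    second = run_atomic_check(name, steps)
    if failure_signature(second) == signature:
        return {"verdict": "reproduced failure", "artifacts": [first, second]}
    return {"verdict": "pass after one failure", "artifacts": [first, second]}
```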

Jennifer Bonine: Perfect, cool. And you know, it's amazing that we've already run out of time; it goes so fast. And I know some of you out there are probably saying, "I want to hear more on MetaAutomation; I think it's an awesome, amazing new concept." I'm so glad we got to introduce it to you. If they want to contact you, Matt, and learn more, I know you've got metaautomation.net with the BlogSpot blog and the place to get the book, but can they reach out to you? Is there a way to ask you a question from there if they want some feedback or information from you as well?

Matt Griscom: You can post questions on my blog or on metaautomation.net. There is a contact page.

Jennifer Bonine: The contact page, they will be able to find you?

Matt Griscom: Yes. They can find me that way.

Jennifer Bonine: Great. Thanks, Matt, so much for being here with us, and thanks to the audience out there. I appreciate it. Thanks, Matt.

Matt Griscom: Thank you, Jennifer.

About the author

Matt Griscom has twenty years of experience creating software, including innovative test automation, harnesses, and frameworks. Two degrees in physics primed him to seek the big picture in any setting. This comprehensive vision and love of solving difficult and important problems led him to create the MetaAutomation pattern language to enable more effective software automation for quality measurement. He started his MetaAutomation blog in 2011, but in 2014 Matt published the ground-breaking and definitive description of the MetaAutomation pattern language in book form. Matt loves helping people solve problems with computers and IT.
