The Human Aspect of Testing: An Interview with SmartBear's Michael Punsky and Scott Barber

Summary:

Michael Punsky and Scott Barber discuss the importance of the human element in the world of software testing—no matter how advanced testing tools become. Learn why continuous testing is crucial to software's success and why testing should never be looked at as a single event.

Noel: Hello, this is Noel Wurst with TechWell, and I’m speaking with two people at SmartBear today to discuss the current state of software testing in relation to focusing on people versus tools, automation, and all the things that are driving the software testing world. I have Michael Punsky, who is the product manager for performance at SmartBear, and Scott Barber, who is the chief performance evangelist. How are you doing this morning, Michael and Scott?

Michael: Doing well.

Noel: Great.

Scott: Good, thanks.

Noel: Good to hear. I was looking at the website, reading more about SmartBear, and noticed the “test early and test often” mantra. It was interesting to me that it seems like such a commonsense thing to do, yet I’m willing to bet organizations still need to be reminded all the time to do it. I wanted to know why that is, and why it isn’t something everyone does with ease, given what a great idea it is.

Michael: I think, from my experience, the reason a lot more companies don’t follow the “test early, test often” routine is that they’ve always thought of testing as an event, something you build up to. In reality, it’s better to test continuously throughout the development process, from the very first bit of code you write until the end, when everything is put together and ready to ship.

If you start testing in the beginning and test the whole way through, by the end you aren’t asking questions you don’t already have a really good idea how to answer. What you may find is that something that performed well as a component suddenly doesn’t play well with the other components in a certain area, but overall you have a good idea of where you stand. When you are building a car and you put quality parts into it, you can assume the car is going to perform well. There are always going to be circumstances where it won’t, but in general, that’s a much better methodology than saving everything up to the last minute, running your test, and then finding you have a problem and having to backtrack to figure out where that problem is.

If you start from the very beginning and test all the way through, you are going to have a really good idea of what your problem is, you are going to detect it at the earliest possible moment, and you’ll be able to do the remediation as necessary. The problem with remediation, if you’ve already built a whole framework on something that has a fault in it, is going back and refactoring the whole thing after the fact, and that’s the last thing you want to do because of the time involved.

Scott: I really like the car model because, in software, we tend to think testing is that thing you do to make sure your release candidate is okay. But think about the automotive industry: where does the car start, right? You start by designing. You start in a design shop with a bunch of engineers who, with everything they do, ask this question: “I wonder if this will work,” or “I wonder if this will be cheaper.” They are testing all the way through, and eventually they build a prototype, right, and send it to the Detroit Auto Show or something, and that’s not production grade, right? That hasn’t been approved as road-safe or put through all of those crash safety tests, those kinds of things. It’s not until after they’ve decided “this is the car we are going to sell” that they do those types of tests, and what we tend to think of in software is those final tests, like the crash safety tests. And you are not going to do a crash safety test on every component. Could I do that on a carburetor? It doesn’t make any sense, but that doesn’t mean I’m not going to test my new design for a carburetor starting in the beginning.

Noel: We were talking before the interview, and you said that testing shouldn’t be looked at as an event; it should be part of everything that you do. I think that makes a really good case for describing the worth of software testing to a company that assumes testing is just going to happen at the end: there will be not just work saved, but money saved as well. It’s like you’re using more testing to save money.

Scott: It’s a pay me now, pay me later thing.

Noel: Right.

Scott: I think it was Crosby—is he the guy?—a big quality advocate who claimed that quality is free. It doesn’t mean you’re not investing in it up front; it means you make your money back in the end. At the fast pace we work at today, when everybody is trying to beat their competitors to market and folks are rolling things out on a daily basis, a lot of times we defer that investment. In some cases that’s a better decision than in others, but the point is to make that decision actively.

Michael: One more point on that is you are not doing every type of test from the very beginning. There are different types of load testing, different types of performance testing for the various stages that you are going through in your development process. There are smoke tests, there are stress tests, and there are tests where you definitely want to go and crash the server and see how it recovers.

Scott: They’re my favorite.

Michael: We are a destructive bunch.

There are tests that you will run for long periods of time to detect memory leaks. All of these tests have their place in the development process, and at every step along the way there are different tests that you will run against different scenarios with the software. There is always something you can be doing, and each one of them answers a slightly different question. Then the final test you’ll run is the one that says, “Is this production-ready?”
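
To make that long-running “soak” idea concrete, here is a minimal sketch; the endpoint URL, duration, and sampling interval are all invented for illustration. It simply keeps exercising an endpoint for hours and compares early and late response times, since steadily climbing latency is a common outward symptom of a leak. A real soak test would also watch server-side memory directly.

```python
import time
import urllib.request

URL = "http://localhost:8080/health"  # hypothetical endpoint under test
DURATION_S = 4 * 60 * 60              # soak for four hours
INTERVAL_S = 5                        # seconds between requests

start = time.time()
samples = []
while time.time() - start < DURATION_S:
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    samples.append(time.perf_counter() - t0)
    time.sleep(INTERVAL_S)

# Compare the first and last ten minutes of samples for upward drift.
window = 600 // INTERVAL_S
head, tail = samples[:window], samples[-window:]
print(f"early avg: {sum(head)/len(head):.4f}s, late avg: {sum(tail)/len(tail):.4f}s")
```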

Noel: Right.

Michael: Then again, that’s not final, because once it goes into production, you have to find out whether it starts degrading in performance over time. If an event is coming up that could take your server down, how much overhead do you have, and ...

Scott: Or you even find out that your users are doing something different on your site than you expected.

Michael: Exactly.

Scott: Right, that’s a type of test, too.

Noel: Right.

Scott: You certainly can’t do that early.

Michael: The more we learn, the more we find out how little we know about what a user actually intends to do. Users always use a tool in a slightly different manner than you would ever suspect, and it becomes very valuable to them in that way, but you never expected it, so you didn’t test for it, and you need to go back and test after the fact once you find out about that behavior.

Scott: Doggone human nature.

Noel: That’s something I hadn’t really thought of before. I’ve heard people discuss the need to know which tests to run and which not to, and I’ve interviewed testers who talked about how you don’t have to test everything, but I don’t feel like I’ve heard as strong a case made for not just which tests to run, but when to actually run them. Once you choose the tests you’re really going to need, it makes a really big difference to know exactly at what points to run them, and when not to.

Scott: I always encourage testers to start not from “What test should I run?” but from “What information am I trying to learn right now?” or “What information can I learn that’s valuable right now?” and then design your tests based on the question. My son, who is fourteen, a freshman in high school—we were talking in the car literally last week—is learning the scientific method. You have a question, you form a hypothesis, right, and then you do an experiment to prove or disprove your hypothesis. Take that same thinking: when you say, “Hey, I have a question. Does this software do X, or does it not do Y?” all of a sudden the tests you’ll want to execute to answer those questions become a lot clearer, and it makes it a little easier to figure out when the right time is to start doing them.

Michael: It’s as if you have a toolbox, and in that toolbox you have fifteen different types of tests that you can run. What it comes down to is that a good tester is going to be able to figure out when to use each of those tools, and will know what the real question to be answered is at every step of the way.

Noel: That’s another great analogy. This is making me think back to a lot of other interviews I’ve done with testers. It seems like most testers are advocating for the human side of testing, while there is a big focus right now on tools specifically. Testers seem to have been given the task of proving that it’s not just about the tools; it’s about the human side as well. Everything you’ve both brought up today involves a lot of human input, which is not something a tool is just going to do for you. But I was curious, as far as giving tools the focus they also deserve: is there anything about a tool that allows it to incorporate a lot of that human input as well? A tool that doesn’t pretend to be a one-stop solution for everything, but acknowledges the need for the human element?

Scott: I’ll tell you what: my father is a retired shop teacher, industrial arts. I grew up using tools. Not for test automation, obviously, but even at home now, in his garage, he’s got everything, even glass-blowing equipment; he’s done amazing things, and there are all kinds of tools everywhere. But here is the thing: there is not one of them that’s going to build you a toolbox or a shelf without a human.

Now, there is the notion of an assembly line for repetitive tasks, and that’s the key, right? We built great stuff before electric tools, before power tools, right? It took a long time, it took a lot of work, and people had big muscles, right? Then we get power tools, and what happens? Hey, maybe our quality goes up, our speed goes up, maybe we can build more things, but the human doesn’t go away. I think sometimes what we want is more of an assembly line, but one of the things people forget about software is that every time you write a line of code, it’s research and development; it’s new. We are not going out and buying screws and nuts and bolts and wood and just bolting them together. Software is new every time. So you can’t take the human out of the creative process of software development, which means we can’t take them out of the creative process of testing it, because it is new and different every time, kind of by definition.

Michael: Code aside, every application that comes out these days has its own unique challenges. The people who are actually using that application, and the people who are designing it, need to have some input into how it’s tested and what they are going to be testing for. It’s very hard for tools to keep up with all of the different ways applications are going to be used and developed. Just look at some of the security schemes that applications come up with: concatenating this value with that value, then running an MD5 check on it and creating a string that is not standard in any way, and then passing that forward as a security token. You can’t really expect every tool to go in there and get that right. At some point, most applications are going to require that human oversight to go in and make sure everything is being done properly.
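
As a hypothetical illustration of the kind of custom scheme Michael describes (the field names, ordering, and values here are invented for the example), a test script would have to reproduce logic like this before any recorded traffic could be replayed:

```python
import hashlib

def build_security_token(user_id: str, session_nonce: str, shared_secret: str) -> str:
    """Concatenate request values and MD5 them, mimicking the kind of custom,
    nonstandard token scheme described above. Everything here is a made-up
    example; a real application's scheme would have to be discovered and
    reimplemented by the tester."""
    raw = f"{user_id}:{session_nonce}:{shared_secret}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

# Each virtual user must recompute the token per session; a record-and-replay
# tool that blindly resends a captured token would be rejected by the server.
print(build_security_token("user-42", "a1b2c3", "s3cret"))
```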

Scott: Take CAPTCHA, right? Everybody hates CAPTCHA. How are you going to use an automated tool to test that? The whole point of CAPTCHA is to keep automated tools from getting to the next step of the site, right? I mean, it’s an obvious and extreme example, but it’s the same thing. If there is no new component to what you are doing, why are you doing it? At least, that’s my thought.

Noel: Yeah. You think about testers who are armed with a tool that helps make them great, but maybe it’s somewhat the other way around. A photographer who has a really great camera can do things that a photographer with a crummy camera can’t. But maybe it can be looked at the other way around, in that instead of a tester being armed with a great tool, it could be a tool …

Michael: Armed with a great tester.

Noel: Armed with a great tester who has really great knowledge of more than just how to use that tool.

Michael: That’s great insight.

Scott: I like that.

Michael: Because I think what it comes down to in the testing world is that it’s not so much the tool you use, and I think Scott has been a great example of that over the years. I think he’s used every form of testing tool that was ever built, and sometimes multiple tools on a single engagement. It’s the knowledge, the thought process, the methodology, and being able to know what the right question is, because the developers may not always have the right question. They may give you a question that is close but not exactly what they are really looking for, and so your experience in testing really comes into play when you can sit down, have a consultative session, and say, “What is it that you are looking for, and how can I help you prove or disprove whether you are going to be able to do what you want to do?” There is a lot that goes into that, and I think you are right: it comes down to the tester.

Noel: I was just writing this week about the Turing test of “Can machines think?” Maybe one day when they actually can, the tool can appreciate the tester using it just as much as the tester might be able to appreciate the tool.

Michael: Then you have to give them emotions as well, right?

Noel: Exactly.

Scott: I was part of a two-day conference, I don’t know, it was in the Netherlands somewhere, that was all about that. And what we concluded was that until artificial intelligence gets to the point where we can’t really tell the difference between it and a human, we are still going to need humans. And here is the tricky part: until the AI is smart enough to program itself, somebody is still going to have to teach it.

Noel: Right, that’s very true.

Scott: Yeah, I just don’t see, at least not in my lifetime, the human element going away.

Michael: Once that AI can program itself, we are all in trouble.

Noel: Exactly, that was my first thought.

Michael: It’s all over.

Noel: That’s great. Well, those are all the questions that I have for you today. Is there anything else you wanted to talk about as far as the state of software testing?

Michael: I think I would just like to leave everyone with a reminder that the holiday season is coming up soon, and if you are following the methodology of continuous testing, you’ll probably be fine. For those of you who haven’t done the testing, you’ve got some unanswered questions that just might get answered in a way other than what you hoped for.

Noel: That’s good. Great, well, thank you so much for the conversation today. And again, everyone listening and watching, this is Noel Wurst with TechWell, and I’ve been speaking with Michael Punsky and Scott Barber at SmartBear. You all have a great day.

Michael: Thanks, Noel. Bye-bye.

Scott: Thanks.
