Testing as a Craft: A Conversation with Greg Paskal

[interview]
Summary:
Greg Paskal, evangelist in testing sciences and lead author for RealWorldTestAutomation.com, chats with TechWell community manager Owen Gotimer about testing as a craft, choosing the right test automation tools, and current testing trends around the world.

Greg Paskal

Some of the stuff we've been using for a while, Zoom and Teams and other technologies like this, a lot of us are familiar with. But ironically, even outside the world of what you and I might work with in code and testing and automation, think about how this has helped our families learn to use this sort of technology to stay connected, which has been a real blessing at the end of the day. I'm really thankful for that. I was amazed at how smoothly this Meetup went internationally. It was with a meetup in Singapore. We did just a regular presentation, like I would do at a STAR conference. We had probably 25 or so people attend, and other than the time zone difference, it went really smoothly. We didn't really have any technical problems at all, and you've got to hand it to companies like Zoom, who have scaled up for a situation like this and are doing such a great job with the capacity needs that must be out there right now.

Owen Gotimer

Yeah, absolutely. I know that Zoom is definitely becoming a household name because of all this. And yeah, I agree, there are some funny scenarios. My older brother called me the other day and told me that he had taught my grandma how to use Zoom. So from a familial perspective, it's definitely been awesome that not only have we as technologists gotten better at using the technology, but non-technologists have started to use and embrace the technology that allows us to communicate worldwide, which is really cool. Something interesting is that you went and did this meetup in Singapore. I was interested in learning whether people in Singapore are going through the same types of challenges in the testing world as people here Stateside. Obviously, Stateside we're talking a lot about test automation, and we're moving into the AI space and CI/CD pipelines and that sort of thing. Is that a global trend? Is the group you talked to in Singapore experiencing the same kinds of things?

Greg Paskal

Sure, I think it seems to be pretty universal. I'm really well connected internationally on Meetup, and the questions I get from every corner of the world are common things: How do I up my game in the area of automation? What are the next steps for me to grow? I literally get that from every region of the world right now. And I love to hear it, because it's something I love to help coach people toward. The talk I gave with the Singapore group was my Minimal Essential Test Strategy talk, which is often referred to as METS. METSTesting.com is where folks can learn about that. I like that this is still a very fundamental principle, and that even a group like this that's very experienced at QA is still looking at refining their game by having a great manual test strategy. To me, a great test engineer has a plan, and METS is one of many tools folks can use to have a good manual test strategy. So no, I haven't found that there are gaps somewhere, regions where they're not experiencing it yet. It feels like, as a community of quality folks pressing into this space, we're moving along pretty well regardless of location.

Owen Gotimer

Absolutely. We're seeing similar trends with the TechWell community. We have people popping onto the TechWell Hub from all corners of the world. When people come into our community spaces, we ask them what the biggest challenges they're facing right now are, and we've put together a word cloud, and it's the same topics from everywhere. We're seeing trends globally, so I think that's important. I know another thing you like talking about is testing as a craft. I was hoping maybe we could dive into that, because when we chatted about it, that was the first time I'd ever heard anyone use that expression. Could you give an explanation of what testing as a craft is, and then we can dive a little bit down that rabbit hole and give the audience some insight?

Greg Paskal

Man, I love this topic. I think the first time we spoke, a couple of years ago, I probably brought it up. Have you noticed in our testing community, and I'm sure a lot of the listeners can relate to this, that the world of quality assurance, the world of testing, sometimes gets dumbed down to just "you're the guy that runs the test cases" or "you're the guy that approves my code before it gets released to production"? But there's a real science, a real craft to the work that we do, an engineering craft. I'm very passionate about this space, because there's something to be pursued here that helps us become better than just that guy over there that I send code to, and he tests it, and he ships it. This is something our QA team is going through here at Ramsey Solutions. I've started by using the ISTQB curriculum as a kind of foundational-level course curriculum. I'm breaking it up each week, and we're taking the team through it, so we can make sure they've got the fundamentals of testing well under their belts and ready to pull out when they need them. But there's something mental, also: a test engineer needs to pursue this work with a desire to become excellent at it. It's an unfortunate reality that a test engineer, or someone in a role as an analyst or an engineer, can be very busy doing what appears to be testing and yet reducing no risk at all. Because the work we do looks like work: I'm moving my mouse, I'm clicking buttons, I'm working on mobile, I'm interacting with APIs. It looks like work, but it can be just busy work that has no risk reduction benefit to it. And this is the part I want to encourage our community toward. I don't want to use the word pride in the wrong sense, but I think there's something to be said for saying, "I own this space, and I want to help shape it into a respected science, a respected engineering field that people want to seek after and become better at," and that it's not just a road to development or a road to being a product owner.

Owen Gotimer

Absolutely. I think it's crucial that we have people who are passionate about the quality space and, as you mentioned, not just using it as a stepping stone to get to a development position or a PO position or a business analyst position. It's really important that we take pride in our work as quality people. And I think it's also important that we instill that quality mindset across our entire team, and then across our entire organization, so that people who are not traditionally quality people start to learn what goes into quality: developers start coding quality in, and product people start considering quality while they're moving through the pipeline and understanding why it's so important that quality assurance is not just a roadblock before getting to the end. I think quality has maybe moved away from that perception as organizations and teams have started to understand its importance; security might be going through the same thing quality went through ten years ago, in terms of seeming like the roadblock. When you were talking about testers clicking buttons, it might seem like busy work because they're not necessarily understanding the risks and reducing those risks based on some exploration they've done. How much of a role does test automation play in being able to "click the buttons" and handle some of those more repetitive verification tasks, freeing the quality assurance people to actually dive in there, really explore risk, and get the risk assessment done?

Greg Paskal

Oh, yeah, I love this topic. Automation is in my DNA, and I might have a little different perspective than most people when it comes to automation. This has been a great field to be a part of for a very long time; I've been working in it close to 35 years now, believe it or not. And I think that automation, in a way, is kind of like test engineering. Matter of fact, it might have even more snags along the road that can get you tripped up. Because, in the same way, writing code that clicks buttons and moves through fields and things like that can look like something is getting tested, and you and I both know that at the end of the day, nothing might be getting tested at all. We might just be moving through an app.

I look at test automation almost like a cordless drill, a tool in the tool belt of a manual test engineer; I always couple those things together. And this is a different belief than maybe some of the listeners have out there. Some QA managers and others look at automation as a means of replacing a manual test engineer. I don't look at it that way at all. I look at it as empowering the test engineer with new tools and insights that are harder to detect manually. So I believe we automate not for speed; we automate for consistency of execution, and speed might just come along as a benefit. Our target isn't to do things faster. Our target is to do things consistently the same way over and over. And I teach this concept that applications have a type of personality to them. When you run automation on an application over and over again, you begin to learn its personality. Anybody who's ever owned an old car knows what this is: every morning, you've got to pump the gas twice and turn it over and pray a little bit, and maybe it'll start up. Some applications can be very temperamental like that. Others are very particular: they've got to have the right kind of fuel, and everything has to be exactly in line for them to come on. Others are like a racecar, and they're very well tuned. But you don't know that unless you have a way of tracking and analyzing results over a period of time. So I always couple automation with good reporting over a timeframe; that gives us incredible insights as well.

I've got one other piece I want to throw in there. When we talk about test automation, most folks go to what I call custom automation. Custom automation is the most traditional type of automation people think of: I go to this URL, or I go into this app, and I begin to click this button, and I put this data in, and so on. At the end of the day, this is the most expensive automation you can write, and the most expensive to maintain. But what if there were other forms of automation? And there are many others. We have a type of automation we call the sanity check here. All it does is go to a static URL and check for some call to action, some basic things on a page. We've enhanced it to do things we call audits, so it'll do link audits and check for other things on that page, and it gives us great wins. I look at our sanity test as the Dollar Store of test automation. We also have custom tests, the expensive ones, for whenever we have to build something custom. We try to help our team understand when they need one and when they need the other, so that we equip them with great tools going forward, all providing reported results they can analyze at the end of the run.
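A sanity check in this spirit can be remarkably small. Here's a minimal Ruby sketch of the idea; the URL, the call-to-action selector, and the use of the net/http and nokogiri libraries are illustrative assumptions, not the actual implementation at Ramsey Solutions:

```ruby
require 'net/http'
require 'uri'
require 'nokogiri' # gem install nokogiri

# Hypothetical sanity check: fetch a static URL, confirm a call to
# action is present, then audit each outbound link on the page.
def sanity_check(url, cta_selector)
  page = Nokogiri::HTML(Net::HTTP.get(URI(url)))

  # The page should render its call to action.
  raise "Missing call to action: #{cta_selector}" unless page.at_css(cta_selector)

  # Link audit: every absolute link should answer with a non-error status.
  page.css('a[href^="http"]').each do |link|
    status = Net::HTTP.get_response(URI(link['href'])).code.to_i
    warn "Broken link (#{status}): #{link['href']}" if status >= 400
  end
end

sanity_check('https://example.com', 'a.signup-button')
```

Because it needs no browser and no custom flow logic, a check like this is cheap to write and cheap to maintain, which is exactly the "Dollar Store" trade-off Greg describes.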

Owen Gotimer

I love the idea that we're not using automation to replace manual testers. I think the tool piece is so important. I heard an analogy at a conference maybe a year and a half ago: when a chef uses a food processor to cook your food, you don't thank the food processor. The chef still had to put the work in with that tool, had to know how to use it, and had to know the strategy behind cooking your food to get the result. It was still a person driving the use of that tool. And I heard another analogy the other day: if you give the worst carpenter in the world the best hammer in the world, they're still the worst carpenter in the world. The idea being that the tool doesn't solve all the problems. You still have to have the mindset of a tester and of quality assurance to be able to drive that forward.

Greg Paskal

That's right. To be a test automation engineer on our team, you've got to be a Jedi test engineer. I expect you to be a great test engineer first; then you'll write good test automation. You just don't want someone who's only a coder to do it, because they'll begin to do what we call "go for green": they'll automate things so that they pass; they write a passing test, not a testing test. So you want the mindset of a test engineer behind it, to make sure they're validating the proper things along the way. And let's face it, automation is pretty blind for the most part. It can drive right past the most glaring defect, not see it, and still pass. So you've got to have the eyes and the brains of that manual test engineer who's still looking at what's going on to detect things like that.
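The "go for green" distinction is easy to see side by side. Here's a hypothetical RSpec sketch using the selenium-webdriver gem; the URLs, selectors, and confirmation text are invented for illustration:

```ruby
require 'rspec'
require 'selenium-webdriver' # gem install selenium-webdriver

RSpec.describe 'Checkout' do
  before(:each) { @driver = Selenium::WebDriver.for :chrome }
  after(:each)  { @driver.quit }

  # "Go for green": drives the flow but validates nothing meaningful,
  # so it stays green even if checkout is completely broken.
  it 'completes checkout (a passing test)' do
    @driver.navigate.to 'https://example.com/checkout'
    @driver.find_element(css: '#place-order').click
    expect(@driver.current_url).to include('example.com') # almost always true
  end

  # A testing test: validates the outcome the business actually cares about.
  it 'completes checkout and shows a confirmation (a testing test)' do
    @driver.navigate.to 'https://example.com/checkout'
    @driver.find_element(css: '#place-order').click
    confirmation = @driver.find_element(css: '.order-confirmation')
    expect(confirmation.text).to match(/Order #\d+/)
  end
end
```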

Owen Gotimer

Absolutely. So you run your tests, whether manually, exploratory, through test automation, or through some sort of artificial intelligence or machine learning, but at the end of the day you should be getting reports back about the data behind what the tests actually produced, because having the test and not actually analyzing it is basically not having the test. So you need to figure out what steps you want to take after the fact. When you do get those reports back, what do you recommend people do with them? What value do we need to add after the reports have been generated, and how should teams analyze them?

Greg Paskal

Well, believe it or not, pass and fail results aren't always the first thing I go after. A lot of times I'm looking at the duration of test execution. It's one of the first things worth capturing. Of course, you want to capture whether the tests pass or fail; that's kind of a no-brainer. But if you can just capture how long this test took to execute versus the last time you executed it, that's actually pretty interesting information, especially if you can look at it over a week, a month, or even a year. You begin to detect soft fails before an application ever fails; something is coming along the way. And this is a great insight into the personality of the application: "Hey, this thing is getting more latent over time." But it's happening so incrementally that I wouldn't know it unless I had used it for a whole year and could remember what it was like a year ago. Our first area of reporting was simple. We build our automation on top of Ruby, and Ruby has a test runner called RSpec, so we leveraged RSpec's basic reporting capability. It's actually a unit test framework that we've leveraged for test automation, and we had to tweak it a little bit to fit more traditional testing terminology, things like that. But that wasn't that difficult to do. And when we put that in place, we immediately began to find things that were right under our noses all along. Because now our tester didn't need to remember, "Hey, did this work like this last time?" They now had it in a simple email, and they could compare one run against the other, and it became a great tool for us. Now we put data into Elastic Stack's data lake, and we do analysis and dashboards through Kibana. It's amazing the things that we have now, compared to where we started five years ago.
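RSpec makes this kind of duration capture straightforward through its custom formatter API. Here's a minimal sketch of the idea; this is an illustration of the approach, not the actual reporter built at Ramsey Solutions, and the JSON Lines output is an assumption chosen because it ships easily into an Elastic Stack data lake:

```ruby
require 'rspec/core'
require 'json'
require 'time'

# Hypothetical formatter: write each test's status and duration as one
# JSON line so runs can be compared across weeks, months, or a year.
class DurationFormatter
  RSpec::Core::Formatters.register self, :example_finished

  def initialize(output)
    @output = output
  end

  def example_finished(notification)
    result = notification.example.execution_result
    @output.puts({
      test:        notification.example.full_description,
      status:      result.status,            # :passed, :failed, or :pending
      run_time_s:  result.run_time.round(3), # watch this drift over time
      finished_at: Time.now.utc.iso8601
    }.to_json)
  end
end
```

Run it with `rspec --require ./duration_formatter.rb --format DurationFormatter --out results.jsonl`, then trend `run_time_s` per test over time; the slow, incremental latency creep Greg describes shows up long before anything actually fails.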

Owen Gotimer

Absolutely. In the last five years, there's been a huge amount of improvement in that space. How do you see that trend continuing forward? Do you think the improvements are just going to continue to pile on as we move forward and start to understand the tools, the tech, and our needs better?

Greg Paskal

Yeah, I think it needs to, but I think QA managers and automation engineers need to be wise about one thing: look, everybody's selling something. So when you're looking to add a product and you see a pretty pie chart or whatever, don't be enamored just because there's a pretty picture on the screen. Ask: Is this data actionable? All the visualizations, which are what dashboards are made up of, should be actionable in some way. We just added a new one. I was at a conference called QA or the Highway a month or two ago, and I came away from that conference with this idea: Why don't I go ahead and enhance our reporting to provide root-cause insights into our automation failures? That turned out to be an exceptional addition. It gives our team insight into trends in the types of defects we're able to detect with our automation. So where I would hope to see this grow is, as open source continues to evolve, maybe in the area of reporting, and as off-the-shelf tools evolve, that those companies put in the right types of visualized data that's actionable and not just a pretty picture. That's the biggest hindrance I've seen with reporting: people can be really enamored by something that looks great on the wall but doesn't really tell you anything. So ask this question: What do I do with this information that helps me develop a better product and reduce risk better than I did a day ago?
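One lightweight way to surface root-cause trends from automation failures is to bucket each failure message into a coarse category before it reaches the dashboard. This is a hypothetical sketch of that idea; the categories and patterns are invented for illustration, not Greg's actual enhancement:

```ruby
# Hypothetical root-cause bucketing: map a raw failure message to a
# coarse, actionable category so dashboards can trend failure *types*
# rather than just counts.
ROOT_CAUSE_RULES = {
  /NoSuchElementError/i   => 'locator broke (did the UI change?)',
  /TimeoutError/i         => 'page or service latency',
  /expected .* got/i      => 'assertion failed (possible product defect)',
  /ECONNREFUSED|SSL|DNS/i => 'environment or infrastructure'
}.freeze

def root_cause(failure_message)
  ROOT_CAUSE_RULES.each do |pattern, category|
    return category if pattern.match?(failure_message)
  end
  'unclassified (needs a human)'
end

puts root_cause('Selenium::WebDriver::Error::NoSuchElementError: #buy-now')
# => locator broke (did the UI change?)
```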

Owen Gotimer

It's important, too, because visualization and those pretty charts sometimes translate better to certain groups of people. Business executives may care less about the granularity of the tests; they want to know the money side of things, and sometimes charts and graphics help illustrate trends in financial situations. So that's obviously valuable. But you're right that, as the practitioners using the tools, we shouldn't be deciding on tools based on pretty pictures and graphics and a cool website rather than on what we're trying to accomplish and the needs we have. A lot of times people pigeonhole themselves and say, "Oh, everyone is using Selenium to do this type of automated testing, so I need to use Selenium," without actually taking a step back and asking, "What do I want to accomplish? What are my goals? What are my business requirements?" How do you go about recommending which tools organizations and teams should be using when they're trying to go down the path of test automation?

Greg Paskal

Wow, that's a great question. It's one I get asked often. I did a survey a couple of years ago and threw some questions out about how people chose the tool they did, and by far the largest answer was because it was free. That's an unfortunate scenario teams get put into when they have no budget. How many people do you know who spend hours looking for a great free screen capture tool when they could just get a great tool like Snagit, one of the best tools out there for this? The same thing is true with test automation tools. To be honest, Selenium is probably one of the most expensive automation tools I've ever used, because it's often viewed as a test automation tool when it's not; it's an API to interact with the browser, the end. You still have to build out the entire framework and all of the versioning pieces, and figure out how you're going to distribute and execute this thing, and on what. So I would be careful about being enamored just by the free price tag. I've used Selenium, and I've used other tools that are off the shelf. One of the things I like about off-the-shelf tools is there's someone to call when something's not working right. We just ran into an issue with WebDriver and Chrome, and it took us a good day to troubleshoot, and somebody had to pay for that. There was nobody to call at the end of the day, so we had to research it and figure it out. So I would say probably the first thing to consider is: Do I have a lot of extra time on my hands to troubleshoot things and build things from scratch? If you do, then maybe the Selenium API to interact with the web browser is the way to go. But realize you're about to sign up for a whole lot of other dev work, or for finding another open source framework. A lot of other tools out there are pretty expensive as well. But here's one thing with tools, whether open source or off the shelf: I often tell folks, use the tool as designed, not as discovered. So if you buy an off-the-shelf tool, then for Pete's sake, set aside some budget to send a couple of folks to get the training on how to use it. When we started to use Selenium, the first thing I did was look for a great teacher out there, and we found Dave Haeffner. We said his approach seems sound, other people are using it, and he seems accessible, so we wound up purchasing his training materials, and they became the foundation for how we moved forward with our automation.
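To make the "Selenium is an API, not a test automation tool" point concrete, here's roughly what bare WebDriver gives you through the Ruby selenium-webdriver gem. Everything beyond these calls, such as waits, retries, page objects, reporting, parallel execution, and driver versioning, is framework you sign up to build yourself (this sketch assumes Chrome and chromedriver are installed):

```ruby
require 'selenium-webdriver' # gem install selenium-webdriver

# Bare WebDriver: an API for driving a browser, and nothing more.
driver = Selenium::WebDriver.for :chrome
begin
  driver.navigate.to 'https://example.com'
  heading = driver.find_element(css: 'h1')
  # No runner, no assertions, no report; you supply all of that yourself.
  puts "Page heading: #{heading.text}"
ensure
  driver.quit
end
```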

Owen Gotimer

I think those are all great points. Free tools aren't free. At some level you're paying for them through extra development work or through the trials and tribulations of having to learn the tool without the training or client support that most paid tools offer. Open source obviously is a huge driver for a lot of things. A lot of organizations use open source, especially open source with a big community behind it, because they're able to spin it up for free. Obviously, there's time involved in that, and that time costs money. But if they're able to spin it up for free and test it even for a little while, and if it's something that's easy to get started with, they can explore whether a paid option is going to make sense for them. But I totally agree: people see that free tag and think it's free, and in reality, nothing's free. You're either paying for the product itself, which comes with support and training, or you're paying for the development work, the learning curve, and not having support when things go wrong with an open source project. So I appreciate both sides of that coin. And to your point, it's just super important that you're choosing the tool and using it as it was designed. As you mentioned, Selenium is not a test automation tool; it's an API that interacts with a web browser. If you're using it as a test automation tool, you're definitely spending more money trying to figure out how to use it that way than you would if you just purchased an out-of-the-box test automation tool.

Greg Paskal

Yes, and it's one of the areas that's probably the most misrepresented in our field. I see this on LinkedIn profiles all the time: "I'm a Selenium Test Automation Engineer." I don't even know what that means. It means they know how to interact with the API, I guess. It's broader than that, and you're signing up for a lot more than that when you use Selenium. We use it exclusively here, and I love it; it's working great for us. But because I've seen both sides of this, I recognize how expensive this toolset really is. It's one of the things I talked to my leader about as we were approaching this: there's nothing that's going to be free in here; we're about to embark on building a whole lot of things that we could buy right off the shelf. And through that came the benefit of being able to customize the reporting we want, which doesn't exist in off-the-shelf tools right now. So in that sense, it gave us a real edge, but it took many, many months of work to build that out.

Owen Gotimer

Absolutely. People would definitely benefit from taking the time to understand what their needs are and what the tools actually do, and then implementing the ones that fit their business requirements based on the design of the tool. And then, of course, as you mentioned, there are opportunities to build some of your own features into some of the tools to better facilitate what you need at your organization. Every team is going to be a little bit different, and every organization is going to be a little bit different based on their products, their services, their team composition, and their budget. So I think that's super crucial.

Greg Paskal

One last thing: I'd like to pass on a website where folks can read some articles. I write a lot of articles, including one specifically on the reporting methodology we developed here. You can find all of that at RealWorldTestAutomation.com, where you'll find about 35 articles on testing and automation, and I hope they'll be helpful to folks.
