How Humans and Bots Can Work Together to Test Software

We’ve all heard about AI for software testing from some seriously smart people, but there has been a lot of confusion about the idea. This article tackles some of the questions you might be asking: Do I need to be a genius to use AI for software testing? Is AI going to replace me as a tester? Where does AI fit into my testing strategy? With a simple analogy of training a dog, learn how AI fits into testing.

Let me start by saying that I’m not passionate enough about math and coding to care about the technical guts of AI. I identify as a software tester who has a good grasp of technical concepts and can write mediocre code in a variety of programming languages. I believe that software testing is done by people, and I believe that testing is the process of evaluating a product by learning about it through exploration and experimentation. But I also embrace automation and tools.

Let’s tackle some of the questions you might be asking so we can all better understand how AI fits into testing.

Do I need to be a genius to use AI for software testing?

As I explained above, I’m by no means an AI expert when it comes to the inner workings of AI. Believe it or not, I found myself embracing AI because of my background in dog training. 

Seriously, let’s go super-simplified and compare AI bots to dogs. If you have ever owned a dog or puppy, I’m sure you’ve had the experience of asking them to sit only to have them look up at you with a confused face. So you pulled out a treat, and when their butt hit the ground, you rewarded them. They quickly learned that “butt on ground” equals reward. Then you started to put a label on it. You would say, “Sit,” and if the butt hit the ground, they got a treat, so “Sit” means butt on the ground. Got it!

Similarly, AI bots crawl your application in a confused fashion, trying different paths and exploring different screens. It’s only when the bots start receiving “rewards” for their actions that they learn what we are asking them to do. Once they have explored part of an application, received rewards for an action, and we have labeled that action, they can perform it in a repeatable fashion. It’s that simple, whether you’re a user or a trainer.
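To make the analogy concrete, here is a minimal sketch of that reward-driven learning loop, using a toy Q-learning agent. The screen names, tap actions, and reward setup are all hypothetical illustrations, not any particular vendor’s bot: the “app” is a handful of screens, and the bot gets a reward (the treat) only when it reaches the checkout screen. After enough confused wandering, the learned values let it repeat the rewarded path on command.

```python
import random

# Hypothetical app model: each screen (state) maps tap actions to the
# screen they lead to. "checkout" is the goal; reaching it earns the treat.
SCREENS = {
    "home":     {"tap_search": "search", "tap_cart": "cart"},
    "search":   {"tap_back": "home", "tap_item": "item"},
    "item":     {"tap_back": "search", "tap_add": "cart"},
    "cart":     {"tap_back": "home", "tap_buy": "checkout"},
    "checkout": {},  # terminal screen, no further actions
}

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Learn Q-values by crawling the app and reinforcing rewarded actions."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in SCREENS.items() for a in acts}
    for _ in range(episodes):
        state = "home"
        for _ in range(50):  # cap steps so one episode can't wander forever
            if not SCREENS[state]:
                break  # reached a terminal screen
            actions = list(SCREENS[state])
            # Early on the bot explores randomly; later it exploits what
            # it has learned -- the "confused puppy" phase, then the trick.
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            nxt = SCREENS[state][action]
            reward = 1.0 if nxt == "checkout" else 0.0
            future = max((q[(nxt, a)] for a in SCREENS[nxt]), default=0.0)
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = nxt
    return q

def best_path(q):
    """The 'labeled' behavior: greedily replay the highest-value actions."""
    state, path = "home", []
    while SCREENS[state]:
        action = max(SCREENS[state], key=lambda a: q[(state, a)])
        path.append(action)
        state = SCREENS[state][action]
    return path

q = train()
print(best_path(q))  # the repeatable sequence of taps the bot has learned
```

With this setup the bot settles on the short path through the cart rather than the scenic route through search, for the same reason the dog stops offering random behaviors once “sit” reliably pays off: the rewarded action sequence accumulates the highest value.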

You don’t need to be a veterinarian in order to train a dog, but it does help to have a general idea of how dogs think and what motivates them. Likewise, you don’t need to be able to create AI bots in order to understand how to use them for software testing. You should, however, know how to test software and have enough of an understanding of how AI works in order to use it appropriately.

Is AI going to replace me as a tester?

Continuing that analogy, let’s go straight to the fact that there are still dog trainers out there who are actively employed. Dogs have not yet learned, nor do they really seem to have the desire or intention, to start training each other how to sit on command. Part of that is the fact that they can’t speak in a language that we understand.

Bots are similar in that they are not yet able to train themselves. Even if they could, they lack the ability to understand the context and intention required for software testing. Therefore, if you think that you can just hire a lot of AI bots to replace your testers, or if you are a tester who thinks they will lose their job to AI bots, you are flat-out wrong.

Where does AI fit into my testing strategy?

If you are asking yourself this question, you get a gold star. One of the barriers to entry I have been noticing when it comes to AI is the idea that it’s binary: You either have to choose “AI all the things!” or “No AI for you!”

Nothing could be further from the truth. As I have already explained, AI is not going to replace the software tester. I also want to address that AI can’t—and shouldn’t—do everything.

AI and automation are tools that can be used in the testing of software. They cannot test software on their own, and they can easily be abused. Instead, they should be used to complement your testing. Much like personal assistants, they do the errands we don’t have the time or desire to do, which frees us up to focus on the things that matter.

Many companies have tried to “automate all the things.” Many of us are still trying to clean up the mess from trying to automate all the things. There are just some things that should not be automated. Angie Jones has a great presentation and a class on this. Much like automation, AI should not be used to “AI all the things!” You can dip your toe into the proverbial AI waters.

Building on the dog training analogy, you wouldn’t ask your dog to drive your car, and I’m not going to ask AI to do complex, combinatorial automation.

You might be thinking, “But wait; AI is smart, so shouldn’t I have it do the hard stuff?” The answer to that is that you absolutely can have it do the hard stuff. But what about that stuff that you have to do over and over again? You know, that stuff that’s incredibly boring, highly repeatable, and takes a tester many hours to do? Is that really the best use of that tester’s brain? Did you hire them to just push buttons day in and day out, or did you hire them to do what they are skilled to do, which is test?

A company I worked for was spending a significant amount of money paying my test team to do redundant UI checks in a production environment. It was mind-numbing work that most of us did not enjoy. Using AI, we were able to continuously run these checks and provide fast feedback to both development and operations. On top of that, the AI service we were using provided metrics including CPU usage, memory usage, and performance. So along with running our automation, we were able to trend some performance metrics and see patterns without doing additional, specific load and performance testing. It was an amazingly effective and efficient solution to our problem.

The best use of AI may just be to handle all that stuff your testers don’t want to be doing. If you let a bot do a simple, repeatable task, then your testers can actually test. Instead of dreading going to work every day to push buttons, they get to use their brains.

Now you have a bot that is doing something a tester was previously doing, except faster and more efficiently, and you have a happy human passionately testing. Congratulations, you’ve just increased your product coverage!

This is why you need to factor AI into your testing strategy, much like you factor automation into your testing strategy.

In summary, AI and humans can and should work together to test software. You don’t have to be a genius to use AI for software testing; you just have to be smart about it. AI is not a silver bullet for software testing and should not be used as such. By using AI strategically to complement your other testing efforts, you can greatly increase your product coverage. And if you use AI to do the simple, repeatable tasks, you increase the morale of your tester by allowing them to focus on what they are skilled in doing: testing.

User Comments

1 comment
Akshaya Choudhary

Janna, great article. You know, I've always found it funny that humans created AI, which can only do a fraction of what the human brain can, and yet we worry about AI being smarter than us. Yes, it is more efficient, faster, and highly accurate. But we can very well use these benefits to advance the whole production cycle.

March 14, 2020 - 8:29am

StickyMinds is a TechWell community.

Through conferences, training, consulting, and online resources, TechWell helps you develop and deliver great software every day.