One hundred percent test automation and one hundred percent exploratory testing are absolutes. They make for great arguments, but, for those of us stuck in the great middle ground, those positions aren't very helpful. Matthew Heusser describes a blended approach to software testing, explains how some of his clients have used it over the long term, and provides tips for evaluating and adapting your approach.
On the left side, we have all my friends who champion and lead in the use of exploratory test methods. That means humans thinking in the moment, learning about the software by using it, and using those learnings to create and run new experiments. On the right are the test automators, who find the work dull and repetitive, who would rather write code to generate fast feedback, and who want to confirm, through code, that the software as built meets the customer's needs.
The automators have been disappointed in me for years because of my emphasis on exploratory and thinking methods, but I do like the idea of having the computer assist with repetitive tasks when possible. Meanwhile, when I talk about test automation, the exploratory folks assume I mean something creaky and slow that drives the user interface, something that will never work long term. They are worried about me.
So, please allow me to get specific about the blended approach I recommend, the system of forces around it, and, perhaps, a few tips about how this might be helpful to your organization.
One Approach to Automation
For each minimal, viable feature, or “story,” we have a text description, but that is open to interpretation. Before anyone writes any code, we hold a kickoff meeting. The kickoff meeting is designed to get agreement on what the feature will be down to the detail level, so it needs to include everyone who might work on the story, including the customer, analyst, programmers, testers, and anyone else with an interest.
In addition to building a mental model and agreement on what we will build, we also create some examples. Here are some simple examples for a conversion feature:
|Convert Fahrenheit to Celsius|
|Given F|Expect C|
|32|0|
|212|100|
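If it helps to see where those rows come from, the rule they encode is C = (F − 32) × 5/9, so 32 degrees Fahrenheit maps to 0 degrees Celsius and 212 maps to 100; the table is simply that arithmetic written out as concrete examples.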
This is not a test plan; it is a list of examples. It is not comprehensive, and that is not our goal. Instead, we want to provide some examples to drive development—to let the programmers feel confident that the code is really, truly, actually ready for exploratory testing without wasting anyone’s time.
Once the kickoff is complete, the programmers build the automation that calls the function, which starts out as just a “stub.” The tests run using a tool like FitNesse, SpecFlow, or Cucumber, and at this point they fail spectacularly. Now the programmers make the tests pass.
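To make that concrete, here is a minimal sketch of the same idea in Python with pytest instead of one of those table tools; the function name fahrenheit_to_celsius and the choice of pytest are my own assumptions for illustration, not part of the approach itself.

import pytest

# The stub the programmers start with; every example fails against it.
def fahrenheit_to_celsius(degrees_f):
    raise NotImplementedError("not built yet")

# The rows from the example table, expressed as parameters.
@pytest.mark.parametrize("given_f, expect_c", [(32, 0), (212, 100)])
def test_convert_fahrenheit_to_celsius(given_f, expect_c):
    assert fahrenheit_to_celsius(given_f) == expect_c

# Once the work is under way, the stub is filled in and the checks go green:
# def fahrenheit_to_celsius(degrees_f):
#     return (degrees_f - 32) * 5 / 9

The file starts out all red against the stub and turns green only when the real conversion is in place, which is the signal that the story can move on.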
Notice what I am saying here: The automated, business-level checks you see above, which work below the GUI, are something the programmers do as they do the work, not after. These tests are all done and pass before a story is moved out of the “dev” column. Add a little bit of developer “poke” testing to make sure the system properly calls the function, and you can radically improve code quality before it gets to a second set of eyes to explore.
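That “poke” can be as small as one end-to-end touch that proves the wiring exists. Here is a sketch only, assuming a hypothetical local HTTP endpoint at /convert with a query parameter f and a JSON response containing celsius; all of those names are invented for illustration.

import requests

# One quick developer "poke": does the running system actually reach the conversion code?
response = requests.get("http://localhost:8000/convert", params={"f": 212})
assert response.status_code == 200
assert response.json()["celsius"] == 100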
This distinction eliminates the classic role of “Bob the test automator,” who is isolated from the original development and tries to drive the code in a black-box fashion after it is “complete.” Notice that the programmers need to make all the tests pass before they can call the story “done.” This eliminates the delta, or the difference between the code as it is and the code the test automation expects to be testing.
If your product is one large, integrated application and most of the bugs are in the graphical layer, then this approach might not be for you. Likewise, if you have a lot of regression bugs, the framework may not catch them. It is designed to be light, cheap, fast, and easy to maintain, but certainly not comprehensive. The teams I have seen achieve the most success with this approach had a large number of mostly isolated applications that were, for the most part, in maintenance mode.