Our Experiment in Exploratory Testing: A Case Study

Summary:

Many testers use exploratory testing techniques daily in their normal work. Doron Bar's team wanted to go all in and see if they should make it part of their official procedure. Here, he talks about how they prepared and conducted an experiment comparing exploratory testing to their usual scripted testing. Read on to see the results.

For the purposes of this article, let’s define exploratory testing as an approach in which testers study a product or feature while simultaneously designing and executing tests. This means the “dive in and quit” method might be a valid way to tackle a feature, and the primary artifacts of the work might be a verbal briefing and bug reports.

When you think about it, most test teams use exploratory testing (I’ll just call it ET from now on) as a daily part of their knowledge work. Yet in many companies, it remains implicit; the official system tries to make testing a documented, predictable process.

At my company, we have teams that take a variety of approaches to testing. My team was following the plan- and document-centric approach, but we wanted to move to a model that includes exploration. Here, I’ll tell you about that transition: how we prepared, what we did, and what our results were.

Determining Our Exploratory Testing Approach

We decided to apply a more structured ET technique inspired by session-based test management (SBTM). Brothers James and Jonathan Bach designed SBTM to provide the oversight, transparency, and metrics their customers were asking for while continuing to give testers the freedom that ET promises. Essentially, SBTM is a mix of method (what testers do) and a management layer that adds more detailed planning and documentation of results.

After we decided we’d use session-based test management, we wanted to evaluate it on a trial basis. We came up with questions the experiment should answer, such as:

  • Are our testers ready to do self-directed work? ET requires understanding, imagination, and commitment, and it was possible our testers would need coaching, mentoring, or outside training.
  • How efficient are the exploratory tests compared to the manual scripted tests (which we’ll call ST)? The two methods can be applied concurrently, but what about urgent tasks where there is no time for ST, let alone automation? Is ET a suitable solution in those cases? To this end, we also wrote tests the old way, with detailed steps and expected results, so that we could compare the outcomes.
  • How will our testers implement ET? What kind of timetable will they need, how much learning time will be involved, and what new risks does ET introduce?

We were going to conduct the ET experiment on a new product feature, a browser add-on that provides network protection against various malware and trackers. But before starting, we had to outline the preparations.

Preparing for Implementation

Our first step was educating the testers on the essence of ET, creating a shared language and shared expectations. That was not a problem because they already had a clear idea of what ET involves.

Then we began planning the test effort. Instead of a master test plan and test cases, we created a mind map based on the requirements, the design, and discussions with all the relevant stakeholders. The mind map listed risks we could address in a ninety-minute session; if a risk was too big, we broke its node down into child nodes. We found the mind map a very convenient tool for planning high-level tests, and it was easy to ensure that nothing was missed, all the way down to the test point level. Once this planning is complete, anyone can look at the list of risks and point out things that are redundant, unimportant, or missing. And on an agile team, the mind map can become a “big visible chart,” printed large and displayed publicly on the wall.
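
To make the idea concrete, here is a rough sketch in Python of how such a mind map of risks can be modeled as a tree whose leaves are session-sized charters. This is purely illustrative; the node names, time estimates, and split heuristic are my assumptions, not our actual tooling.

    # Illustrative sketch only: model the risk mind map as a tree whose leaves
    # are candidate session charters. Node names, time estimates, and the
    # 90-minute budget are assumptions for the example, not our actual tooling.
    from dataclasses import dataclass, field
    from typing import List

    SESSION_MINUTES = 90  # one exploratory session per leaf risk

    @dataclass
    class RiskNode:
        name: str
        estimated_minutes: int = 0
        children: List["RiskNode"] = field(default_factory=list)

        def leaves(self) -> List["RiskNode"]:
            """Return the leaf risks, i.e., the candidate session charters."""
            if not self.children:
                return [self]
            found = []
            for child in self.children:
                found.extend(child.leaves())
            return found

    # A risk that is too big for one session is broken down into child nodes.
    feature = RiskNode("Browser add-on: network protection", children=[
        RiskNode("Blocking known malware domains", 80),
        RiskNode("Tracker detection", children=[
            RiskNode("Third-party cookies", 60),
            RiskNode("Fingerprinting scripts", 70),
        ]),
        RiskNode("UI: pop-up notifications", 45),
    ])

    for leaf in feature.leaves():
        assert leaf.estimated_minutes <= SESSION_MINUTES, f"split further: {leaf.name}"
        print(f"Charter candidate: {leaf.name} ({leaf.estimated_minutes} min)")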

We then created a form for planning the charters: the main risk the ninety-minute session will cover. The form captures the documentation for a session and covers these main areas:

  • Date proposed and date executed
  • Product and feature
  • Timetable (recommended to be no more than two hours)
  • Test type (such as feature risk management, user testimonials, requirements, etc.)
  • Test objectives (Do things operate as they should? Does the product achieve its main objective? Does a specific feature operate correctly under certain conditions?)
  • Test content (for example, a relevant pop-up disappears after five seconds)
  • Documentation while running (What am I testing? What hypotheses did I check, and what were their results?)

This form is not a plan for automation, but rather a “what”-based list of risks to manage: an alternative to the “how”-based approach of writing test cases with step-by-step details and expected results. A rough sketch of how such a charter could be captured follows below.
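
Here is a minimal sketch, again in Python and purely as an example, of one way to capture such a charter as a structured record. The field names simply mirror the areas listed above and the sample values are invented; this is not our actual form.

    # Illustrative sketch only: the charter form above captured as a record.
    # Field names and the sample values are assumptions, not our actual form.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class SessionCharter:
        product: str
        feature: str
        main_risk: str                       # the charter: the main risk the session covers
        test_type: str                       # e.g., feature risk, user testimonials, requirements
        objectives: List[str]
        date_proposed: date
        date_executed: Optional[date] = None
        timebox_minutes: int = 90            # recommended to stay under two hours
        notes_while_running: List[str] = field(default_factory=list)  # hypotheses checked and results
        bugs_opened: List[str] = field(default_factory=list)

    charter = SessionCharter(
        product="Network protection add-on",
        feature="Pop-up notifications",
        main_risk="A relevant pop-up disappears after five seconds",
        test_type="feature risk",
        objectives=["Does the pop-up operate as it should under certain conditions?"],
        date_proposed=date.today(),
    )
    print(charter)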

The First Exploratory Testing Session

We allocated eight testers for this experiment. We asked them to go over the documents for about an hour and then discuss the feature and the tests as a group for another hour. The aim was to become acquainted with the product and brainstorm test ideas. During the discussions, the team raised important questions about the feature, which we submitted to the product manager and R&D.

We then allocated two hours to the tests themselves. When testing ended, another meeting was conducted to discuss the tests. We wanted firsthand impressions of the experience in terms of the tests (What can be improved? What was good? What did we learn?), the product (Would you install it on your computer at home? What is it missing? What is unnecessary? How can you help the user?), and product quality.

Once the testers gave their opinions about which direction to take the tests, we updated the reporting form and opened some bugs.

Results and Reports

The team agreed the tests were successful. Each test found bugs in its domain (e.g., user interface tests found bugs mainly within the user interface, except for several bugs that were detected on the fly), and we opened a total of twenty-three bugs.

Based on the impressions they noted at the bottom of the form, the testers liked the tested feature, to my satisfaction. The less positive finding, however, was that they thought the feature was not mature enough for testing. We decided that the next time we performed ET, we would first ensure the feature is more stable, verified by the sanity tests performed by the developers.

To make the whole experiment valid, we needed something to compare it to, so we ran our traditional ST process with the same testers. They found no new bugs, but we viewed this result with a little suspicion because the scripted tests included additional points beyond those covered by ET. Perhaps the bugs already detected blocked the path to identifying bugs in those areas.

Another interesting result was that 25 percent of the bugs found by ET would not have been detected by ST! Nevertheless, we see ET and ST tests as complementary. Even with the best ET intentions, we might miss one test or another because there is no methodical way to test for every single situation, but as stated above, it is easy to miss bugs with ST too.

Based on our results, we determined that our testers are prepared for this approach to exploratory testing. Exploratory tests are a very strong tool. Performed correctly, they can surface many issues, giving testers additional assurance and diversifying their work. If your team hasn’t experimented with exploratory testing, formulate a plan and give it a try. You may be surprised by how much it adds to your test coverage.

