X-ray Vision and Exploratory Testing

[article]
Summary:
Imagine you have X-ray vision. Instead of seeing through walls, you can see the inner structure of programs, the bugs lying inside, and how to expose them. Anyone could execute the steps you gave them to reproduce the bugs. The difficulty in testing, then, is not in executing steps; it is figuring out what steps to take. How do you find those hidden bugs? We need to be the X-ray vision.

Imagine you have X-ray vision. Instead of seeing through walls, you can see the inner structure of programs, the bugs lying inside, and how to expose them. Anyone could execute the steps (or test cases) you gave them to reproduce the bugs.

The difficulty in testing, then, is not in executing steps; it is figuring out what steps to take. How do you find those hidden bugs, waiting to bite unsuspecting users?

We need to be the X-ray vision.

Questions

Let’s say you have a feature to test in a software application. What do you do? How do you know where to look?

You check that the stated requirements work. What about their negatives? How do you determine which variations of data, conditions, sequences, paths, boundaries, application states, devices, or other contexts to check? You certainly cannot check them all; even a tiny project could be tested indefinitely. In addition, what if you are unfamiliar with the application and don’t get much help? What if the requirements are inadequate?
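To make the scale of that question concrete, here is a minimal sketch in Python with pytest (both my choice, and the `apply_discount` function and its contract are invented for illustration) of what a handful of deliberately chosen positive, negative, and boundary cases might look like:

```python
# Hypothetical example: a discount rule that accepts percentages from 0 to 100.
# The function under test and its contract are invented for illustration.
import pytest

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A few deliberately chosen cases: happy path and both boundaries.
@pytest.mark.parametrize("price, percent, expected", [
    (100.00, 10, 90.00),    # typical positive case
    (100.00, 0, 100.00),    # lower boundary
    (100.00, 100, 0.00),    # upper boundary
])
def test_valid_discounts(price, percent, expected):
    assert apply_discount(price, percent) == expected

# The negatives: values just outside each boundary should be rejected.
@pytest.mark.parametrize("percent", [-1, 101])
def test_invalid_discounts(percent):
    with pytest.raises(ValueError):
        apply_discount(100.00, percent)
```

Even this toy function admits an unbounded number of inputs; the skill lies in deciding which few cases carry the most information.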

Now, what if there are 10 such features to test in two days before the next scheduled release? How do you determine what to check without missing anything important?

A Hard Lesson

Recently at my company there was an issue that caught a lot of attention: a bug had not been fixed fast enough. A support ticket came in from the business reporting that a specific user could not access a report.

At the time, it was my turn for troubleshooting production issues. After studying the entry in our logging system and troubleshooting with different user types, permission levels, and locations for running the report, I finally isolated a reliable way to reproduce the bug. I wrote up the steps to reproduce, assigned the issue to a developer, and moved on with my work.

Meanwhile, the developer quoted a block of code in a comment and assigned the ticket to a project manager. Next, a different project manager assigned it to a different developer. This developer reported that the original code block was not the culprit, committed a fix, and left an abstruse explanation. Finally the ticket got assigned to me to test. Notice the confusion here!

I tried to follow the history of comments as best I could, but I saw no explanation of how the various parties were connected or why the ticket changed hands. In addition, I was alarmed that the developers contradicted each other; it told me there was extra risk with this change.

This report was accessible from multiple locations; tiers of permission levels controlled who could run it and what data it would pull; there were myriad options to select; and tens of thousands of records containing protected health information (PHI) were involved. Looking at the code change didn’t help much.
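To give a feel for why "just test every combination" was not an option, here is a rough sketch of the arithmetic; the dimensions and values below are hypothetical stand-ins, not the actual report's options:

```python
# Hypothetical dimensions for a report like the one in this story.
# The real report had more options than this; the point is the multiplication.
from itertools import product

entry_points = ["dashboard", "patient_page", "admin_menu"]
permission_tiers = ["viewer", "analyst", "manager", "admin"]
date_ranges = ["day", "week", "month", "quarter", "year"]
filters = ["none", "by_location", "by_provider", "by_payer"]

combinations = list(product(entry_points, permission_tiers, date_ranges, filters))
print(len(combinations))  # 3 * 4 * 5 * 4 = 240 paths through just four dimensions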

I was stressed with my other pressing work and no one had expressed any urgency on the ticket, so I decided to come back to it before the next scheduled release. I didn’t have the mental energy to figure out the change, create a test plan, and try to make sure I had all those bases covered.

A few days later, everyone wanted to know why it hadn’t been patched to production—IT management, the business management, even the CTO. The affected user happened to be a prominent figure for our client, and they were not happy.

We had a retrospective soon after, and while preparing for it, it hit me: I had avoided the very complexity, questioning, and investigation that are inherent to my role. My team needed me to disperse the fog of confusion and give everyone the information they needed to decide whether or not to release. It is my responsibility to interrogate the product and determine whether I have gathered sufficient evidence to ascertain the level of risk to our users. I am the X-ray vision.

What Testing Is

As a result of this lesson, it dawned on me like never before that our role as testers is truly unique, is difficult, and requires facing confusion head-on. Testing is about figuring things out: finding the knowledge, grounded in evidence, necessary to make an informed decision about a product. We do not aim to confirm others’ version of the truth; we must find the real truth, replacing theories with reality.

We are the X-ray vision, finding the underlying structures and uncovering potential risk to our clients. We must deduce what questions to ask, determine what steps are necessary to answer them, execute those steps, interpret the results, use them to modify our strategy, and repeat—all under pressure, facing a deadline, with conflicting priorities, amidst massive amounts of complexity, while navigating delays and still producing results in a timely manner without missing anything important.

Success Feels Good

Another time, we had a project to replace legacy code with a new design. All the functionality needed to remain exactly the same. Thankfully, the project manager, database admin, and developer had already worked together to define requirements before I joined the project. Everything seemed to go smoothly, and the requirements were checked off and verified … until I uncovered some functionality not even mentioned in the requirements. Then it happened again. And again.

Pretty soon, this simple project that should have been tested in a couple of hours ended up taking several days for bug fixes. This was nobody’s fault. What occurred was simply the result of a functional team using their X-ray vision to find information about a product.

When you open a ticket to test, what do you know? You may have existing knowledge of that part of the software. You have the claims made on the ticket. You might be able to ask people questions. Finally, you have your experience and skill. With these resources, like the gear of an explorer starting an adventure, you begin your testing journey. Your goal is to find true knowledge about the state of the product.

Where Automation Comes In

What about automation? An existing suite of automated checks could not have helped me solve the problems above. For the report, we couldn’t have known ahead of time to target this relatively benign area with a slew of automated checks for every combination of permissions and filtering logic while other, more important areas were drawing our resources. For the legacy project, not only was most of the testing done in a legacy app, but even if we had automated all the requirements to the fullest extent, automation can never discover new workflows.

The important skill in testing is not automating test cases. You can automate thousands of checks and not be any better off if you haven’t first done the difficult work of testing: questioning, investigating, and learning. Within the context of testing, automation can play a part, but only a part. It can be effective only when you become experienced at asking and answering the right questions.
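As a hypothetical illustration (the function, role names, and row count below are invented stand-ins, not the actual application), an automated check for the report in the earlier story might look like this, and it shows both what automation buys you and what it cannot:

```python
# A minimal automated check: it re-asks one question we already knew to ask.
# run_report is a stand-in for however the application would actually be driven.
def run_report(user_role, location):
    # ... call the application under test here ...
    return {"status": "ok", "rows": 1423}

def test_manager_can_run_report_from_dashboard():
    result = run_report(user_role="manager", location="dashboard")
    assert result["status"] == "ok"
    assert result["rows"] > 0

# This check will faithfully confirm the same expectation on every run,
# but it will never notice the permission tier nobody thought to ask about,
# or the workflow that was missing from the requirements entirely.
```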

User Comments

John Finn:

That was a very good article capturing the reality of testing in the real world.  A good tester is a problem finder as well as a problem solver making that tester far more valuable than any automation solution.  Just ask dev teams.  A good tester is highly valued by a dev team and deemed irreplaceable.  When was the last time you heard a dev team saying that about an automation suite?  The dev team understands that a good tester is a force multiplier for their team.  Having super powers, like X-ray vision, is a great way to describe that extra something that a good tester brings.

April 28, 2020 - 3:32pm
