The Science of Catching Hidden Bugs

Summary:

Bugs that make a system crash are the most dramatic, but they may not be the most interesting. Subtle bugs hide where you don't expect them, causing systems to mislead users with incorrect results. Using scientific inquiry, you can expose these deceptive ne'er-do-wells lurking inconspicuously under the covers. Elisabeth Hendrickson offers good examples and pointers to using this investigative method.

Some of my all-time favorite bugs are spectacular crashes. They're fun to watch and rewarding to find. But when software crashes, it's obvious that something has gone wrong. Much more difficult to catch—and often more important—are subtle bugs that mislead users.

I'm reminded of one of the earliest database queries I created. The SQL looked right. The data it returned looked right. But when I started using the data, I noticed several inconsistencies that made me suspicious. Sure enough, I had made a mistake in my query that resulted in real-looking, bogus data being returned. Since I created this query for my own purposes, it was easy for me to spot the problem. But what if it had been part of a bigger system? How could another tester have caught my error? I believe that the answer lies in the scientific method of inquiry.
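As a hypothetical illustration (this is not the original query), consider a report that joins orders to payments but matches on the customer instead of the order. Every row it returns looks plausible; the totals are simply wrong. Here is a runnable sketch in Python with SQLite:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    CREATE TABLE payments (id INTEGER, order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 100, 50.0), (2, 100, 75.0);
    INSERT INTO payments VALUES (1, 1, 100, 50.0), (2, 2, 100, 75.0);
""")

# Buggy join: matching on customer_id alone pairs each order with every
# payment from that customer, so the sums come back inflated but believable.
buggy = conn.execute("""
    SELECT o.id, SUM(p.amount)
    FROM orders o JOIN payments p ON p.customer_id = o.customer_id
    GROUP BY o.id
""").fetchall()

# Correct join: match each payment to its own order.
fixed = conn.execute("""
    SELECT o.id, SUM(p.amount)
    FROM orders o JOIN payments p ON p.order_id = o.id
    GROUP BY o.id
""").fetchall()

print(buggy)  # [(1, 125.0), (2, 125.0)] -- real-looking, bogus data
print(fixed)  # [(1, 50.0), (2, 75.0)]

Nothing here crashes, and nothing looks obviously broken, which is exactly what makes this class of bug so hard to catch.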

Scientists are an inherently curious bunch. They formulate questions, then design experiments to find answers. Those answers often evoke additional questions. The search for knowledge goes beyond a simple "pass" or "fail" test result. "I don't know yet" becomes a third possibility.

Designing tests as experiments takes time and in-depth analysis, but the potential rewards are enormous. What's it worth to discover that the software produces inaccurate results? In a complex system designed to convert raw data into usable knowledge, it can mean the difference between happy customers and lawsuits.

A Series of Experiments
Let's take an example. If you were testing software that produces random numbers, how could you verify that the numbers are truly random? The facile answer is that you gather a large sample of generated numbers and analyze it both for the distribution of the numbers and for patterns. I thought that answer was sufficient until I began experimenting.

I created a program that produces a random integer between 0 and 9 (MyRand 1.0). You click a "Randomize!" button and it gives you a number, just like rolling a 10-sided die. I decided to test my creation using my standard answer of how to test a random number generator.

Experiment #1: Collect a large sample and analyze it for distribution and patterns.

I gathered a sample of 10,000 generated numbers. When I charted the results in a spreadsheet, I didn't see any patterns. However, when I counted the number of times each integer came up, I found that some numbers occurred twice as often as others did. Oops. Not very random.
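Here is a minimal sketch of Experiment #1 in Python, using a hypothetical my_rand() as a stand-in for MyRand 1.0 (the article doesn't show its code):

from collections import Counter
import random

def my_rand():
    """Stand-in for the MyRand generator under test."""
    return random.randint(0, 9)

# Experiment #1: gather 10,000 numbers and tally each digit.
sample = [my_rand() for _ in range(10_000)]
counts = Counter(sample)

# A fair generator should land near 1,000 occurrences per digit.
for digit in range(10):
    print(digit, counts[digit])

Charting the counts (or simply printing them, as here) makes a skewed distribution jump out even when the raw stream of numbers shows no obvious pattern.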

Did MyRand 1.0 pass or fail? I decided I didn't know yet. Perhaps the numbers that came up more often on the first test run would come up less often on the second.

Experiment #2: Re-run Experiment #1 and compare the results to see if the distribution changed.

The result of Experiment #2 was a definitive "fail." If MyRand 1.0 were a Las Vegas betting game, I'd bet on the number 9; it came up a lot. I went back to the code and discovered my mistake. I fixed the bug, producing MyRand 2.0, then re-ran my first two experiments.
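For readers who want to script this comparison, the sketch below extends the hypothetical my_rand() stand-in from above. It tallies two independent runs and scores each against a fair die using Pearson's chi-squared statistic, one standard way to quantify "some numbers occurred twice as often as others" (the article doesn't say which analysis was actually used):

from collections import Counter
import random

def my_rand():
    """Stand-in for the MyRand generator under test."""
    return random.randint(0, 9)

def tally(n=10_000):
    """Draw n numbers and return counts for digits 0 through 9."""
    counts = Counter(my_rand() for _ in range(n))
    return [counts[d] for d in range(10)]

def chi_squared(observed, expected):
    """Pearson's chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

expected = [1_000] * 10  # a fair 10-sided die over 10,000 rolls

# Experiment #2: two independent runs. With 9 degrees of freedom, a
# statistic above ~16.92 rejects uniformity at the 5% level.
for name in ("run 1", "run 2"):
    run = tally()
    print(name, run, "chi-squared =", round(chi_squared(run, expected), 2))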

About the author

Elisabeth Hendrickson

The founder and president of Quality Tree Software, Inc., Elisabeth Hendrickson wrote her first line of code in 1980. Moments later, she found her first bug. Since then Elisabeth has held positions as a tester, developer, manager, and quality engineering director in companies ranging from small startups to multi-national enterprises. A member of the agile community since 2003, Elisabeth has served on the board of directors of the Agile Alliance and is a co-organizer of the Agile Alliance Functional Testing Tools program. She now splits her time between teaching, speaking, writing, and working on agile teams with test-infected programmers who value her obsession with testing. Elisabeth blogs at testobsessed.com and can be found on Twitter as @testobsessed.
