Oracles, Failures, Models, and Sins

The Ten Commandments of Software Testing (make that nine), Part 2
Summary:
This is part 2 of a three-part series covering nine "commandments" that can serve as a guide to improving your life as a test engineer. In response to numerous inquiries about these tenets on James' Web site, this column provides his explanations behind the "Thou Shalts." They are intended as useful suggestions as well as fun. Read on and see what you think.

Here is the full list of commandments as they originally appeared on my Web site:

  1. Thou shalt pummel thy app with multitudes of input
  2. Thou shalt covet thy neighbor's apps
  3. Thou shalt seek thee out the wise oracle
  4. Thou shalt not worship nonreproducible failures
  5. Thou shalt honor thy model and automation
  6. Thou shalt hold thy developers' sins against them
  7. Thou shalt revel in app murder (celebrate the BSOD!)
  8. Thou shalt keep holy the Sabbath (release)
  9. Thou shalt covet thy developer's source code

You can click the link at the end of this column to read part 1, covering commandments 1 and 2, from last month. And now, here are my interpretations of numbers 3 through 6.

3. Thou Shalt Seek Thee Out the Wise Oracle
We all know that there are at least two parts to testing: first we apply inputs, then we check the results. When we check, we are verifying that the software did what it was supposed to do with those inputs. Without the ability to verify that this is indeed the case, testing is much less effective.

Testers call this the "oracle problem," in reference to the wise oracle who knows all the answers. Of course, the answer we are interested in is "Did the app do what it was supposed to do when I applied some test?" This requires our oracle to understand intimately what the application is supposed to do given any specific combination of inputs and environmental conditions. Automating the oracle is very hard but worthwhile, not only because the result is a valuable testing tool but also because of the thinking it forces. Whether or not you ultimately succeed in automating it, forcing yourself to think like such an oracle can be more productive than almost anything else you might choose to do.
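To make this concrete, here is a minimal sketch in Python of one classic way to automate an oracle: comparing the app's answer against a trusted reference implementation. The function under test, my_sort, is hypothetical, and Python's built-in sorted() plays the role of the oracle that already knows the right answer for every input.

    import random

    def my_sort(items):
        # Hypothetical function under test; stands in for whatever
        # behavior your oracle needs to judge.
        return sorted(items)  # replace with the real implementation

    def oracle(inputs, actual):
        # The oracle encodes what the app is supposed to do for any
        # input. Here, Python's built-in sorted() acts as a trusted
        # reference implementation that already knows the right answer.
        return actual == sorted(inputs)

    # Pummel the function with random inputs and let the oracle judge
    # each run (commandments 1 and 3 working together).
    for trial in range(1000):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        result = my_sort(list(data))
        assert oracle(data, result), "oracle flagged a failure for input %r" % data

The hard part in real life, of course, is that you rarely have a second implementation lying around; the reference can just as well be a property ("the output is in order and contains exactly the input's elements") rather than a full duplicate of the app.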

4. Thou Shalt Not Worship Nonreproducible Failures
We've all been here, haven't we? You see a bug, usually a good bug, and then it won't reproduce. The better the bug, the worse you feel about it. I have seen many a good tester waste hours and even days trying to reproduce a bug they saw only once.

The effort to reproduce such a bug is often valiant, but without the proper tools it can be a waste of time, and the real problem is that the time is wasted without the tester even realizing it. I once had a tester spend an entire day trying to remember the reproduction steps of a crashing bug, with no success. I would have preferred that he spend his time in better ways. I understand the frustration as well as any tester, but the pursuit of such a bug is often time not well spent.

The moral of this commandment is twofold. First, try your best to be ever alert and remember (or record) the sequence of actions you take against the software, along with the application's responses. Second, consider using debugger-class tools that can track your actions and the state of the software. This takes much of the guesswork out of reproducing bugs and prevents otherwise good testers from breaking this commandment.
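As an illustration of the recording idea, here is a minimal sketch that assumes a hypothetical app object whose methods represent user actions. Every action and response is logged as it happens, so a once-in-a-lifetime crash leaves behind the exact sequence needed to reproduce it.

    import json
    import logging

    logging.basicConfig(filename="session.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    class ActionRecorder:
        # Wraps the application under test and records every action and
        # response, so a once-seen failure leaves a replayable trail.
        def __init__(self, app):
            self.app = app      # hypothetical object under test
            self.trace = []     # in-memory copy of the session

        def do(self, action, *args):
            entry = {"action": action, "args": list(args)}
            try:
                entry["response"] = getattr(self.app, action)(*args)
            except Exception as exc:
                entry["response"] = "EXCEPTION: %r" % exc
                raise           # let the failure surface; the trace is saved
            finally:
                self.trace.append(entry)
                logging.info(json.dumps(entry, default=str))
            return entry["response"]

    # Usage, assuming a hypothetical app object with open/click methods:
    #   rec = ActionRecorder(app)
    #   rec.do("open", "report.doc")
    #   rec.do("click", "File", "Save As")
    # If the app crashes, session.log holds the exact sequence to replay.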

5. Thou Shalt Honor Thy Model and Automation
Commandment one was about the importance of random testing, with the emphasis on random. This commandment is about intelligent random testing, with the emphasis on intelligent. When intelligence meets automation, the result is called model-based testing. Get used to the term, because it is the automation technology of the future.
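To show what intelligent random testing looks like in practice, here is a minimal sketch of a model-based test generator. The toy text-editor model below is purely illustrative: it lists which inputs are legal in which states, and a random walk over it produces test sequences that are legal by construction, with the model's predicted state after each step serving as a built-in oracle.

    import random

    # A toy behavioral model of a text editor: each entry maps
    # (current state, input) to the state the app should end up in.
    # The states and inputs here are illustrative, not from any real tool.
    MODEL = {
        ("no_document", "new_file"): "empty_document",
        ("empty_document", "type_text"): "dirty_document",
        ("empty_document", "close"): "no_document",
        ("dirty_document", "type_text"): "dirty_document",
        ("dirty_document", "save"): "clean_document",
        ("clean_document", "type_text"): "dirty_document",
        ("clean_document", "close"): "no_document",
    }

    def generate_test(length=10):
        # Random walk over the model: every sequence it produces is
        # legal by construction, and the model predicts the expected
        # state after each step, giving us a built-in oracle.
        state, steps = "no_document", []
        for _ in range(length):
            choices = [(inp, nxt) for (s, inp), nxt in MODEL.items() if s == state]
            inp, state = random.choice(choices)
            steps.append((inp, state))
        return steps

    for inp, expected in generate_test():
        print("apply %-10s -> expect app in state %s" % (inp, expected))

The payoff is that the randomness is now constrained by knowledge of the application: the generator never wanders into impossible input sequences, and every step carries an expected result to check against.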

Software models such as objects, black boxes, or structure diagrams help us to understand software. Testing models help us understand testing. A testing

About the author


James A. Whittaker is a technology executive with a career that spans academia, start-ups, and industry. He was an early thought leader in model-based testing where his Ph.D. dissertation became a standard reference on the subject. While a professor at the Florida Institute of Technology, James founded the world's largest academic software testing research center and helped make testing a degree track for undergraduates. He wrote How to Break Software, How to Break Software Security (with Hugh Thompson), and How to Break Web Software (with Mike Andrews). While at Microsoft, James transformed many of his testing ideas into tools and techniques for developers and testers, and wrote the book Exploratory Software Testing. For three years he worked for Google as an engineering director, where he co-wrote How Google Tests Software (with Jason Arbon and Jeff Carollo). He's currently a development manager at Microsoft where he is busy re-inventing the web.
