David Coutts explores the similarities between software testing and the scientific method, and concludes by proposing a new definition for the software testing method. David was inspired to write this article after reading Messrs Kaner, Pettichord, and Bach, who have each hinted at the usefulness of the scientific method to software testing, a subject David felt was worth exploring in more detail.
Though I am open to the ideas expressed in the Context-Driven School of Testing and Bach's Rapid Testing in particular, my own background in software testing (and development before that) is in a more traditional approach. However, for the purposes of this article I will confine myself to traditional testing, which I define as testing code against specifications via repeatable, scripted test cases.
Testing makes science unique, and it makes testers unique too! Physicist James Trefil (2002), in the introduction to Cassell's Laws Of Nature, stresses the importance of testing to the scientific method:
"This reliance on testing is, I think, exactly what makes science different from other forms of intellectual endeavour. To state the difference in its most blunt and unfashionable form: In science there are right answers."
Although Trefil clearly did not intend to insult software testers by implying that testing is only relevant to science, I've taken this as a bit of a challenge, like a gauntlet slapped across the face of software testers everywhere. Testing is integral to another form of intellectual endeavor, namely software development. But software testing is not generally regarded as a science, nor does it typically feature strongly in computer science. So, what is the relationship between the scientific method and software testing?
Just as in science, in software testing there are right answers. Either the software satisfies the requirements or it does not. Either each test case passes or it fails. Through such small incremental black-and-white steps do software testers approach the "truth" about software. In the end, either the software is judged to work, or it does not (and has documented outstanding bugs). Often it is not as black and white as we would like ("What requirements?" or "That's a change request, not a defect!"), which only makes our efforts less scientific and thus less effective. Shades of grey require negotiation, clarification, and definition. The aim is to have as few vague, grey shadows cast over your software as possible.
Test Cases And Experiments
Does that mean that we software testers are scientists? In a way, I think we are. We are the experimental scientists in the field of software development. Yet scientists do not regard themselves as infallible, nor should software testers. We are just as human as developers, and we can make mistakes too. Insufficient test coverage, poorly designed test cases or test data, a badly prepared or supported test environment--these are just some of the common human challenges in software testing (and, I suspect, in science).
Kaner, Falk and Nguyen (Testing Computer Software, 1999) refer to test cases as "miniature experiments" and argue that a good tester requires "An empirical frame of reference, rather than a theoretical one."
Observation is certainly a key asset for a tester, though I will argue that science (and testing) advances by testing theories and eliminating those that fail, a process known as falsificationism.
Kaner draws upon falsificationism and the value of experiments in What Is A Good Test Case?: "Good experiments involve risky predictions. The theory predicts something that people would not expect to be true."
If all a tester did was design test cases that he or she knew would always pass, little value would come of the effort. A specification is written, a developer codes against that specification, and everyone at least hopes (if they do not expect) that the code works as specified. The tester's job is to attempt to pop that bubble of expert belief and blind faith. (See figure below.)
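The difference between a "safe" test and a "risky" one can be sketched in code. The example below is entirely hypothetical (the function leap_year and its tests are mine, not Kaner's): the safe assertion confirms what everyone already expects, while the risky assertions target the century-year boundary where a naive "divisible by four" implementation would fail. The risky tests are the miniature experiments most likely to falsify the code.

```python
def leap_year(year):
    """Implementation under test: the full Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A "safe" test case: everyone expects this to pass, so it tells us little.
assert leap_year(2024) is True

# "Risky" test cases: a naive "divisible by 4" rule would get these wrong,
# so they are the experiments most likely to falsify the implementation.
assert leap_year(1900) is False   # century year, not a leap year
assert leap_year(2000) is True    # divisible by 400, so it is a leap year
```

If the implementation had simply returned `year % 4 == 0`, the safe test would still pass while the 1900 case would expose the defect, which is exactly the kind of risky prediction Kaner describes.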
Again on the topic of test cases, Kaner wrote, "In my view, a