Many testers, particularly those working from detailed test instructions, tend to minimize the information they record while testing. What they note can range from the variables or values they start and end with down to simply noting that the test “passed.” The challenge comes when problems are found later, possibly after the software is in production. How do we remember what we did, and when? What records do we have to refer to? How do we, as testers, answer the question “How was this tested?”
How Was This Tested?
That is one question every tester will get asked at some point in his or her career. Sometimes it comes across as “Wow! That is fantastic!” But sometimes it comes across as “Why did you not find this problem?”
Of course, the difference between a tester feeling complimented and a tester getting defensive comes down to the tone of voice. Asking “How was this tested?” can lead to an informative discussion where everyone learns something, or it can set off a confrontation, particularly when the tester already feels frustrated that a bug was missed and got through to production.
So when problems are found in production, what information are we, the testers and the other participants and stakeholders in the project, going to want and need about how the software was tested?
Many testers working from a script operate under the belief that if a test “passes,” the only thing that really needs to be recorded is that it passed. If there are problems, the test is marked “failed” and at least some level of information is recorded.
The Importance of Artifacts
Many people involved in software center their understanding of testing on test artifacts. Test plans, test cases, and scripts tend to dominate their discussions of testing. What many fail to understand is that these artifacts are not testing. They are models representing how testing can be done.
The most important artifacts are the records kept during testing. Test plans and strategy documents describe what we believe testing should look like. Test cases and scripts prepared in advance describe how we believe the scenarios that need to be tested will be approached.
It is the information about the actual testing, the execution of the tests, that is of greatest interest.
Build a Body of Testing Evidence
When working with test teams that are unsure why they should track activities and observations, I often use this analogy: when taking notes on testing, testers need to create enough evidence of having tested the software that it would hold up in a court of law.
What do I mean by that?
Consider this: Rigorous testing requires tracking what was done and what was observed. Part of this involves taking adequate notes so you or anyone else can recreate and understand what was done weeks after testing the feature.
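To make that concrete, here is a minimal sketch of what such a record might capture, assuming notes are appended to a simple JSON-lines file. The helper name record_step and the field names are purely illustrative assumptions, not a format this article prescribes; the point is only that each note ties a timestamp to what was done, what was observed, and the values involved.

```python
import json
import datetime


def record_step(log_path, action, observation, data=None):
    """Append one timestamped note: what was done and what was observed."""
    entry = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "action": action,            # what was done
        "observation": observation,  # what was observed
        "data": data or {},          # starting/ending values, build, environment
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")


# Example note taken during a test session (values are hypothetical):
record_step(
    "session-notes.jsonl",
    action="Submitted order form with quantity=0",
    observation="Error message shown; no order created",
    data={"build": "2.4.1-rc3", "browser": "Firefox 128"},
)
```

Whether the notes live in a file like this, a wiki page, or a paper notebook matters far less than that they exist and can be read back weeks later.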
All of us are under pressure to get things done quickly. No matter what environment software is being developed in, the actual hands-on testing of the feature or function always seems to be under some form of time pressure. The challenge with keeping rigorous notes and records is the time it takes, and not letting that note-taking get in the way of finishing testing on time.
Here’s how I try to address that.