“How Was This Tested?” Providing Evidence of Your Testing

[article]
Summary:
Many testers have a tendency to minimize the information they record when testing. The challenge comes when problems are found later, possibly after the software is in production. How do we remember what we did, and when? What records do we have to refer to? How do we, as testers, answer the question “How was this tested?”

Many testers, particularly when working off detailed test instructions, have a tendency to minimize the information they record when testing. This can vary from noting what variables or values they start with and what they end with, to simply noting the test “passed.” The challenge comes when problems are found later, possibly after the software is in production. How do we remember what we did, and when? What records do we have to refer to? How do we, as testers, answer the question “How was this tested?”

How Was This Tested?

That is one question every tester will get asked at some point in his or her career. Sometimes it is asked in a manner of “Wow! That is fantastic!” But sometimes, it is asked in a manner of “Why did you not find this problem?”

Of course, the difference between a tester feeling complimented and a tester getting defensive comes down to the tone of voice. Asking, “How was this tested?” can lead to an informative discussion where everyone learns something, or it can set off a confrontation—particularly when the tester already feels frustrated that a bug was missed and got through to production.

So when there are problems found in production, what information are we—testers and other participants and stakeholders in the project—going to want and need about how the software was tested?

Many testers working from a script operate under the belief that if a test “passes,” the only thing that really needs to be recorded is that it passed. If there are problems, then the test is “failed” and at least some level of information is recorded.

The Importance of Artifacts

Many people involved in software center their understanding of testing on test artifacts. Test plans, test cases, and scripts tend to dominate their discussions of testing. What many fail to understand is that these artifacts are not testing. They are models representing how testing can be done.

The most important set of artifacts is the records kept during testing. Test plans and strategy documents describe what we believe testing should look like. Test cases and scripts prepared in advance describe how we believe the scenarios that need to be tested will be approached.

It is the information around the actual testing—the execution of the tests—that is of greatest interest.

Build a Body of Testing Evidence

When working with test teams that are unsure why they should track their activities and observations, I often use this analogy: when taking notes on testing, testers need to create enough evidence of having tested the software that it would hold up in a court of law.

What do I mean by that?

Consider this: Rigorous testing requires tracking what was done and what was observed. Part of this involves taking adequate notes so you or anyone else can recreate and understand what was done weeks after testing the feature.
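
To show how lightweight that kind of record keeping can be, here is a minimal sketch of a timestamped session-note helper, written in Python purely as an illustration. The log location, file naming, and note categories (SETUP, ACTION, OBSERVED, BUG) are assumptions made for the example, not a prescribed format.

```python
#!/usr/bin/env python3
"""Illustrative only: a tiny helper that appends timestamped notes to a
per-session log so a tester can later answer "what did I do, and when?"
The layout and note kinds below are assumptions, not a standard."""

import sys
from datetime import datetime
from pathlib import Path

LOG_DIR = Path("test-session-logs")  # hypothetical location for session logs


def log_note(session: str, kind: str, text: str) -> Path:
    """Append one timestamped note, e.g. SETUP, ACTION, OBSERVED, or BUG."""
    LOG_DIR.mkdir(exist_ok=True)
    log_file = LOG_DIR / f"{session}.md"
    stamp = datetime.now().isoformat(timespec="seconds")
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {kind}: {text}\n")
    return log_file


if __name__ == "__main__":
    # Usage: python note.py <session-id> <kind> <free-form text...>
    if len(sys.argv) < 4:
        sys.exit("usage: note.py <session-id> <kind> <note text>")
    session_id, kind, *words = sys.argv[1:]
    print(log_note(session_id, kind.upper(), " ".join(words)))
```

Invoked from a shell alias or hotkey each time the build, data, or environment changes, an append-only record like this costs seconds per entry, yet weeks later it can still answer what was exercised, with what inputs, and what was observed.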

All of us are under pressure to get things done quickly. No matter what environment software is being developed in, the actual hands-on testing of the feature or function always seems to be under some form of time pressure. The challenge of keeping rigorous notes and records is the time it takes, and not letting that note-taking get in the way of finishing testing on time.

Here’s how I try to address that.

User Comments

8 comments
Nicholas Straw

Is the belief that all defects should be captured by dynamic testing? I agree with the author about documentation to an extent, but I feel funny about the tone. Should we document success as heavily as failure? I would not think to heavily record a test that I found to be successful. When a defect is detected in a later cycle or in production, we often think about preventing its escape and assume that if we had tested better, it would have been detected. This is why I ask the question at the open. If we treat quality as something that can be tested into a solution, we will continue to need to build the cocoon of evidence suggested by Mr. Walen. Quality is a value that should be infused in the work we evaluate; it should be baked in, not slathered on top.

June 16, 2015 - 2:10pm
Kieron Horide

I agree with Nicholas that the tone suggests something of the context Pete is working under, and thought should be given to how much to capture. The author (Pete) does give some useful options for recording, where it's warranted.

That "testers need to create enough evidence of having tested the software that it would hold up in a court of law" is a very high bar, because keeping detailed records has the potential of doubling manual test execution effort. I would only set the bar that high where the potential benefits of doing so merit the expense, or you could end up in a court of law, something to work through with the PM or Product Owner so it happens only sometimes.

If people are asking about how something was tested, then look for the drivers behind the question. This is not so as to avoid accountability, but to steer the conversation towards useful action. Are testers having to defend themselves often? Is there a perception that testers are gate-keepers? Does management have a problem seeing the value added by testing? Are some testers not being as thorough as they should be, and need peer review or performance management? Relationship and expectation management might be required in order to set the right level of recording.

If there are lessons to learn so we all can improve what we do, what level of recording is needed to assist in learning those lessons? Is the recording level enough to give repeatability of manual tests, assuming an expert user?

Thinking these things through and getting the level of recording right would ensure testing effort is being deployed where it is most useful: building working software.

June 16, 2015 - 8:58pm
Dave Douglas

I've often thought about how best to maintain an audit trail of what and how I have been testing. I haven't been able to find good open source or free tools that help with this. I've found screen recorders, of course, but they produce enormous files really quickly. Does anyone have good tools that they've used successfully for this? It would also help when you stumble across a difficult-to-reproduce bug but can't remember exactly how you got it to happen. If anyone has a particular tool and would be willing to share how it worked out, that would be great. Thanks!

June 17, 2015 - 8:31am
Jon Hagar

How much evidence to leave behind is context dependent and not always the same from project to project, or even between stages inside a project. If I am testing my simple home web page, the evidence can be minimal, since the risks with the pages are limited. If I am testing a life-critical software system, then I may very easily find myself in court, and evidence becomes very important. The question of testing evidence is similar to the question of how many comments in the code is good enough: those of us who have coded with less commenting often realized later, during maintenance efforts, that we needed more. If there were simple pat answers to what Pete is talking about, companies would not pay developers and testers very much money.

As to the issue of tools, I have used many. I have used data loggers and recording systems that monitored and recorded almost everything (terabytes of data kept forever). I have also used capture-playback tools just to capture exploratory testing, so that if a bug was found I could "repeat" it, and when no bug was found I trashed the scripts (no maintenance of fragile scripts). And at the other extreme, I have had just a few handwritten notes on a charter statement to remind the tester what we were poking at.

There is no best or right.

June 17, 2015 - 4:07pm
Richard Paul

I respect the author and enjoyed the article, but let me play devil's advocate for a moment...

I disagree with such copious note taking for the following reasons:

Time lost... It would be great if I had a "secretary," pair tester, or camera recording what is done (and I've heard of all of these), but we have entirely too much to test to slow down enough to record this proposed level of detail ("will hold up in court"). Yes, we need more resources, but small companies may not have them, and I'm not sure a large company should pay higher wages for time spent note-taking.

Unnecessary, generally... I do NOT have a good memory for exactly how I tested something, but my testing follows an approach that lets me guess and almost always re-create my steps later. When my exploratory testing has strayed from this approach, I note those few things I did differently, which usually makes the replication steps discoverable. There have been times in my career when I put exact steps on every task, but I no longer find that helpful. How often are they referred to again?

Wrong workplace culture/attitude... I was reading the Wikipedia entry for C.Y.A. a few hours before reading this article, and perhaps that was the "funny" tone Nicholas referred to. I would not like working at a place that wants to place blame on the tester. My mistakes lead to self-correction as I take responsibility. Perhaps it would be different if the mistake had caused a plane to crash, but if it just meant that a person had to do a workaround for a day or two, it may not be so serious. (That is, I have not worked in the defense or aviation industries, but we did have narrow tolerances for high quality in my medical industry jobs.)

This is taking too long to write up, so if these reasons prompt discussion, others can add to the list.

June 19, 2015 - 4:43pm
Eric Beggs

Being fresh to testing (just over 30 days new), I have found that I can make the most sense out of what I am doing by creating OneNote pages for every test I am working on. In these I put copies of all documents, including test plans, test cases, and email threads. I combine this with a real-time, step-by-step (or play-by-play, if you prefer) recount of the steps I take, using the test cases as my guide, along with the results of every step; that falls right in line with what Mr. Walen is saying. To comment on what Mr. Paul said about not having the time to do this: in my opinion, that objection is unfounded.

Knowing, not guessing, exactly what steps occurred and in what order seems to me to be invaluable, not only for a retrospective view of the tests run, but for learning and knowing exactly what is occurring during testing. Again, I'm new to testing, not to IT, but I would rather have the evidence of my tests and never need it than need it and not be able to provide it.

June 25, 2015 - 12:43pm
G Gaylard

Yes, I think the real problem here is a cultural one: "There's a bug in production! How did testing miss that?" ALL non-trivial software has bugs, so why do testers have to explain why software has bugs? Yes, if the bug is serious and/or should have been detected by competent testing, then the relevant testers ought to have some explanation if asked. But surely our main priority ought to be fixing it, rather than trying to find out whose fault it was. A typical project these days is under enormous time pressure: if testers are bogged down in documentation, that reduces the time spent actually testing, which ironically means a bigger risk of missing bugs.

June 26, 2015 - 7:51am
Patrick Higgins

I'm not sure this is a realistic solution for most testers in the business world. In my experience, most testing teams are much smaller than their development teams and are buried in testing assignments. Being able to pair-test items for note taking is impractical. I do agree that it's the tester's responsibility to record what has been tested as they test. I would argue that as testers we should be taking quick notes (even with screenshots) so we can go back and explain in detail how something was tested if we are questioned after testing.

July 15, 2015 - 9:21pm
