“How Was This Tested?” Providing Evidence of Your Testing

Summary:
Many testers have a tendency to minimize the information they record when testing. The challenge comes when problems are found later, possibly after the software is in production. How do we remember what we did, and when? What records do we have to refer to? How do we, as testers, answer the question “How was this tested?”

Many testers, particularly when working off detailed test instructions, have a tendency to minimize the information they record when testing. This can vary from noting what variables or values they start with and what they end with, to simply noting the test “passed.” The challenge comes when problems are found later, possibly after the software is in production. How do we remember what we did, and when? What records do we have to refer to? How do we, as testers, answer the question “How was this tested?”

How Was This Tested?

That is one question every tester will get asked at some point in his or her career. Sometimes it is asked in the spirit of “Wow! That is fantastic!” But sometimes it is asked in the spirit of “Why did you not find this problem?”

Of course, the difference between a tester feeling complimented and a tester getting defensive comes down to the tone of voice. Asking, “How was this tested?” can lead to an informative discussion where everyone learns something, or it can set off a confrontation—particularly when the tester already feels frustrated that a bug was missed and got through to production.

So when there are problems found in production, what information are we—testers and other participants and stakeholders in the project—going to want and need about how the software was tested?

Many testers working from a script operate under the belief that if a test “passes,” the only thing that really needs to be recorded is that it passed. If there are problems, then the test is “failed” and at least some level of information is recorded.

The Importance of Artifacts

Many involved in software focus their understanding of testing around test artifacts. The concepts of test plans, test cases, and scripts tend to dominate their discussions of testing. What many fail to understand is that these artifacts are not testing. They are models representing how testing can be done.

The most important set of artifacts are the records kept during testing. Test plans and strategy documents describe what we believe testing should look like. Test cases and scripts prepared in advance describe how we believe the scenarios that need to be tested will be approached.

It is the information around the actual testing—the execution of the tests—that is of greatest interest.

Build a Body of Testing Evidence

When working with test teams that are unsure why they should track their activities and observations, I often use the analogy that when taking notes on testing, testers need to create enough evidence of having tested the software that it would hold up in a court of law.

What do I mean by that?

Consider this: Rigorous testing requires tracking what was done and what was observed. Part of this involves taking adequate notes so you or anyone else can recreate and understand what was done weeks after testing the feature.

All of us are under pressure to get things done quickly. No matter what environment software is being developed in, the actual hands-on testing of the feature or function always seems to be under some form of time pressure. The challenge of keeping rigorous notes and records is the time it takes—and not letting that note-taking get in the way of finishing testing on time.

Here’s how I try to address that.

Don’t Go It Alone

Have a partner or paired tester working with you, making notes of where and how you navigate, what values you enter, what options you select, even how long it takes between entering data or clicking on the dropdown menu. This partner might notice things you miss—responses or results that you are not paying attention to because you are focused on something else—and these things might be worth investigating later.

The person can also serve as a sounding board to bounce ideas off while working through a scenario. If a behavior is noted as unusual but less important than what you are working through, the partner can help you remember that this could be an additional path to exercise in the next iteration. Another set of eyes on the screen can help you “stay honest” and focus on what needs to be done now.

Record Everything

A common phrase among medical professionals is “If it isn’t written down, it didn’t happen.” Similarly, eyewitness testimony in criminal cases is increasingly coming under critical review because people don’t remember things as accurately as we think we do. Memories are flawed.

Write down everything, as you do it and as it happens. Anything that seems obvious now might not be so obvious in a week or two. Make a note of it.

There are also screen-recording tools to keep track of what the user does and what happens in response to every action: every move, the screen displays, and the messages. This gives you a fairly straightforward way of recording what happened when you tested.

There are other options as well. Tools to facilitate taking notes while testing are available, so you can have the application under test running in one window and the note-taking tool open in another. Screen snaps can be copied into the notes so the tester can show precisely what happened.
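
As a rough illustration of how lightweight this can be, here is a minimal sketch in Python, using pytest and Selenium, of logging timestamped notes and screen snaps while a scenario is exercised. The evidence folder, fixture, helper name, and URL are assumptions made for the example, not a specific tool this article recommends.

```python
# Minimal sketch: append timestamped notes and screen snaps per test.
# The folder layout, fixture, and URL below are illustrative assumptions.
import datetime
import pathlib

import pytest
from selenium import webdriver

EVIDENCE_DIR = pathlib.Path("evidence")  # assumed location for notes and screen snaps


@pytest.fixture
def browser():
    driver = webdriver.Chrome()  # any WebDriver works; Chrome is an assumption
    yield driver
    driver.quit()


def note(test_name, message, driver=None):
    """Append a timestamped note and, optionally, a screen snap for this test."""
    EVIDENCE_DIR.mkdir(exist_ok=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(EVIDENCE_DIR / f"{test_name}.log", "a", encoding="utf-8") as log:
        log.write(f"{stamp}  {message}\n")
    if driver is not None:
        shot = EVIDENCE_DIR / f"{test_name}_{stamp.replace(':', '-')}.png"
        driver.save_screenshot(str(shot))


def test_new_order_page(browser):
    browser.get("https://example.test/orders/new")  # hypothetical application URL
    note("test_new_order_page", "Opened new-order page", browser)
    # ... drive the scenario, calling note() after each significant action ...
```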

Other Forms of Evidence

It’s important not to overlook simple yet easily missed information, such as the build or sprint when the testing was done. The same goes for the version of the database or schema, as those can change. Sometimes when database environments change, there are unexpected consequences that might not show up until later.
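
Where those identifiers are exposed by the environment, capturing them takes only seconds. Below is a minimal sketch, assuming the build, sprint, and schema versions are available as environment variables; the variable names and file location are made up for illustration.

```python
# Minimal sketch: snapshot build, sprint, and schema identifiers at the start of a run.
import datetime
import json
import os


def record_environment(path="evidence/environment.json"):
    """Record the identifiers future readers will ask about."""
    snapshot = {
        "recorded_at": datetime.datetime.now().isoformat(timespec="seconds"),
        "build": os.environ.get("BUILD_NUMBER", "unknown"),          # assumed CI variable
        "sprint": os.environ.get("SPRINT_ID", "unknown"),             # assumed team convention
        "db_schema_version": os.environ.get("DB_SCHEMA_VERSION", "unknown"),
    }
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as out:
        json.dump(snapshot, out, indent=2)
    return snapshot
```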

Depending on the type of software you are testing, you may have evidence you can identify and capture with minimal work on your part, such as logs: application logs, database logs, system logs—as in, logs on the device you are executing the tests on—and host logs on the system host.

These can contain valuable information about what is being tested, as well as information that is not readily apparent to an observer. Both have value to the tester and belong in the records retained as possible evidence of how the testing was done.
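
One low-effort way to retain that evidence is to sweep the logs into a per-run folder when a testing session ends. The sketch below assumes a few typical log locations; substitute whatever your application, database, and host actually write.

```python
# Minimal sketch: copy whichever logs exist into a per-run evidence folder.
import datetime
import pathlib
import shutil

# Assumed log locations; adjust to match your environment.
LOG_SOURCES = [
    pathlib.Path("/var/log/myapp/app.log"),
    pathlib.Path("/var/log/postgresql/postgresql.log"),
    pathlib.Path("/var/log/syslog"),
]


def collect_logs(run_id, dest_root="evidence/logs"):
    """Copy available logs into evidence/logs/<run_id>-<timestamp>/ and return the folder."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = pathlib.Path(dest_root) / f"{run_id}-{stamp}"
    dest.mkdir(parents=True, exist_ok=True)
    for source in LOG_SOURCES:
        if source.exists():
            shutil.copy2(source, dest / source.name)
    return dest
```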

The Question

Why do we need to keep this information? What is the point?

When unexpected results are found a few iterations or builds after this was tested—or in production—the question about how a given feature was tested will almost certainly arise.

In my experience, the true purpose of keeping this evidence is quite simple: It is a gift your current self is giving to your future self. It might be yourself in a few weeks, in a sprint or two, or months later. It might even be someone else who will make use of your gift.

Whoever it is, when something goes wrong and that person has to search for the reason, he or she—or you—will appreciate the effort you put into explaining how the feature was tested.

User Comments

Nicholas Straw

Is the belief that all defects should be captured by dynamic testing? I agree with the author about documentation to an extent, but I feel funny about the tone. Should we document success as heavily as failure? I would not think to heavily record a test that I found to be successful. When a defect is detected in a later cycle or in production, we often think about preventing its escape and assume that if we had tested better, it would have been detected. This is why I ask the question at the opening. If we treat quality as something that can be tested into a solution, we will continue to need to build the cocoon of evidence suggested by Mr. Walen. Quality is a value that should be infused in the work we evaluate; it should be baked in, not slathered on top.

June 16, 2015 - 2:10pm
Kieron Horide

I agree with Nicholas that the tone suggests something of the context Pete is working under, and thought should be given to how much to capture. The author (Pete) does give some useful options for recording, where it's warranted.

That "testers need to create enough evidence of having tested the software that it would hold up in a court of law" is a very high bar, because keeping detailed records has the potential of doubling manual test execution effort. I would only set the bar that high where the potential benefits of doing so merit the expense, or you could end up in a court of law, something to work through with the PM or Product Owner so it happens only sometimes.

If people are asking about how something was tested, then look for the drivers behind the question. This is not so as to avoid accountability, but to steer the conversation towards useful action. Are testers having to defend themselves often? Is there a perception that testers are gate-keepers? Does management have a problem seeing the value added by testing? Are some testers not being as thorough as they should be, and need peer review or performance management? Relationship and expectation management might be required in order to set the right level of recording.

If there are lessons to learn so we all can improve what we do, what level of recording is needed to assist in learning those lessons? Is the recording level enough to give repeatability of manual tests, assuming an expert user?

Thinking these things through and getting the level of recording right would ensure testing effort is being deployed where it is most useful: building working software.

June 16, 2015 - 8:58pm
Dave Douglas

I've often thought about how best to easily maintain an audit of what and how I have been testing. I haven't been able to find good open source or free tools that help with this. I've found screen recorders, of course, but they produce enormous files really quickly. Does anyone have good tools they've used successfully for this? It would also help when you stumble across a difficult-to-reproduce bug but can't remember how exactly you got it to happen. If anyone has a particular tool and would be willing to share how it worked out, that would be great. Thanks!

June 17, 2015 - 8:31am
Jon Hagar

How much evidence to leave behind is context-dependent and not always the same from project to project, or even between stages within a project. If I am testing my simple home web page, the evidence can be minimal, since the risks with the pages are limited. If I am testing a life-critical software system, then I may very easily find myself in court, and evidence becomes very important. The evidence of testing is similar to the question of "how many comments in the code is good enough." Those of us who have coded and did less commenting often realize we needed more later, during maintenance efforts. If there were simple pat answers to what Pete is talking about, companies would not pay developers and testers very much money.

As to the issue of tools, I have used many. I have used data loggers and recording systems that monitored and recorded almost everything (terabytes of data kept forever). I have also used capture-playback tools just to capture exploratory testing, so if a bug was found I could "repeat" it, and when no bug was found I trashed the scripts (no maintenance of fragile scripts). And at the other extreme, I have had just a few hand-written notes on a charter statement to remind the tester what we were poking at.

There is no best or right.

June 17, 2015 - 4:07pm
Richard Paul

I respect the author and enjoyed the article, but let me play devil's advocate for a moment.

I disagree with such copious note-taking for the following reasons:

Time lost... Great if I have a "secretary," pair tester, or camera recording what is done--and I've heard of all these--but we have entirely too much to test to slow down enough to record this proposed level of detail ("will hold up in court"). Yes, we need more resources, but small companies may not have them, and I'm not sure a large company should pay higher wages for time spent note-taking.

Unnecessary, generally... I do NOT have a good memory for exactly how I tested something, but my testing follows an approach that lets me guess and almost always re-create my steps later. When my exploratory testing has strayed from this approach, I note those few things I did differently, which usually makes the replication steps discoverable. There have been times in my career when I have put exact steps on every task, but I no longer find that helpful. How often are they referred to again?

Wrong workplace culture/attitude... I was reading the Wikipedia entry for C.Y.A. a few hours before reading this article, and perhaps that was the "funny" tone Nicholas referred to. I would not like working at a place that wants to place blame on the tester. My mistakes lead to self-correction as I take responsibility. Perhaps if the mistake had caused a plane to crash, but if it just meant that a person had to do a workaround for a day or two, it may not be so serious. (That is, I have not worked for the defense or aviation industries, but we did have narrow tolerances for high quality in my medical industry jobs.)

This is taking too long to write up, so if these reasons prompt discussion, others can add to the list.

June 19, 2015 - 4:43pm
Eric Beggs

Being fresh into testing (just over 30 days new), I have found that I can make the most sense out of what I am doing by creating OneNote pages for every test I am working on. In these I put copies of all documents, including test plans, test cases, and email threads. I combine this with my own need to recall what I am doing and have done: a step-by-step (or play-by-play, if you prefer) recount, in real time, of the steps taken, using the test cases as my guide and recording the result of every step. That falls right in line with what Mr. Walen is saying. As to what Mr. Paul said about not having the time to do this, in my opinion that is unfounded.

The value of knowing, not guessing, exactly what steps occurred and in what order seems to me to be invaluable, not only for a retrospective view of the tests run but also to learn and know exactly what is occurring during testing. Again, I'm new to testing, not to IT, but I would rather have the evidence of my tests and never need it than need it and not be able to provide it.

June 25, 2015 - 12:43pm
G Gaylard

Yes, I think the real problem here is a cultural one: 'There's a bug in production! How did testing miss that?' ALL non-trivial software has bugs, so why do testers have to explain why software has bugs? Yes, if the bug is serious and/or should have been detected by competent testing, then the relevant testers ought to have some explanation if asked. But surely our main priority ought to be fixing it, rather than trying to find out whose fault it was. A typical project these days is under enormous time pressure: if testers are bogged down in documentation, that reduces time spent actually testing, which ironically means you've got a bigger risk of missing bugs.

June 26, 2015 - 7:51am
Patrick Higgins

I'm not sure this is a realistic solution for most testers in the business world. In my experience, most testing teams are much smaller than their development teams and are buried in testing assignments. Being able to pair-test items for note-taking is impractical. I do agree that it's the tester's responsibility to record what has been tested as they test. I would argue that as testers we should be taking quick notes (even with screenshots) so we can go back and explain in detail how something was tested if we are questioned post-testing.

July 15, 2015 - 9:21pm
John Wilson

Should a defect be found in previously tested and released code, anyone on the team, not just testers, should be able to answer the “How was this tested?” question. If they can’t, it signifies that something is wrong, very wrong.

I can see why you might need documentation to make improvements, but once the improvements have been made the need for the documentation is removed. To keep documenting and not make improvements that remove the requirement for documentation signifies that something is wrong, very wrong.

Heavyweight processes and procedures at the end of development signifies that something is wrong, very wrong.

The belief that quality can be inspected into the product by testing signifies that something is wrong, very wrong.

January 24, 2018 - 4:46am
mrcc Edtech
That is a good idea. I like it. I'll share it with my friends so they can read it. Thank you, and keep writing. Thanks.
September 30, 2021 - 3:27am
