Bug logs and testing dashboards are great reports for testers, but sometimes these reports simply fall short of communicating key information to stakeholders, such as why testing is blocked. In this week's column, Fiona Charles explains that when sharing information with stakeholders, it's best to use their language and create a report that maps out the system's current status. Fiona's solution: survey reports.
As testers, we learn to communicate details well. Each bug log describes a single bug and its specific symptoms, complete with the steps to reproduce it. If we've done a reasonable job of categorizing the bugs, we, or our stakeholders, can extrapolate some useful generalizations from our bug database about the state of the system at a given point in time.
We're also pretty good at providing whole-project dashboard-type information, showing assessments of quality and progress at a high level.
But there are situations where neither of these reports is good enough. On troubled projects, or in fact in any circumstance where a project could benefit from a directional view of the state of testing and fixing, we might want to consider creating a survey of the terrain.
A survey report can be a useful communication tool that shows how progress differs across different parts of a system, and a helpful test management tool that shows where to focus testing for the best impact. In a previous column, "Pack Up Your Troubles," I alluded to this kind of report as a way to present a test team's findings about system quality factually and unemotionally.
Here's a sample template for the kind of simple, structured, and colorful report I find useful:
| | Summary Assessment | | | Execution Detail | | | Verification Detail | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Functional Area - POS Transactions | Simple (Happy Path) | Medium | Complex | GUI | Complex txn to Post | Other (specific to txn) | Virtual Receipt | Printed Receipt | Inclusion in Repo | Applicable Discounts |
This survey report was for a retail Point of Sale (POS) system. As you can see, each row represents one functional area of the system from a business point of view. (The original had many more rows, while this extract covers only the POS transactions.) Each column represents something meaningful to the business, showing what works or doesn't for each transaction. The table, plus a legend, gives management a survey of testing progress and the system state.
In this example, the first three columns give a summary for three levels of complexity. A green for "simple" sales means that all the columns to the right of the summary are green for that complexity level. In the legend, I define a simple transaction as one with a single-product sales basket, the most vanilla customer type, the most straightforward sales tax category, etc.
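The roll-up rule described above can be sketched in code. This is a minimal illustration, not the author's actual tooling: the status names are assumptions, and the rule that red or yellow in any detail cell propagates to the summary is my inference from how the column describes the colors.

```python
# Sketch of the summary roll-up rule: a summary cell is green only when
# every detail cell to its right is green for that complexity level.
# Status names and the red/yellow propagation order are assumptions.
GREEN, YELLOW, RED, UNTESTED = "green", "yellow", "red", "untested"

def summarize(detail_cells):
    """Roll a row of detail-cell colors up into one summary color."""
    if RED in detail_cells:
        return RED          # any blocker/critical bug dominates
    if YELLOW in detail_cells:
        return YELLOW       # otherwise any major bug dominates
    if detail_cells and all(c == GREEN for c in detail_cells):
        return GREEN        # green only if everything passed
    return UNTESTED         # some cells not yet touched

print(summarize([GREEN, GREEN, GREEN]))   # green
print(summarize([GREEN, YELLOW, GREEN]))  # yellow
```

The key property is that a green summary makes a strong claim: nothing to its right is anything other than green.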
The columns to the right of the summary map out what works and what doesn't for any summary that's yellow or red. Maybe the GUI is fine for a complex sale and the transaction posts correctly, but the virtual receipt (what the cashier sees on the screen) doesn't match the printed receipt. Anything we haven't touched stays uncolored. Usually, I add a final column for comments or notes.
If a cell is yellow or red, I put the bug IDs either in that cell or the one beside it so we know exactly which problems are getting in the way at each point. Yellow is for major severity bugs; red is for blockers or critical bugs.
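The cell-coloring convention above can also be sketched as a small function. The `Bug` type and the severity labels are illustrative assumptions, not the column's actual bug-tracker schema; the logic simply encodes the stated rules: red for blockers or critical bugs, yellow for major bugs, green when tested clean, uncolored when untouched.

```python
# Sketch of the cell-coloring rule, with the bug IDs that would be
# written in (or beside) a yellow or red cell. Severity labels and the
# Bug record are hypothetical.
from dataclasses import dataclass

@dataclass
class Bug:
    bug_id: str
    severity: str  # e.g. "blocker", "critical", "major", "minor"

def cell_status(tested, bugs):
    """Return (color, bug IDs to show in or beside the cell)."""
    if not tested:
        return ("untested", [])   # anything we haven't touched stays uncolored
    ids = [b.bug_id for b in bugs]
    if any(b.severity in ("blocker", "critical") for b in bugs):
        return ("red", ids)       # red is for blockers or critical bugs
    if any(b.severity == "major" for b in bugs):
        return ("yellow", ids)    # yellow is for major-severity bugs
    return ("green", [])

print(cell_status(True, [Bug("BUG-101", "major")]))  # ('yellow', ['BUG-101'])
```

Keeping the bug IDs next to the color is what makes the survey actionable: a manager can go straight from a red cell to the exact problems blocking that part of the system.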