Building Better Bug Reports


Every bug report is colored by the judgment of the person producing it. Testers should develop their skills as communicators with bug backlog stakeholders so that issues can be solved in a way that benefits everyone. Read on to challenge your ideas of what makes a clear, concise, contextualized—but still courteous—bug report.

Back in September of 2012, I joined one of my first online meetings for an international testing discussion, which was only international because I was remote in the States. Earlier that year, I'd met software tester and agile coach Jason Coutu at CAST, where he had mentioned that he organized the local Saskatoon Testing Discussion Group. Being relatively new to conference calls, I was excited but didn't have many specific expectations. Little did I realize that I would be the only one on video chat that day. It felt like flying as the laptop that served as my window into this little world was carried around to introduce me and involve me in the discussion. Now, so many video chats later, I'm entirely used to it.

That day's topic was the ideal bug report. Jason led a brainstorming session in which the group called out many report fields they found useful or that others requested from them. On the couch at work having a late lunch break, I couldn't quite make out the list visually, but Jason picked "me" up and carried me to the board for the dot voting. When we tallied the votes, the most popular bug report fields were:

  • title
  • expected
  • actual
  • steps
  • environment

I listened carefully to the discussions of clarity, simplicity, audience, and actionable next steps. Each bug report would be colored by the judgment of the person producing it, and we wanted to develop our skills to be better communicators with our bug backlog stakeholders.

We concluded that the title is the most essential part of a formally reported bug because it could be the first point of contact with a newly reported problem. Curb appeal was very important. "Crash," "fail," and—my favorite—"It no worky" were right out. Although learning the shorthand of a team might take some time, we valued communicating clearly in words the recipients would understand. I'd read Eric Jacobson's blog post emphasizing the use of keywords for easier discovery within a bug tracking system. At that point in my career, all of my teams had been colocated, making a searchable electronic tracking system optional. For large, dispersed groups, I can see this being more important. The uTest website, designed to connect freelance testers with people in need of testing, describes the same "breadcrumbs" approach that I favor: providing enough context in the title to make this shorthand easier to grasp.

The conversation moved on to other fields that didn't make our top five. We wrestled with the merits of also including severity. First, we talked about possible meanings for the word severity and decided it answers the question, "How bad is it?" Now, I'd had an impassioned lunch debate with quality assurance architect Alex Kell on this very subject. He adamantly argued that the priority of bugs in a backlog was the key piece of information telling someone when—or whether—to fix a reported problem. While I saw Alex's point, I happened to be wearing my Severity Is for Lovers T-shirt that day. Testers report how impactful a bug is. In making this call, we cannot center on ourselves; instead, we should focus on the end user, which means getting to know our users better. While the business would certainly decide the priority of a defect relative to all the other work for a particular product or project, I still consider a severity rating helpful in making that call. I show my love for the end user by providing that information to encourage a better outcome for the people who need that bug fix. Again, knowledge of the consumers of our work shapes these early stages of the software development process.

Back in Saskatoon, my new tester friends also considered whether bugs needed to be formally reported at all. While this might shock some testers of my acquaintance, it was not the first time I'd heard a tester say such a thing. You might think testers are in the business of reporting bugs, and that certainly is part of our duties. However, finding bugs and getting them resolved expeditiously might entail direct conversations with the developers creating the software, especially for straightforward concerns. If now wasn't the right time to correct an error, or if the behavior required more investigation, testers using tangible work-tracking systems could add a sticky note or a notecard to the board within the sprint. A conversation with a product owner could also clarify expectations, avoiding overhead for something that was a misunderstanding or just not important enough to track. I'd heard Microsoft's James Whittaker argue that keeping records of bugs you won't fix is just wasteful. I disagreed, taking the position that we didn't know exactly what our customers would consider important. Recording the conversation as an item of work we were intentionally not pursuing had value: customers I'd encountered liked knowing a problem had been found so they had an opportunity to weigh in on it, particularly when a known issue appeared in the field.

Acknowledging that some customers read over the known issues list also raised concerns about tone of voice when writing up bugs. Developer Ben Stolz wrote that effective communication in a bug report should be clear, concise, correct, complete, courteous, and constructive. I'd argue that without those last two characteristics, getting the first four right won't help you. The developers on my product teams are professionals who take pride in their work, but they are also entirely human and will make mistakes, just as we all do. Blaming language will not enlist the support of the programmer who must do the heavy lifting of resolving the problem. Focusing on the observed symptoms, however, keeps the report rooted in the neutral facts of the situation—as well as the observer can understand them. I've certainly had conversations about spurious bug reports I'd filed that were really just confusion on my part. I'd like to be treated politely in that situation, so I keep constructive criticism as my goal.

Although I didn't virtually attend another Saskatoon Testing Discussion Group, this lively exchange of ideas became a model for me later when I started my own meetup. Involving many people with different perspectives, including those who are geographically dispersed, is something I value to this day. The free-flowing conversation and willingness to question our assumptions produced a better result than simply following a complex reporting template with no understanding of how it came to be a standard.

As I was reflecting on this conversation back home, my research revealed this lovely quote from Cem Kaner that sums up our thoughts nicely: "The best tester is the one who gets the most bugs fixed . . . . (T)he effective tester looks to the effect of the bug report, and tries to write in a way that gives each bug its best chance of being fixed. Also, a bug report is successful if it enables an informed business decision. Sometimes, the best decision is to not fix the bug. The excellent bug report provides sufficient data for a good decision."

StickyMinds is a TechWell community.
