Defect Reporting

Articles

Use the Rejected Defect Ratio to Improve Bug Reporting

There are many metrics to measure the effectiveness of a testing team. One is the rejected defect ratio, or the number of rejected bug reports divided by the total submitted bug reports. You may think you want zero rejected bugs, but there are several reasons that’s not the case. Let's look at types of rejected bugs, see how they contribute to the rejected defect ratio, and explore the right ratio for your team.
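For a concrete sense of how the ratio described above is computed, here is a minimal sketch in Python; the function name and the sample counts are hypothetical and used only for illustration:

```python
def rejected_defect_ratio(rejected_reports: int, total_reports: int) -> float:
    """Rejected defect ratio = rejected bug reports / total submitted bug reports."""
    if total_reports == 0:
        return 0.0  # nothing submitted yet, so there is no meaningful ratio
    return rejected_reports / total_reports

# Example: 12 of 150 submitted bug reports were rejected -> 8% rejected defect ratio
print(f"{rejected_defect_ratio(12, 150):.2%}")
```

A team would track this figure over time and compare it against a target range rather than assuming zero is the ideal.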

Michael Stahl
Building Better Bug Reports

Each bug report is colored by the judgment of the person producing it. Testers should strive to communicate better with bug backlog stakeholders so that issues can be resolved in a way that benefits everyone. Read on to challenge your ideas of what builds a clear, concise, contextualized—but still courteous—bug report.

Claire Moss
The Why, When, and How of Defect Advocacy

When Ipsita Chatterjee started testing about a decade ago, her test manager and mentor told her, "A good tester is not one who finds the most defects, but one who closes the most defects." After years of developing her testing and test management skills, she couldn't agree more. She now asks herself, how can a tester close more defects? Her answer: by using a fine combination of product and technical knowledge, intuition, and personal skills. With that in mind, this article focuses on the definition of defect advocacy; why, when, and how to approach it; and a few ways of practicing it effectively, which should help you release quality software applications.

Ipsita Chatterjee
After the Bug Report

We crank out bug reports and expect them to return like a boomerang so we can check to see if the bugs were fixed. In this week's column, Danny Faught shares some ideas drawn from recent experiences that could make you a better customer advocate subsequent to filing a bug report.

Danny R. Faught

Better Software Magazine Articles

Issues about Metrics about Bugs

Managers often use metrics to help make decisions about the state of the product or the quality of the work done by the test group. Yet, measurements derived from bug counts can be highly misleading because a "bug" isn't a tangible, countable thing; it's a label for some aspect of some relationship between some person and some product, and it's influenced by when and how we count ... and who is doing the counting.

Michael Bolton
So, You've Got a Problem: Crafting Remarks and Abstracts for Defect Reports

Software defect reports are among the most important deliverables to come out of software testing. They are as important as the test plan and will have more impact on the quality of the product than most other deliverables from the software test team. It's worth the effort to learn how to write an effective defect report that conveys the proper message and simplifies the process for everyone.

Kelly Whitmill
Deadlock!

Sean Beatty explains what a deadlock is and why testing probably won't catch it.

Sean M. Beatty
Every Crash, Everywhere

You want to know exactly what your users in the field are experiencing. In most cases, they aren’t going to take the time to tell you. Maybe the solution is automated data collection.

Joel Spolsky

Conference Presentations

Damage Prevented: The Value of Testing

This presentation details techniques to help your software team prevent defects in projects and discusses the economic value of testing.

Tim Koomen, Sogeti
Defect Escape Analysis for Test Process Improvement

An escape is a defect that was not found by, or one that escaped from, the test team. Implementing the escape analysis method for test improvement can increase the quality of software by lessening the occurrence of software defects.
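As a rough illustration of the kind of measurement escape analysis relies on, here is a minimal sketch in Python. The specific ratio used here (escapes divided by escapes plus defects found in test) is a common formulation assumed for illustration, not taken from the presentation:

```python
def defect_escape_rate(escapes: int, found_in_test: int) -> float:
    """Fraction of all known defects that escaped past the test team.

    escapes: defects reported after release (missed by testing)
    found_in_test: defects the test team caught before release
    Note: this particular formula is a common convention, assumed here for illustration.
    """
    total = escapes + found_in_test
    if total == 0:
        return 0.0  # no defects recorded at all
    return escapes / total

# Example: 5 field-reported defects vs. 95 caught in test -> 5% escape rate
print(f"{defect_escape_rate(5, 95):.1%}")
```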

Mary Vandermark, IBM Corporation
Bug Taxonomies: How to Generate Better Tests

This article discusses how to use bug taxonomies to help generate better tests. The author explains that a test team's goal should be to create a useful taxonomy that can be used as a framework to brainstorm for possible risks to the application.

Giri Vijayaraghavan, Texas Instruments Inc
STAREAST 2003: How to Break Software

This course will provide you with some ideas to make your testing more effective. These ideas require self-study, practice, practice, and more practice. Take a look inside as James Whittaker teaches you how to break software.

James Whittaker, Florida Institute of Technology
