Bug Counts vs. Test Coverage

What to Do When Bug Counts Don't Speak for Themselves
Summary:

Occasionally, we encounter projects where bug counts simply aren't as high as we expect. Perhaps the product under test is in its second or third release cycle, or maybe the development team invested an inordinate amount of time in unit testing. Whatever the reason, low bug counts can be a cause for concern because they can indicate that pieces of functionality (which potentially contain bugs) are being missed. When bug counts come in low, management may begin to question the quality of testing. This article covers techniques for dealing with low bug counts and methods for reassuring management that coverage is being achieved.

Bug counts on a project speak volumes about the quality of testing for a particular product and how vigorously the test team is working to "assure quality." Bug counts are invariably a primary test metric reported to management. What is the rationale behind drawing so much attention to the number of bugs found over the course of a project?

I have heard it said that QA's job is to find bugs. If that is management's assumption, bug counts will be an important indicator to them that QA is doing its job. They expect bug counts to rise dramatically in the early stages of testing, and they expect the find rate to decrease as the project comes to an end. These are management's statistical expectations when they treat bug counts as a metric for assessing the quality of testing.
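To make that expectation concrete, here is a minimal sketch in Python of the rise-then-fall find-rate curve management tends to look for. The weekly counts are invented purely to illustrate the shape; substitute your own bug-tracker data.

    # Hypothetical weekly bug-find counts across a ten-week test cycle.
    # The numbers are invented to illustrate the expected shape only.
    weekly_finds = [4, 12, 21, 25, 19, 14, 9, 5, 3, 1]

    cumulative = 0
    for week, found in enumerate(weekly_finds, start=1):
        cumulative += found
        print(f"Week {week:2}: {found:3} new bugs ({cumulative:3} total) " + "#" * found)

Printed this way, the trend itself, not any single week's number, is what management is reacting to.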

If high bug counts, then, are taken as an indicator that testing is doing its job, low bug counts can be seen as an indicator that something isn't right with the testing process. Management might imagine any of several problems preventing bugs from being found:

  • Test coverage isn't complete; maybe major areas of functionality aren't being tested.
  • Testing is only scratching the surface of all functionality, not digging in to the real complexities of the code.
  • Our testers just aren't that good.

Management might see red flags when bug counts are low, but a number of causes can contribute to them. On the second or third iteration of a product, the bulk of the defects may have been found in an earlier cycle. Or especially good development practices may have been in place: strong unit testing, code reviews, good documentation, and not working developers to death. These practices are supposed to result in lower bug counts.

Ultimately, however, QA can justify low bug counts when it can justify its test coverage. If the product under test is being tested with thorough coverage, the bug count should be treated only as a supporting statistic, not the primary one. After all, a quality product isn't reached simply because some bug count is reached; quality is achieved when test coverage is maximized and bug finds decrease to a minimum.
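If coverage is to be the primary statistic, report it in a form management can read at a glance. The sketch below assumes a hypothetical mapping of functional areas to executed and planned test-case counts; the area names and figures are placeholders, not from any real project.

    # Hypothetical coverage data: functional area -> (tests executed, tests planned).
    # Names and counts are placeholders for whatever your test plan tracks.
    coverage = {
        "Installation":  (38, 40),
        "File transfer": (52, 60),
        "User accounts": (18, 45),
        "Reporting":     (25, 25),
    }

    total_run = total_planned = 0
    for area, (run, planned) in coverage.items():
        total_run += run
        total_planned += planned
        print(f"{area:<14} {run:3}/{planned:<3} executed ({100 * run / planned:5.1f}%)")

    print(f"{'Overall':<14} {total_run:3}/{total_planned:<3} "
          f"executed ({100 * total_run / total_planned:5.1f}%)")

A report like this lets you present the bug count beside the coverage numbers, as the supporting figure it should be.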

There are several things you can do when bug counts are low and management is questioning the quality of testing:

  1. Take stock. Call a meeting with your test team, go through the areas of test, possibly even some of the test cases themselves, and get a general feel for how much test coverage you really have. Maybe you'll discover that an area of test really is being missed, or that a misunderstanding about who should be testing what let some functionality fall through the cracks (see the cross-check sketch following this list). Brainstorm additional testing methods and techniques, and generate ideas for how your team can broaden the testing effort. Before going to other groups or departments, get a solid understanding of where your team is in the process.
  2. Talk to development. Go over your current test coverage with development, and see whether they have input on areas you might also investigate. Ask them where the trouble spots are, whether they can suggest lower-level tests that may ferret out more bugs, and consider conducting a test case review with them. On my last project, we sent the test cases for a given piece of functionality to the appropriate developer for review. Developers can be reluctant to help testers, so demonstrate that thorough testing of their code is in their own best interest: if it's solid, they have less rework to do later.
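For the "take stock" meeting in step 1, gaps surface more reliably from a mechanical cross-check than from memory. Here is a minimal sketch, assuming hypothetical lists of shipped functional areas and current test-case ownership (both invented for illustration):

    # Hypothetical inputs: every functional area the product ships,
    # and which tester currently owns test cases for it.
    areas = ["Installation", "File transfer", "User accounts",
             "Reporting", "Error handling", "Upgrade path"]

    owners = {
        "Installation":  "Dana",
        "File transfer": "Priya",
        "User accounts": "Dana",
        "Reporting":     "Miguel",
    }

    # Any area without an owner is functionality that may have fallen
    # through the cracks and is the first thing to raise in the meeting.
    for area in areas:
        if area not in owners:
            print(f"NO OWNER: {area}")

The same check works against whatever your team actually tracks, such as a test-management export or a simple spreadsheet.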

About the author


Andrew Lance (andrew@centerspan.com) is a senior quality assurance engineer and technical lead at CenterSpan Communications, a Hillsboro, Oregon-based company developing cutting-edge content delivery solutions. Andrew has worked with test automation technologies for more than five years and has participated in every major phase of automated testing, from design and implementation to maintenance and support.
