Defect counts are often considered measurements of product quality. However, the most important defect count in that respect is by definition unknown: the number of undiscovered errors. Defect counts can be used as indicators of process quality, but in doing so one should avoid assuming simple causal relations. Defect counts can provide useful information, but they have to be presented with care.
In one of the first projects I joined, it was my task to bring the defect process under control. There were hundreds of open defects, and they kept coming. At the end of every week I exported the data from the defect management tool and made a few cross-tables in a spreadsheet to show the numbers of defects still open for the next release. We had a weekly meeting to classify and assign new defects and to re-assign old ones. Whenever the workload for the next release became too heavy, we would push some defects out to the release after that. In fact, we spent more than half of the meeting time changing the release in which defects were supposed to ship. As a result, the weekly status report was always optimistic. After a few months my services were no longer needed. I showed someone how to generate the desired reports, and the project continued for another year and a half before the plug was pulled and the project had to start all over. In retrospect it is clear to me that things had been going in the wrong direction long before I joined the project, and that the weekly status reports hadn't helped one bit to change the course of events. I ease my conscience with the thought that I didn't know then what I know now.
The idea that "you can't control what you can't measure" (DeMarco, 1982) has been abandoned for some time now. In 1995, DeMarco himself apologized for overstating the importance of all kinds of software metrics: "I can only think of one metric that is worth collecting now and forever: defect count. Any organization that fails to track and type defects is running at less than its optimal level. There are many other metrics that are worth collecting for a while."
In this article I want to focus on the use of defect counts. To do that, we need to realize what they are and what they are not. We also have to realize why we do and do not need them. Only then can defect counts be used effectively in decision making.
What a Defect Count Is and What It Is Not
A defect count is the number of defects that have been discovered. In order to be included in a count, a defect has to be logged and classified. The number of severe and still open defects caused by specification errors and found during system test is an example of a specific defect count that might be of interest to somebody.
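Such a specific count is just a filter over the logged and classified defect records. The following sketch illustrates this; the record fields and their values are hypothetical and not taken from any particular defect management tool.

```python
from dataclasses import dataclass

# Hypothetical defect record; field names and values are illustrative.
@dataclass
class Defect:
    severity: str   # e.g. "severe", "minor"
    status: str     # e.g. "open", "closed"
    cause: str      # e.g. "specification", "coding"
    found_in: str   # e.g. "system test", "unit test"

# A small made-up defect log.
defects = [
    Defect("severe", "open", "specification", "system test"),
    Defect("minor", "open", "coding", "unit test"),
    Defect("severe", "closed", "specification", "system test"),
    Defect("severe", "open", "specification", "system test"),
]

# The specific count from the text: severe, still open, caused by
# specification errors, and found during system test.
count = sum(
    1 for d in defects
    if d.severity == "severe"
    and d.status == "open"
    and d.cause == "specification"
    and d.found_in == "system test"
)
print(count)  # → 2
```

The point of the sketch is that every such count depends entirely on how defects were logged and classified in the first place: change the classification scheme and the same defect log yields different numbers.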
What does a defect count measure? Things are not always what they seem to be. Kaner and Bond (1994) call this "construct validity", a well-known concept in the social sciences. Does an IQ test really measure intelligence, or does it only measure the ability to do IQ tests? Software development is a social activity, and that implies lots of variables that affect defect counts.