It's Not About the Bugs

Summary:

What's wrong with finding bugs? Nothing, unless you send the signal that bug-finding is all you measure and care about. Face it: people tend to perform the actions for which they are rewarded. So if you don't track and value bug prevention, your team isn't likely to prevent many bugs. In this column, testing authority Harry Robinson explains why and points to some more productive things to measure.

Scene 1:
You are picnicking by a river. You notice someone in distress in the water. You jump in and pull the person out. The mayor is nearby and pins a medal on you. You return to your picnic. A few minutes later, you spy a second person in the water. You perform a second rescue and receive a second medal. A few minutes later, a third person, a third rescue, and a third medal. This continues throughout the day.

By sunset, you are weighed down with medals and honors. You are a hero. Of course, somewhere in the back of your mind there is a sneaking suspicion that you should have walked upriver to find out why people were falling in all day. But, then again, that wouldn't have earned you as many awards.

Scene 2:
You are sitting at your computer. You find a bug. Your manager is nearby and rewards you. A few minutes later you find a second bug. And so on. By the end of the day, you are weighed down with accolades and stock options. If the thought pops up in your mind that maybe you should help prevent those bugs from getting into the system, you squash it: bug prevention doesn't have as much personal payoff as bug hunting.

What You Measure Is What You Get
B.F. Skinner told us fifty years ago that rats and people tend to perform those actions for which they are rewarded. It is still true today. In our world, as soon as testers find out that a metric is being used to evaluate them, they strive mightily to improve their performance relative to that metric, even if the resulting actions don't actually help the project. If your testers find out that you value finding bugs, you will end up with a team of bug-finders. If prevention is not valued, prevention will not be practiced.

For instance, I once knew a team where testers were rewarded solely for the number of bugs they found and not for delivering good products to the customer. As a result, if testers saw a possible ambiguity in the spec, they wouldn't point it out to the development team. They would quietly sit on that information until the code was delivered to test, and then they would pounce on the code and file bugs galore. The testers were rewarded for finding lots of bugs, but the project suffered deeply from all the late churn and bug-fixing.

That example sounds crazy, but it happened because the bug count metric supported it. On the flip side, I know of a similar project where testers worked collaboratively to deliver a high-quality product. They reviewed the spec and pointed out ambiguities, they helped prevent defects by performing code reviews, and they worked closely with development. As a result, very few bugs were found in the code that was officially delivered to test, and high-quality software was delivered to the customer.

Unfortunately, management was fixated on the bug counts from the official test phase. Because the testers found few bugs during that phase, management decided that the developers must have done a great job, and they gave the developers a big bonus. The testing team didn't get a bonus. How many of those testers do you think supported prevention on the next project?

It's Not About the Bugs
Software testing is not about finding bugs. It's about delivering great software. No customer ever said with a straight face, "Wow! You found and fixed 65,000 bugs; that must be really great software!" So why do so many teams use bug counts as a measurement tool? The answer is simple: bugs are just so darn countable that they are practically irresistible.

They can be counted, tracked, and used for forecasting. And it is tempting to do numerical gymnastics with them, such as dividing them by KLOC (thousand lines of code), plotting their rate over time, or predicting their future rates. But all this ignores the complexities that underlie the bug count. Bugs are a useful barometer of your process, but they can't tell the whole story. They merely help you ask useful questions.
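
To see how seductive those gymnastics are, here is a minimal sketch in Python. The bug counts and code size are invented for illustration; the point is how little the arithmetic actually tells you:

    # A sketch of the "numerical gymnastics" above: defect density
    # (bugs per KLOC) and a bug rate over time. All numbers are
    # made-up illustration data, not real project figures.

    bugs_per_week = [12, 19, 7, 4, 3]   # hypothetical weekly bug counts
    lines_of_code = 48_000              # hypothetical code-base size

    kloc = lines_of_code / 1000
    total_bugs = sum(bugs_per_week)

    # The classic bugs-per-KLOC figure.
    print(f"defect density: {total_bugs / kloc:.2f} bugs/KLOC")

    # The rate over time looks like a reassuring downward trend...
    for week, count in enumerate(bugs_per_week, start=1):
        print(f"week {week}: {count} bugs found")

    # ...but nothing here says whether the decline means better code,
    # weaker tests, or tired testers. The counts alone can't tell you.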

So What Should We Measure?
Here are some thoughts:

How many staff hours are devoted to a project? This is a real cost that we care about. How effectively did your whole team (project managers, developers, and testers) go from concept to delivery? Instead of treating these groups as independent teams with clear-cut deliverables to each other, evaluate them as a unit that is moving from concept to code. Encourage the different specialties to work together. Have program management make the spec more crisp. Have development provide testability hooks. Have the test team supply early feedback and testing.

How many bugs did your customer find? What are customers saying about your product? Have you looked through the support calls on your product? What is customer feedback saying to you about your software's behavior in the field?

How many bugs did you prevent? Are you using code analysis tools to clean up code before it ever gets past compilation? Are you tracking the results?
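
As a sketch of what that tracking might look like, the Python below runs a linter over a source tree and appends a dated tally to a log. It assumes the flake8 tool is installed and that your code lives under src/; any static analysis tool could stand in for it:

    # A minimal sketch of tallying static-analysis findings per build.
    # Assumptions: the flake8 linter is installed, the code lives in
    # "src/", and flake8 emits its default one-finding-per-line output.

    import subprocess
    from collections import Counter
    from datetime import date

    result = subprocess.run(
        ["flake8", "src/"], capture_output=True, text=True
    )

    # Each finding looks like "file:line:col: CODE message".
    findings = [ln for ln in result.stdout.splitlines() if ln.strip()]
    by_code = Counter(
        ln.split()[1] for ln in findings if len(ln.split()) > 1
    )

    # Append a dated total so the trend stays visible across builds.
    with open("lint_history.csv", "a") as log:
        log.write(f"{date.today()},{len(findings)}\n")

    print(f"{len(findings)} findings; top codes: {by_code.most_common(3)}")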

How effectively did your tests cover the requirements and the code? Coverage metrics can be a useful, though not comprehensive, indicator of how your testing is proceeding.
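
As a sketch of the idea, the Python below uses the coverage.py library (an assumption; any coverage tool serves) to measure statement coverage of a toy function, and shows how a passing test can still leave code unexecuted:

    # A minimal sketch of statement coverage with coverage.py
    # (assumed installed: pip install coverage). The function and
    # its "test" are toy stand-ins for product code and test suite.

    import coverage

    def absolute(n):
        # A stand-in for product code with two branches.
        if n < 0:
            return -n
        return n

    cov = coverage.Coverage()
    cov.start()

    assert absolute(-3) == 3   # the "suite": exercises only one branch

    cov.stop()
    cov.save()

    # report() prints a per-file table and returns the total percentage;
    # it comes in under 100% here because the "return n" branch never
    # ran (and lines executed before start() are not recorded either).
    percent = cov.report()
    print(f"statement coverage: {percent:.1f}%")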

Finally, a squishy but revealing metric: How many of your own people feel confident about the quality of the product? In some aircraft companies, after the engineers sign off on the project, they all get on the plane for a quick test flight. Assuming that none of your fellow engineers have a death wish, that's a metric you have to respect! It not only says that you found lots of bugs along the way, but that you are satisfied with the resulting deliverable.

I recently saw an email signature that sums it up: We are what we measure. It's time we measure what we want to be.

For More Info:

Software Metrics: Ten Traps to Avoid by Karl Wiegers
This paper describes how to successfully measure software development activities.

