Technical debt is the unfinished work your organization owes your product. It
comes in varieties: requirements debt, design debt, and testing debt. It makes
it hard for developers to find and fix defects, and hard for testers to know
how and what to test.
You may be working in an organization where technical debt is an intentional
choice, made in exchange for time or resources. However, if the debt is hidden
because no one collects metrics, managers can play the "pass the hidden debt"
game, hoping to drop it into someone else's hands before the interest comes
due. Keeping the debt visible helps keep that gaming to a minimum.
Recognizing Technical Debt
If you’re a tester, how can you recognize technical debt? Look for
indicators like these:
- As soon as the developers fix one defect, five more previously unknown
defects pop up.
- Testing time increases disproportionately to the development time.
- Your developer to tester ratio is 1:1, and you still don’t have enough
testers because you’re sure you haven’t fully tested the product.
- Developers start talking about re-architecting or redesigning the
product. If the design debt is overwhelming, the developers start saying
things like, "Nope, I’m not touching that. That’s a feature, not
a bug. If you want me to fix that, I’ll have to re-architect the whole thing."
- Developers refuse to touch a part of the product saying, "No one but
Fred can touch that. I know Fred left three years ago, but no one else can
work on it, because we don’t understand it."
- Your developers threaten to leave because all they do is fix defects.
- Developers and testers become experts at crisis management, spending more
time supporting current customers—solving customer problems, fixing
defects, verifying the fixes, and answering customer questions—than they
spend developing and testing the product scheduled for release next month.
- You’re putting out point releases or patches weekly or daily.
- You can’t decide what not to test, because the risk of not testing
everything, even for a point release, is too high.
- You ship the product not because it’s ready, but because everyone is
too tired to continue the crunch mode you’ve been in.
- You stop testing because you don't want to find more defects.
- Your cost to fix a defect continues to increase, from release to release.
(If you’d like to see a picture of this dynamic, check out Weinberg’s Quality
Software Management, Volume 4).
Taking a Closer Look
Are you seeing signs of technical debt? If you suspect you’re floundering
in technical debt, take a few measurements to see if you can make sense of the
data. The following table shows data I gathered from one project. I looked at
the state of the product on a quarterly basis, counting the total number of
lines of code. (We ran the Unix command wc -l recursively over all
the code in the branch on the specific date.) LOC is appropriate for gross
measures like this one; it gave us insight into what was happening in the
project. For other measurements, LOC is not appropriate.
Aside from LOC, we measured the Fault Feedback Ratio, the FFR. To measure FFR,
we took the defect data from the bug-tracking system and, for each date,
computed the ratio of bad fixes to total fixes. We also determined the cost to
fix a defect pre-release.
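As a minimal sketch, the FFR for a period is just bad fixes over total fixes. The record schema here (a boolean "bad" flag per fix) is an assumption about the bug-tracking export, not the system we used:

```python
def fault_feedback_ratio(fixes) -> float:
    """FFR = bad fixes / total fixes for the period.
    Each fix record is a dict with a boolean 'bad' flag (assumed schema)."""
    if not fixes:
        return 0.0
    bad = sum(1 for fix in fixes if fix["bad"])
    return bad / len(fixes)

# Example: 3 of 10 fixes came back broken, so FFR = 0.3
fixes = [{"bad": True}] * 3 + [{"bad": False}] * 7
print(fault_feedback_ratio(fixes))  # 0.3
```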
To measure the cost to fix a defect, we tracked all the defects for the
release and took the average cost to fix. See my previous column, "What Does
It Cost to Fix a Defect?" for more on how to calculate that cost.
[Table: quarterly measurements of the product — size in LOC; FFR (this week
only); average cost to fix a defect, pre-release]
It’s always tempting to jump to conclusions from looking at the data. Don’t.
Probe and ask questions to make sure you know what the data really means. For
example, the code size increased almost 50 percent each quarter. Do I think the
developers are capable of writing that much high-quality code? What did the
developers do to generate that amount of code? Did they leave experiments
(rejected designs) in the code? Did they hire more people? Did they use a code
generator? If you’d like more guidance on what to expect for increases
in code size, see Capers Jones’ book, Software Assessments, Benchmarks, and
Best Practices.
In this case, it turns out that instead of re-architecting the code to
account for new knowledge and changes in requirements, the developers were
cutting and pasting code all over the system. The system was bloated, and every
time the developers had to make a fix in one place, they had to remember the
forty-seven other places to fix.
Then I looked at the FFR. Does it make sense? What makes the FFR go up or down?
In this case, the FFR started high, and aside from the third quarter, continued
to go up. What happened in the third quarter to make the FFR go down? In the
third quarter, one development team re-architected one small module that was
originally a huge source of defects. With the new version, the number of
bad fixes went down and the cost to fix a defect went down. They had paid off
some of the "debt," so now they were paying less "interest"
on the debt. You may also want to look at FFR by subsystem or module, to see if
one subsystem or module is causing much of the defect-fixing pain.
The cost to fix a defect for this organization is extremely high. If the
developers and testers find twenty defects a day, then they generate about sixty
days’ worth of work for every day of testing.
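The arithmetic behind that claim is simple; the three person-days per defect below is inferred from the numbers in the text (60 days of work from 20 defects), not a separate measurement:

```python
defects_found_per_test_day = 20
days_to_fix_one_defect = 3  # inferred from 60 / 20; an assumption, not measured

# Fix-work generated by a single day of testing, in person-days.
work_generated = defects_found_per_test_day * days_to_fix_one_defect
print(work_generated)  # 60
```

At that rate, every day of testing digs the team deeper into debt than a day of fixing can climb out of.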
Let me know your thoughts on technical debt. My next column will provide
pointers for diagnosing and decreasing the debt.
I thank Esther Derby, Dale Emery, Dave Smith, and Jerry Weinberg for their
reviews of this column.