An Incremental Technique to Pay Off Testing Technical Debt

Summary:

Technical debt can shorten a product's life. But when technical debt mounts, it can be difficult to see how to pay it off. Using the practices discussed in this column, Johanna Rothman explains how you can start paying off that debt and make the product easier to develop and maintain for a long time to come.

Technical debt is the unfinished work the product development team accumulated from previous releases. This debt includes: design debt, where the design is insufficiently robust in some areas; development debt, where pieces of the code are missing; and testing debt, where tests were not developed or run against the code. Technical debt is common, but the practices in this article may help you reduce the amount in your project.

Let's assume your current project has substantial testing debt—the state of the software could be better understood if you could run more tests, automated or not, in several areas of the product. But the amount of debt is so large that it seems like an overwhelming problem to write, or even plan, a lot of tests, never mind trying to determine which ones to automate.

If that's the case on your project, consider trying these practices:

  1. Decompose each area into several use cases or scenarios. If you don't define requirements with use cases or scenarios, use your own technique to break the big area into smaller pieces. You don't have to have perfect requirements, but you do need a specific idea of what you need to test.
  2. Rank the areas according to risk.
  3. Develop tests in timeboxes, starting with the highest-risk areas, until this release is complete.

Once you've finished the work for this release, you'll want to reevaluate the risky areas (in case the product has changed direction) and follow these steps again.

Decompose Each Area into Scenarios
Let's assume you have a store application on the Web. The only part of the product that has been tested well is the security for credit cards. Other areas that need more development include the following: the application links to several product catalogs, and searching through each of them works only under specific circumstances; customers can't search all of the catalogs with one search; and not all the images render quickly enough when the pages load.

If you take these areas at face value, they are: broaden search, search all catalogs with one command, and accelerate loading time. But each area has more than one scenario. Here's what it might look like if we decompose each area into several scenarios; a brief sketch of how you might record that decomposition follows the list. Note that the search areas are related, so we'll lump them together in the broaden search category and then decompose by type of search.

  • Broaden search: search by types 1, 2, and 3; search within a single catalog; and search across all catalogs
  • Accelerate page loading time: browsing loading time and search results loading time
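If it helps to keep this decomposition somewhere the whole team can see and revise, a minimal sketch in Python might look like the following. The data structure is just one convenient choice, not a prescribed format; the area and scenario names come from the example above.

    # Scenarios to test, decomposed by area. Each entry is a
    # user-story-style phrase, not a fully formed requirement.
    scenarios = {
        "Broaden search": [
            "search by type 1",
            "search by type 2",
            "search by type 3",
            "search within a single catalog",
            "search across all catalogs",
        ],
        "Accelerate page loading time": [
            "browsing loading time",
            "search results loading time",
        ],
    }

    # Print the decomposition so the team can review and rank it.
    for area, items in scenarios.items():
        print(area)
        for item in items:
            print(f"  - {item}")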

These example phrases are similar to user stories—phrases that mean something in the context of the application but are not fully formed requirements. That's OK, because we want something to rank before we fully define the specific tests.

Rank Each Area According to Risk
Once we have a list of what we want to test, it's time to rank the listed items. This relative ranking is determined by what's most useful to the customer, not necessarily the area with the most technical debt.

One technique is to ask the product manager to assign points to each area. You will use the points to decide which area to test first.

In this example, we'll use fifty points. If you had more than twelve items on your list, you might find it useful to use 1,000 points. That way, if the product manager assigned 250 points to one area, you would know that the risk of leaving that area untested was very high. Seeing all the points assigned gives you a relative priority, because the number of points reflects the product manager's assessment of risk. I find this especially useful if I'm not sure the team can complete all the desired testing before release.
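To make the ranking step concrete, here is a small sketch, assuming the product manager has spread the fifty points across the scenarios from the earlier list. The point values are invented for illustration; in practice they come from the product manager's judgment.

    # Hypothetical point assignments from the product manager.
    # Fifty points total; more points = higher risk if left untested.
    risk_points = {
        "search across all catalogs": 15,
        "browsing loading time": 12,
        "search by type 1": 8,
        "search within a single catalog": 6,
        "search by type 2": 5,
        "search results loading time": 3,
        "search by type 3": 1,
    }

    # Rank scenarios from highest risk to lowest.
    ranked = sorted(risk_points, key=risk_points.get, reverse=True)
    for rank, scenario in enumerate(ranked, start=1):
        print(f"{rank}. {scenario} ({risk_points[scenario]} points)")

The highest-point scenarios are the ones you take into the first timebox.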

Timebox Test Development Until Release
Now that each area has an overall ranking, you know how to prioritize the testing. If you have five testers, you can assign one area to each of them and be done with it. But if you don't have five testers available, pick the top-ranked area and do what you can in the timebox.

A timebox is a specified calendar duration. Say you're working in a timebox of ten working days (two weeks of work). That means that for the "Browsing loading time" area, you'll do as much test development and testing as possible in those two weeks. When the two weeks are up, you stop working on "Browsing loading time" whether or not you think you're done.

Because of the short calendar duration, timeboxes help people make progress. You will have to pick and choose what to test—and what and how to automate. You may have to ask more questions to refine the requirements or the ranking with the product manager. But at the end of the two weeks, you will have accomplished substantive work.
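As an illustration, one of the first automated checks a tester might write during a "Browsing loading time" timebox could look like the pytest sketch below. The URLs, the two-second threshold, and the use of the requests library are all assumptions for the sake of the example, not part of the store described above.

    import time

    import pytest
    import requests

    # Hypothetical catalog pages for the example store; the real URLs
    # and the acceptable threshold would come from the requirements.
    CATALOG_PAGES = [
        "https://store.example.com/catalog/1",
        "https://store.example.com/catalog/2",
    ]
    MAX_LOAD_SECONDS = 2.0  # assumed acceptable loading time

    @pytest.mark.parametrize("url", CATALOG_PAGES)
    def test_page_loads_within_threshold(url):
        start = time.monotonic()
        response = requests.get(url, timeout=10)
        elapsed = time.monotonic() - start
        assert response.status_code == 200
        assert elapsed <= MAX_LOAD_SECONDS, (
            f"{url} took {elapsed:.2f}s to load")

Note that a check like this measures only the server's response time, not how quickly images render in a browser. Deciding whether that is good enough for this scenario is exactly the kind of pick-and-choose decision the timebox forces.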

At the end of a timebox, you'll either go on to the next area or re-rank the areas. If the product manager wants more testing of "Browsing loading time," you can do that—as long as the product manager realizes you won't move to the next area on the list.

Decide What to Do Next
How you decide what to do next depends on your agreement with the rest of the team, including the product manager. Some test groups continue to pay down the technical debt in the order set by the initial ranking. Others allow the product manager to re-rank at the end of an iteration. It's best to decide when and how plans will change before you start the iterations. But even if you don't decide in advance, make sure you decide before you've done two iterations of work. That way, you are using scarce testing resources to pay down technical debt effectively.

At the end of a release, when you've paid off the technical debt you decided to pay for this release, you'll need to develop the list and re-rank for the next release.
