XP, Iterative Development, and the Testing Community

Summary:

A recent StickyMinds column criticized the new Agile development methods as bad for business. The column generated many reader comments, and prompted this response from industry veteran Cem Kaner. Read on for his defense of iterative approaches.

I was disappointed with a recent attack on Extreme Programming in StickyMinds, "XP: That Dog Don't Hunt." What concerns me is the broader slam of iterative software development. The author said "XP's fundamental premise that the software will be done when it's done and will cost what it costs is the antithesis of sound business practice—if XP proponents don't successfully address these problems, XP will suffer the same fate as the long list of other iterative practices that preceded it."

I am constantly surprised by testers who denigrate the iterative approaches. They often characterize the various iterative options as undisciplined, unsound, and irresponsible. I think that in the process, they make the wrong enemies, lose the respect of the wrong people, and cut themselves off from opportunities to address serious, chronic problems in software development.

In the waterfall method (the primary noniterative software development process), a "complete" set of "requirements" is defined in advance (not necessarily correctly), then the system is designed (not necessarily well), then coded, and then tested. There is some allowance for redefinition and redesign, but because so much that comes later depends so heavily on what has come before, the process is brittle. Changes become exponentially more expensive as the project goes forward because so much depends on what has been done (and tediously documented) already. The exponential increase in the cost of fixing bugs, for example, is well documented.

Reluctance to fix bugs is an inherent problem of the waterfall, which places testers (change recommenders) late in the project, when every change they recommend will be overpriced (because late changes are exponentially more expensive than earlier changes). In such a process, of course our change requests will be bounced.

Another common waterfall problem is the conflict between testers and project managers. A project manager balances four key factors against each other:

  • Time to delivery
  • Cost
  • Reliability
  • Feature set (scope)

Consider these tradeoffs in the waterfall. What happens if the project is behind schedule when it reaches the testing phase? Well, there's not much point in cutting features because they've already been specified, designed, documented, and coded. Most of the project budget has been spent. So we trade time against reliability: ship now, but buggy, or keep testing and ship late. Testers fight project managers for more time. I guess some people in the field like these wars. We hear enough about them at conferences—people get together to complain about those nasty developers year after year.

About the author

Cem Kaner

Cem Kaner is Professor of Computer Sciences at Florida Tech. He is senior author of three books: Lessons Learned in Software Testing, Bad Software, and Testing Computer Software. He's also an attorney (a former prosecutor) whose idea of a good time is holding companies accountable for releasing defective software. Work towards this article was supported by the National Science Foundation grant EIA-0113539 and by Rational Software.
