A recent StickyMinds column criticized the new Agile development methods as bad for business. The column generated many reader comments, and prompted this week's response from industry veteran Cem Kaner. Read on for his defense of iterative approaches.
I was disappointed with a recent attack on Extreme Programming in StickyMinds, "XP: That Dog Don't Hunt." What concerns me is the broader slam of iterative software development. The author said "XP's fundamental premise that the software will be done when it's done and will cost what it costs is the antithesis of sound business practice. . . . If XP proponents don't successfully address these problems, XP will suffer the same fate as the long list of other iterative practices that preceded it."
I am constantly surprised by testers who denigrate the iterative approaches. They often characterize the various iterative options as undisciplined, unsound, and irresponsible. I think that in the process, they make the wrong enemies, lose the respect of the wrong people, and cut themselves off from opportunities to address serious, chronic problems in software development.
In the waterfall method (the primary noniterative software development process), a "complete" set of "requirements" is defined in advance (not necessarily correctly), then the system is designed (not necessarily well), then coded, and then tested. There is some allowance for redefinition and redesign, but because so much that comes later depends so heavily on what has come before, the process is brittle. Changes become exponentially more expensive as the project goes forward because so much depends on what has been done (and tediously documented) already. The exponential increase in the cost of fixing bugs, for example, is well documented.
Reluctance to fix bugs is an inherent problem of the waterfall, which places testers (change recommenders) late in the project, when every change they recommend will be overpriced (because late changes are exponentially more expensive than earlier changes). In such a process, of course our change requests will be bounced.
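The shape of that cost curve is worth seeing with numbers. The multipliers below are illustrative rule-of-thumb figures, not measurements from any particular study; the exact values vary widely by project, but the pattern is the same: the later a change is requested, the more it costs.

```python
# Illustrative only: assumed cost multipliers for fixing a defect,
# relative to fixing it during requirements. These figures are
# invented to show the shape of the curve, not reported data.
PHASE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 50,
    "post-release": 150,
}

def fix_cost(base_cost, phase):
    """Cost of fixing a defect discovered in the given phase."""
    return base_cost * PHASE_MULTIPLIER[phase]

# A $200 requirements-phase fix becomes a five-figure repair after release.
for phase, multiplier in PHASE_MULTIPLIER.items():
    print(f"{phase:13s} ${fix_cost(200, phase):,}")
```

Whatever the true multipliers are on a given project, a tester who can only recommend changes in the rightmost column of that table will see most of those recommendations rejected on cost alone.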
Another common waterfall problem is the conflict between testers and project managers. A project manager balances four key factors against each other:
- time to delivery
- feature set (scope)
- cost (budget and staffing)
- reliability (the quality of what ships)
Consider these tradeoffs in the waterfall. What happens if the project is behind schedule when it reaches the testing phase? Well, there's not much point in cutting features: they've already been specified, designed, documented, and coded, and most of the project budget has been spent. So we trade time (ship now, but buggy) against testing (ship late). Testers fight project managers for more time. I guess some people in the field like these wars; we hear enough about them at conferences, where people get together year after year to complain about those nasty developers.
In contrast, the well-planned iterative project starts from the expectation that people (developers, users, executives) are not very good at figuring out how they will actually use the product, estimating costs, prioritizing features, or anticipating the problems they will encounter during development. It is designed to help us manage the risks associated with errors and omissions in our assumptions and estimates.
We start with estimates. We build consensus around a broad vision for the product (rather than pretending to build consensus around the details), list features of interest and guesstimate their implementation times and costs, work with key stakeholders to prioritize the features, and then get moving on the implementation. Each time we add a feature, we also test it and fix it, stabilizing the product at the desired level of quality. Then we design, code, test, and fix the next feature, and the next. The product is incomplete, but from a point very early in development, the product is usable and testable and tested. From that point forward, people can use the product and give experience-based feedback that reveals erroneous designs, previously unidentified requirements, and bugs.
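That rhythm, design, code, test, and fix one feature at a time while keeping the product stable, can be sketched as a loop. Everything here is a hypothetical illustration: the feature names, the defect counts, and the idea of tracking "open defects" as a single number are all invented to make the process concrete.

```python
# Hypothetical sketch of the iterative rhythm described above:
# add one prioritized feature, then fix until the product is back
# at the desired quality bar, so every build is usable and tested.
def iterative_build(prioritized_features, quality_bar=0):
    product, open_defects = [], 0
    usable_builds = 0
    for feature, defects_introduced in prioritized_features:
        product.append(feature)             # design + code the feature
        open_defects += defects_introduced  # testing reveals its bugs
        while open_defects > quality_bar:   # stabilize before moving on
            open_defects -= 1               # fix a defect
        usable_builds += 1                  # shippable checkpoint:
                                            # stakeholders can try it
    return product, usable_builds

# Invented prioritized feature list: (name, defects found in testing)
features = [("login", 3), ("search", 2), ("export", 1)]
built, checkpoints = iterative_build(features)
print(built)        # ['login', 'search', 'export']
print(checkpoints)  # 3 usable builds, each an opportunity for feedback
```

The point of the sketch is the checkpoint count: in the waterfall, the first usable build appears near the end; here, every iteration produces one, so experience-based feedback can start almost immediately.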