Jeff Patton has been building software using the agile approach for a while now. His observations of how others implement agile development are far from complete, but one thing he has noticed is that adoption often breaks down during the evaluation phase. In this week's column, Jeff walks through the agile development cycle and offers guidance on how to conduct an honest evaluation during this phase of the software development lifecycle.
For many, "agile" means "We've stopped writing documents," which doesn't actually mean you are practicing agile. It means you're good at justifying bad behavior.
One of the tenets of agile development is the idea of a healthy development cycle. In XP it's called an "iteration;" in Scrum, a "sprint." But the basic idea of a complete cycle is the same across these methodologies. And, sadly, many agile teams have broken cycles.
This column is about diagnosing and fixing broken cycles, particularly the end of the cycle, where I see most teams get a bit sloppy.
Figure 1: The three parts of a healthy development cycle.
A good cycle has three parts: planning, performing, and evaluating.
In the planning part we decide what we're going to do. Usually, that means the amount of work we're going to take on. To do this, we'll talk about the pieces of software to build and, ideally, write down acceptance criteria for each piece so we're sure we know what "done" means for that piece. Many teams even define a working agreement, referred to as the "definition of done" (DoD). The DoD usually specifies that "done" means coded and tested, which is a big advance for testers for whom "done" used to mean only "coded."
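As a small illustration (the checklist items below are examples, not a prescribed list), a DoD can be made mechanical enough to check, which keeps "done" from quietly sliding back to "coded":

    # A minimal sketch of a definition of done; the checklist items are
    # examples, not a prescribed list.
    DEFINITION_OF_DONE = [
        "coded and peer reviewed",
        "unit tests written and passing",
        "acceptance criteria verified by a tester",
    ]

    def is_done(completed_items):
        """A piece of work is done only when every DoD item is satisfied."""
        return all(item in completed_items for item in DEFINITION_OF_DONE)

    # "Coded" alone no longer counts as done.
    print(is_done({"coded and peer reviewed"}))   # False
    print(is_done(set(DEFINITION_OF_DONE)))       # True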
Building software, and all the collaboration it takes to do it, is the performing part. An important part of performing well is transparency, which is another way of saying it's easy to tell how far along we are. Agile folks often show progress on task boards or with burn-down charts.
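The arithmetic behind a burn-down chart is simple: each day, compare the actual work remaining to an "ideal" straight line running from the sprint's starting total down to zero. A minimal sketch, with made-up numbers:

    # A minimal burn-down sketch, assuming a ten-day sprint, 80 task-hours
    # of planned work, and made-up daily totals of remaining work.
    SPRINT_DAYS = 10
    TOTAL_WORK = 80  # task-hours at the start of the sprint

    remaining = [80, 74, 70, 66, 60]  # actual hours left, days 0 through 4

    for day, actual in enumerate(remaining):
        ideal = TOTAL_WORK - day * TOTAL_WORK / SPRINT_DAYS
        if actual == ideal:
            status = "on track"
        else:
            status = "ahead" if actual < ideal else "behind"
        print(f"day {day}: ideal {ideal:.0f}h, actual {actual}h ({status})")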
Evaluating is the most important part of a healthy cycle. It is where we look at what we've done and make corrections to the product, the schedule, and the process we're following. This responding part is what makes agile development agile and the part where I see most teams start to fall down.
The 3 Ps of evaluation
Evaluation is difficult. Honest evaluation might reveal we're building the wrong thing or not moving fast enough. It may result in the realization that we're not being rigorous enough with the process we're following or that we're mistaking process compliance for process effectiveness.
The stuff we evaluate falls into three categories: pace, product, and process.
Pace is where we measure how much we've done. We may have planned on building five features in the last sprint but only completed four. So, what does that say about how fast we're moving? How does that affect the scheduled delivery date? Do we fool ourselves, say, "We'll get faster next time," and make excuses for poor performance, or do we face facts and adjust the schedule to reflect how fast we're really moving? It's OK to say, "Let's go one more sprint. If our velocity stays the same, we'll adjust the schedule." It's not OK for managers to say, "You guys committed to getting this stuff done. You'd better figure out a way to make up the lost time."
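To make the arithmetic concrete, here's a tiny sketch with hypothetical numbers: twenty features remain, we planned on five per sprint, and our measured velocity is four. Facing facts means projecting from the measured pace, not the planned one:

    import math

    # Hypothetical numbers: what we hoped for versus what we measured.
    features_remaining = 20
    planned_velocity = 5  # features per sprint
    actual_velocity = 4   # features per sprint, from the last few sprints

    print(math.ceil(features_remaining / planned_velocity))  # 4 sprints: the plan
    print(math.ceil(features_remaining / actual_velocity))   # 5 sprints: the fact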
Product is where we inspect what we've built: not the parts, but the whole thing. I see many teams do a product demonstration, which is a good idea. Knowing you'll have to show the product to your peers and stakeholders is strong motivation to finish and do a good job, but don't stop there.
A common concern of folks adopting agile development is that, in the rush to build more functionality faster, the software will become riddled with bugs or turn into an unmaintainable ball of mud. If you don't pay attention to quality, you will end up with exactly that.