Agile Documentation

I often get questions about documentation in agile projects. Some people tell me their team “threw all the documentation out the window” when they started their “agile” transition. It puzzles me that some teams seize on agile as an excuse not to do documentation. Valuing working software over comprehensive documentation, as the Agile Manifesto puts it, doesn't mean “throw out the documentation.” We still value documentation. The aim of agile development is to produce “just enough” documentation and keep it up to date and useful at all times; it has to work for both the technical team and the business experts.

We Need What We Need
Testers used to traditional phased-and-gated projects are nervous about losing detailed requirements documents and test plans. These things don't go away; they just take different forms. Rather than Word documents that get out of date, we use wikis and other tools that facilitate collaboration, plus automated tests that are always up to date.

You need the documentation that you need; it's situation-dependent. I've worked on XP teams where our clients wanted a full complement of documentation—requirements docs, test plans, user manuals, design documents. We had tech writers on our teams to assist with these (which I highly recommend!). We kept things simple and made them work for us. For example, our requirements docs consisted of the user story, high-level test cases, and mockups as applicable, and the clients were fine with that. For the test plan, we created a document with information our team could use, such as release plans, names and contact information for everyone involved in the project, and which tests would be automated. Agile Testing, the book I co-wrote with Janet Gregory, includes a test plan template to help teams create test plans that actually get used and provide value.

Documenting Early and Often
On my current team (which I've been on since 2003), documentation usually starts with mind maps, examples, and flow diagrams drawn on the whiteboard when we first discuss a new feature or theme. We take photos of these and post them on our wiki—not only for the benefit of remote team members but also to remind ourselves why we made certain decisions. For complex themes, we often create a wiki page on which team members brainstorm design ideas, raise issues, and ask questions. Wikis are a great collaboration tool, particularly if you have remote team members, as we do. A record of our discussions is also helpful later when we want to know how we arrived at a particular design. 

Once we’ve broken a theme down into small, incremental user stories and have begun to work on them, documentation continues with our product owner's (PO’s) conditions of satisfaction and mockups. Our PO has a “story checklist” template to help him think through what areas of the application and the business might be affected by the story. For example, will any reports be affected? He often includes some high-level test cases.

Collaborating Through Examples
We ask the customers for lots of concrete examples of desired and undesired behavior. Since ours is a financial services application, it works well for the PO to give us examples in spreadsheet form. Those are easy to turn into executable tests. Examples on whiteboards work well, too, since discussing a story around a whiteboard is a great way to collaborate.
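
To make that concrete (the numbers here are invented for illustration, not from our product): a row in the PO's spreadsheet might say that a $10,000 loan at a 5% annual rate, outstanding for 30 days, should accrue $41.10 of interest (10,000 × 0.05 × 30 ÷ 365 ≈ 41.10, assuming simple interest on an actual/365 basis). Each such row becomes one case in an executable test.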

Our team holds a one-hour “pre-planning” session a day or two before the start of the iteration. The PO goes through each story, and we ask questions. As a team, including all programmers and testers, we write requirements and high-level test cases on a whiteboard during sprint pre-planning and planning. That helps us share a common understanding of each story. Questions that the PO can’t answer right away are written on the board in a different color marker, so we can make sure to get the answers later. We take photos of this whiteboard, even though we also put the information in narrative form on the wiki.

When the iteration starts, we testers flesh out these examples and requirements on the team wiki, adding more information and examples, more detailed test cases, screenshots, mockups, and photos of whiteboard design drawings. For complex stories, we often mind map test cases and scenarios, which helps make sure we don't miss anything important. We go over these test cases one more time with the PO—and sometimes with other stakeholders—to make sure we've understood all aspects of the story.

Building Living Documentation Along with Code
When coding starts, we write a happy-path executable test. We use FitNesse for testing at the API level, behind the GUI, but many tools let you do this. (Focus on creating tests as documentation, not on the tool.) FitNesse allows us to include narrative along with the test cases. This first test gives the programmer a good picture of how the code needs to work. Once the programmer has checked in fixtures to automate the tests and the happy-path test passes, we add more test cases—boundary conditions, negative tests, edge cases, and whatever else we think is necessary to prove the functionality works as desired. Testing and coding continue concurrently, more tests driving more coding, until the customers are happy with the user story. The tests illustrate how the code operates in different scenarios.
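
As a sketch of what that first test can look like (all names here are invented; our real fixtures are proprietary), a happy-path FitNesse page for the interest example above might hold a decision table like this:

!|CalculateInterest                  |
|principal|annualRate|days|interest?|
|10000.00 |0.05      |30  |41.10    |

backed by a small Java column fixture:

import fit.ColumnFixture;

// Illustrative fixture only, not production code.
public class CalculateInterest extends ColumnFixture {
    // Input columns: FitNesse sets these public fields from each table row.
    public double principal;
    public double annualRate;
    public int days;

    // Output column ("interest?"): FitNesse calls this method and checks
    // the return value against the expected cell in the table.
    public double interest() {
        // Simple interest on an actual/365 basis, rounded to cents. A real
        // fixture would delegate to the production code behind the GUI
        // instead of inlining the calculation.
        double raw = principal * annualRate * days / 365.0;
        return Math.round(raw * 100.0) / 100.0;
    }
}

Once a fixture like this is checked in, adding a boundary or negative case is just another row in the table, which is what keeps this form of documentation cheap to extend.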

By the end of coding and testing, we have wiki pages with examples, requirements, and test cases, along with “living documentation” in the form of automated tests. These tests are added to our regression suites, which run many times per day in our continuous build process. If we change the code or the database, the affected tests fail, so we have to update them and keep them current.
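
(The mechanics of running the suites depend on your toolchain. With FitNesse, for example, a build script can run a suite in single-command mode, along the lines of java -jar fitnesse.jar -c "RegressionSuite?suite&format=text", where RegressionSuite is an invented page name and a nonzero exit code signals failing tests; check the documentation for your version.)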

Maintaining Documentation
We regularly invest time refactoring our automated tests to keep them easy to maintain and quick enough for timely feedback. Are you thinking, “There’s no chance we’ll be allowed to take time for something like that”? Gojko Adzic suggests asking the business for time to update the documentation. Businesses understand the value of documentation, and well-designed automated tests are the best form of documentation.
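
One refactoring that pays off is pulling repeated test setup into a builder, so each test spells out only the values it cares about. Here is a minimal Java sketch; the Loan class and every name in it are invented for illustration:

import java.math.BigDecimal;

// Hypothetical domain class, included only so the sketch is self-contained.
class Loan {
    final BigDecimal principal;
    final BigDecimal annualRate;

    Loan(BigDecimal principal, BigDecimal annualRate) {
        this.principal = principal;
        this.annualRate = annualRate;
    }
}

// Test-data builder: shared defaults live in one place, so a schema change
// means updating the builder rather than every test that creates a loan.
class LoanBuilder {
    private BigDecimal principal = new BigDecimal("10000.00");
    private BigDecimal annualRate = new BigDecimal("0.05");

    LoanBuilder principal(String amount) {
        this.principal = new BigDecimal(amount);
        return this;
    }

    LoanBuilder annualRate(String rate) {
        this.annualRate = new BigDecimal(rate);
        return this;
    }

    Loan build() {
        return new Loan(principal, annualRate);
    }
}

A test that only cares about a high interest rate then reads new LoanBuilder().annualRate("0.12").build() and says nothing about the values that don't matter to it.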

We also refactor our wiki so we can always find the information we need. It's easy for a wiki to get out of control. We cross-reference it several ways: a table of contents by business area; a page for each sprint listing all the user stories worked on, linking to the wiki pages for each story; and a wiki hierarchy by functional area. It's a good idea to have a technical writer help you organize your wiki. It's such a valuable knowledge base that you need the right skills to optimize it.

The Rewards
Occasionally, someone from the customer support or operations side of our business comes over to ask me about something that happened in production. For example, she might show me a loan payment that was processed and say she thinks the amount of interest applied is incorrect. I can plug the same inputs into one of our automated tests, run it, and show her the results. We know exactly how the production code works, without having a lot of debate about it. Now, perhaps she would like to change the functionality, but at least we’re sure about how the system behaves.
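
Concretely (again with invented names and numbers), that means dropping her production inputs into a test case, either as a new row in a FitNesse table like the one sketched earlier or as a quick JUnit check:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InterestQuestionTest {

    // Hypothetical calculation, inlined so the sketch compiles; the real
    // test would call the actual production code.
    static double simpleInterest(double principal, double annualRate, int days) {
        return Math.round(principal * annualRate * days / 365.0 * 100.0) / 100.0;
    }

    // The disputed payment from production: same principal, rate, and days.
    @Test
    public void interestAppliedToDisputedPayment() {
        assertEquals(41.10, simpleInterest(10000.00, 0.05, 30), 0.001);
    }
}

If the test passes with those inputs, the system is behaving as specified; the remaining question is whether the business wants that behavior changed.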

Reflect, Experiment, and Adapt
How do you know how much documentation is enough? At every retrospective, take a look at your documentation and decide whether it's too much, too little, or just right. Does the documentation take the form that works best for your team? Experiment with different tools and approaches. In my experience, documentation is something that has to evolve over time. You have to balance the needs of customers, developers, teams doing production support, and anyone else who may need to know how a particular feature or piece of functionality works, how it was tested, and what automated tests support it.

User Comments

Seb Rose

To a large extent I agree with Keith (I also worked at Rational on the DOORS product), but... there is a big difference between executable documentation (such as FitNesse or Cucumber tests) and static documentation.

With the former, when the product deviates from the documentation (i.e., the tests), the deviation is reported immediately through a failed test. The latter requires manual updating.

Some tools, such as DOORS, live somewhere in between. They have some ability to link from specification through to test results, using integrations with other tools. It feels (subjectively) like this is not as good as fully executable specifications - I'm not aware of any CI server with a DOORS plugin, for example - but a similar effect might be achieved.

June 23, 2011 - 5:54am
