Test-based Project Progress Reporting

Summary:

Deliverable-oriented project management and test-driven development can be combined to provide an objective and easily understandable way of measuring project progress for the client, team members, and management. In this article, John Ferguson Smart presents a case study of how this approach was made to work.

Introduction
This article presents a practical study of how we use deliverable-oriented project management and test-driven development as an objective and easily understandable way of measuring project progress for the client, team members, and upper management.

Defining deliverables
All projects, by definition, have deliverables. In an iterative approach, the main deliverable (the application) can be divided into smaller deliverables (e.g., modules, functions, or user story implementations), in order to define an iterative, milestone-based delivery schedule.

The WBS and the project plan
A work breakdown structure (WBS) is a well-known and extremely useful tool for breaking a project into easily manageable--some would say, "chewable"--tasks. At a certain level, you assign WBS tasks to individual team members or, in special cases, to a small group of team members, and expect them to produce a concrete deliverable.

Work breakdown structures tend to be intimately linked to project plans. We use an MS Project plan for global project planning. Here, as in the WBS, work is broken down until each task corresponds to a deliverable item and is assigned to a team member. Assigning a tangible deliverable to each team member focuses development activity on concrete, short-term goals and also helps obtain developer buy-in and a sense of responsibility.

Test cases
Naturally, we wrote a set of test cases for each deliverable. The test cases represent the acceptance criteria for each module. There are many ways of writing test plans and test cases, but most contain, in one form or another, a list of actions or steps to be performed, together with the expected results. In our case, for each deliverable module, we put the corresponding test cases in a separate Excel spreadsheet, along with some extra information for ease of use:

  • A unique test case number
  • The screen ID
  • The screen zone or area
  • The action to be performed
  • The expected results
  • The obtained results:
    * Result: passed, failed, or not tested
    * A description of any undesired behavior
    * Related defect tracking issue(s)

In our experience, a good set of test cases can give an excellent indication of the production readiness of a deliverable. Ideally, test cases are handed to the developer along with the functional specifications, though in practice they often come a little later. The analysis documents and the test cases provide concrete, tangible objectives for each module and keep developers focused on code with real added value for the end-user.
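To make this structure concrete, a single spreadsheet row could be modeled as a Java record, as in the minimal sketch below. The type and field names are illustrative stand-ins derived from the column list above; they are not part of our actual tooling.

    import java.util.List;

    // Hypothetical status of one test case: passed, failed, or not tested.
    enum Result { PASSED, FAILED, NOT_TESTED }

    // Hypothetical model of a single test case row, mirroring the
    // spreadsheet columns listed above.
    record TestCaseRow(
            String testCaseId,           // unique test case number
            String screenId,             // the screen under test
            String screenZone,           // zone or area within the screen
            String action,               // action to be performed
            String expectedResult,       // what should happen
            Result result,               // passed, failed, or not tested
            String observedBehavior,     // description of any undesired behavior
            List<String> defectIssueIds  // related defect tracking issue(s)
    ) {}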

Measuring progress using tests

Measuring test results
Test-based progress reports add an easily understandable, objective view of project progress. In our current project, the main test status report summarizes the following for each module (a rough sketch of the tally follows the list):

  • Total test cases
  • Passed tests
  • Failed tests
  • Tests not yet run
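
By way of illustration, here is a minimal, self-contained Java sketch of such a tally. The Result enum (reusing the three result states from the spreadsheet) and the sample data are invented for the example and do not reflect our actual reporting tools.

    import java.util.List;

    public class TestStatusReport {

        enum Result { PASSED, FAILED, NOT_TESTED }

        // Print the summary line for one module: total, passed, failed, not yet run.
        static void summarize(String module, List<Result> results) {
            long passed = results.stream().filter(r -> r == Result.PASSED).count();
            long failed = results.stream().filter(r -> r == Result.FAILED).count();
            long untested = results.size() - passed - failed;
            System.out.printf("%s: %d test cases, %d passed, %d failed, %d not yet run%n",
                    module, results.size(), passed, failed, untested);
        }

        public static void main(String[] args) {
            summarize("Online ordering", List.of(
                    Result.PASSED, Result.PASSED, Result.FAILED, Result.NOT_TESTED));
        }
    }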

We base our metrics on three main considerations:

  • A module is considered finished when all test cases have been successfully run by QA. In our case, QA includes internal testing teams and client testers.
  • The number of test cases needed to test a module roughly reflects its complexity. While this is not always true, we find it as good a measure as any.
  • The development is iterative: New versions are delivered frequently and testing is done continually, not just at the end of the project.

Under these conditions, overall progress across the different modules can be obtained by measuring the relative number of successful test cases in each module. If you can obtain reliable data on the number of test cases passed, failed, and not yet run at a given point in time, it is fairly easy to put them into a spreadsheet, as shown in Figure 1.

Figure 1: Test status follow-up spreadsheet
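
To illustrate the arithmetic: a module with 15 of 20 test cases passing counts as 75 percent complete, and because modules are weighted by their number of test cases, overall progress is simply the total number of passed test cases divided by the total number of test cases. The following minimal sketch shows the calculation; the module names and figures are invented for the example.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ProgressReport {

        public static void main(String[] args) {
            // Hypothetical per-module counts: {passed test cases, total test cases}.
            Map<String, int[]> modules = new LinkedHashMap<>();
            modules.put("Accounts", new int[]{15, 20});
            modules.put("Reporting", new int[]{8, 40});

            int passedTotal = 0;
            int overallTotal = 0;
            for (Map.Entry<String, int[]> e : modules.entrySet()) {
                int passed = e.getValue()[0];
                int total = e.getValue()[1];
                passedTotal += passed;
                overallTotal += total;
                System.out.printf("%s: %.0f%% complete (%d/%d test cases pass)%n",
                        e.getKey(), 100.0 * passed / total, passed, total);
            }
            // Weighting by test case count means complex modules count for more.
            System.out.printf("Overall: %.0f%% (%d/%d test cases pass)%n",
                    100.0 * passedTotal / overallTotal, passedTotal, overallTotal);
        }
    }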

Module progress status
I never trust a developer who says that a module is "finished": in our book, a module is finished only when all of its test cases pass.

About the author

John Ferguson Smart

John Ferguson Smart is currently project director at Aacom, a French IT firm specializing in J2EE solutions. He holds a PhD in computer science from the University of Aix-Marseille, France. His specialties are J2EE architecture and development and IT project management, including offshore project management. He works on large-scale J2EE projects for government and business and writes articles about J2EE and project-management-related subjects at www.jroller.com/page/wakaleo.
