Do Your IT Projects Suffer from Requirements Clarity Issues?

[article]
Summary:

Vague or missing requirements are a frequent source of delay when planning and managing testing and result in system defects and omissions, difficulties estimating test effort, and precious time spent getting clarity. In this article, Clare Roberts shares insights from recent projects, including checklists for spotting gaps in requirements and suggestions for prevention.

A common cause of IT project time and cost overruns is requirements issues: cases where the expected system behavior is not clear. Software testing rests on the underlying "oracle assumption" — the assumption that testers can determine the expected behavior of the system under test, typically from written requirements, legacy systems, or business representatives. However, the oracle assumption sometimes breaks down due to project pressures or system complexities. Common issues that occur with requirements include:

  • They are too vague and open to interpretation or contain insufficient detail for creation of the required tests.
  • They contain gaps where system behavior is unspecified.
  • They lack clarity about exactly which requirements are to be delivered in the release to be tested.
  • Decisions made about requirements in earlier stages (e.g., clarifications obtained by developers) are not stored centrally and not passed on to later phases, including testing.

Specific examples of requirements issues include lack of exception handling, including recovery actions for data load failures; dealing with incomplete input data; and handling special cases in calculations.
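To make the data-load example concrete, here is a minimal sketch (the function name, input format, and parsing rules are invented for illustration; they are not from any project described in this article) showing how a routine accumulates unwritten decisions when the requirements specify only the happy path:

```python
# Hypothetical illustration: a data-load routine whose requirements cover
# only well-formed input. Every commented decision below is one the
# requirements left unspecified, so it gets made ad hoc in the code.

def load_records(lines):
    """Parse 'name,amount' lines into (name, amount) tuples."""
    records, rejects = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # Unspecified: is a blank line an error or just noise?
        parts = line.split(",")
        if len(parts) != 2:
            rejects.append(line)  # Unspecified: abort the whole load, or log and continue?
            continue
        name, amount = parts
        # Unspecified: should a missing amount default to zero or be rejected?
        records.append((name.strip(), float(amount) if amount.strip() else 0.0))
    return records, rejects
```

Each of those inline decisions is exactly the kind of clarification that, when obtained informally by a developer and never written down, resurfaces later as a test defect.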

It is well known, yet frequently underestimated, that the later in the project issues are uncovered, the longer and more expensive they are to resolve. This is due to the increased amount of rework required to correct defects, which involves more people with each completed step in the development process. Surprisingly, late discovery of issues happens even on projects that follow a lifecycle with signed-off requirements and change-control processes. In my experience, the larger and longer the project, the more significant the issues uncovered late in the cycle become.

You may wonder how much impact requirements issues have on a project. Here is an example from two large systems that I worked on recently:

  • Ten percent of the issues found during acceptance testing were related to requirements.
  • Those issues alone took the project over 1000 man-days to resolve.
  • The other 90 percent of the user acceptance testing (UAT) issues, as well as all of the system-testing issues, also had to be resolved, which created a huge impact on overall timescales.
  • The requirements issues on a second system were quicker to resolve, but there were more of them, resulting in an impact of roughly 300 man-days.

Requirements Issues Found During Acceptance Test
The impact of requirements issues found this late on project timescales can be considerable and includes:

  • An increase in the effort spent getting clarifications
  • An increase in the effort spent investigating, fixing, and retesting resulting defects
  • An increase in the number of test-fix-retest cycles needed
  • Resources tied up in rework and taken away from other tasks

You could argue that finding these gaps is exactly what testing is for. However, any initial project plan that budgeted for the amount of testing effort actually required in the above examples would have been rejected as unrealistic.

What Can be Done to Address Requirements Issues?
Preventing the extensive requirements issues in the above examples would clearly have been preferable. Even if the preventative measures were only 50 percent effective, they would have saved 150 to 500 man-days of effort on the example projects.
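The 150-to-500 figure follows directly from the man-day impacts quoted earlier; a back-of-the-envelope check (using only numbers stated in this article, with the 50 percent effectiveness rate as the hypothetical):

```python
# Man-day rework impacts quoted earlier in the article.
rework_man_days = {"system_1": 1000, "system_2": 300}
prevention_effectiveness = 0.5  # the article's hypothetical 50% rate

savings = {name: impact * prevention_effectiveness
           for name, impact in rework_man_days.items()}
print(savings)  # {'system_1': 500.0, 'system_2': 150.0}
```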

So what could be done to prevent requirements issues? The projects in the examples above used a modified waterfall software development methodology, which included documented requirements and some document reviews. However, some of the reasons the requirements issues were not caught earlier include:

  • Reviewers had no requirement-content checklists. Such checklists help confirm the completeness of requirements by pointing reviewers at content — what each requirement must cover — rather than structure, preventing key areas from being missed.
  • Testers and end-users did not always get the opportunity


About the author

Clare Roberts

Clare Roberts is a principal test consultant with Nmqa Ltd. She has more than ten years' experience in software test management. Her testing experience was gained at major companies in the financial, pharmaceutical, and e-commerce sectors. She has particular interests in efficiency and process improvements to improve software quality. You can contact Clare at croberts@nmqa.com.
