Before we can build a high-fidelity test system, we have to understand what quality means to our customers. Test professionals can avail themselves of three powerful techniques for analyzing risks to system quality. Targeting our testing investment by increasing effort for those areas most at risk results in the highest return on investment.
In the previous article, I stressed the importance of a high-fidelity test system; i.e., one that allows the test team to forecast the customer's experience of quality after release. How can you create such test systems? Start by understanding what quality means to your customers.
Quality and Quality Risks
Quality, while considered an amorphous concept by some, is actually a well-defined notion in the field of quality management. In Quality Is Free, Phil Crosby describes quality as "conformance to requirements." But what if your requirements-gathering process is poor or non-existent? Many of us have worked on projects with such questionable requirements. In Software Requirements, Karl Wiegers identifies a variety of common problems in software requirements that afflict many projects.
Since I can't always count on requirements, I prefer J.M. Juran's definition of quality. He spends a goodly portion of an early chapter in Juran on Planning for Quality discussing the meaning of quality, but he also offers a pithy definition: fitness for use. In other words, quality exists in a product—a coffee maker, a car, or a software system—when that product is fit for the uses for which the customers buy it and to which the users put it. A product will be fit for use when it exhibits the predominant presence of customer-satisfying behaviors and a relative absence of customer-dissatisfying behaviors.
Armed with this definition of quality, let's move to the topic of risk. Myriad risks—i.e., factors possibly leading to loss or injury—menace software development. When these risks become realities, some projects fail. Wise project managers plan for and manage risks. In any software development project, we can group risks into four categories.
Financial risks. How might the project overrun the budget?
Schedule risks. How might the project exceed the allotted time?
Feature risks. How might we build the wrong product?
Quality risks. How might the product lack customer-satisfying behaviors or possess customer-dissatisfying behaviors?
Testing allows us to assess the system against the various risks to system quality, which in turn lets the project team manage and balance quality risks against the other three categories.
Classes of Quality Risks
It's important for test professionals to remember that many kinds of quality risks exist. The most obvious is functionality: Does the software provide all the intended capabilities? For example, a word processing program that does not support adding new text to an existing document is worthless.
While functionality is important, remember my self-deprecating anecdote in the last article. In that example, my test team and I focused entirely on functionality to the exclusion of important items like installation. In general, it's easy to overemphasize a single quality risk and misalign the testing effort with customer usage. Consider the following examples of other classes of quality risks.
Use cases: working features fail when used in realistic sequences.
Robustness: common errors are handled improperly.
Performance: the system functions properly, but too slowly.
Localization: problems with supported languages, time zones, currencies, etc.
Data quality: a database becomes corrupted or accepts improper data.
Usability: the software's interface is cumbersome or inexplicable.
Volume/capacity: at peak or sustained loads, the system fails.
Reliability: too often—especially at peak loads—the system crashes, hangs, kills sessions, and so forth.
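To make risk-based targeting concrete, here is a minimal sketch, not drawn from the article, of how a test team might record risks in classes like these and allocate test effort in proportion to a simple likelihood-times-impact priority score. The class names come from the list above; the scoring scale, the example risks, and the function and field names are illustrative assumptions only.

```python
# Illustrative sketch only: a small quality-risk catalog scored by
# likelihood x impact, with test effort weighted by that priority.
# The numbers and scoring scheme are assumptions, not the author's method.

from dataclasses import dataclass


@dataclass
class QualityRisk:
    category: str      # e.g., "Performance", "Data quality"
    description: str   # how the product could dissatisfy customers
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (annoyance) .. 5 (unusable or data loss)

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact


def allocate_effort(risks: list[QualityRisk], total_hours: float) -> dict[str, float]:
    """Split the total test hours across risks in proportion to priority."""
    total_priority = sum(r.priority for r in risks) or 1
    return {r.description: total_hours * r.priority / total_priority for r in risks}


risks = [
    QualityRisk("Functionality", "Core editing features broken", 3, 5),
    QualityRisk("Performance", "Saves take too long under load", 4, 3),
    QualityRisk("Data quality", "Database accepts corrupt records", 2, 5),
    QualityRisk("Usability", "Install wizard confuses new users", 3, 2),
]

for description, hours in allocate_effort(risks, total_hours=200).items():
    print(f"{hours:6.1f} h -> {description}")
```

In practice, the likelihood and impact ratings would come from the risk analysis techniques this series discusses rather than from hard-coded guesses; the point is simply that higher-priority risks receive proportionally more of the testing investment.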
In Managing the Testing Process, I provide an extensive list of quality