High Fidelity Test Systems: Investing in Software Testing

Realizing a solid return on your testing investment requires smart selection of tests. Cost-of-quality analysis tells us that it's cheaper to find and fix bugs before the customers do, but, to keep bugs away from customers, we have to find the ones that matter. Doing so requires that we understand how the customers will use the system. 

Introduction
In the last article, I made a financial case, based on cost-of-quality analysis, for investing in software testing. However, just as smart stock market investing requires buying the right stocks, smart software testing requires carefully chosen tests. To achieve a positive return, testers must target their investment at building and applying the right test system. (I use the phrase "test system" to describe the test facilities, test environment, test data, test cases, and test execution processes.) What makes a test system the "right" one is a multifaceted question, but let's start in this article by looking at the importance of a test system that truthfully replicates the customers' experience of quality.

How to Waste Money on Software Testing
In the last article, the case study’s return on investment arose from the prerelease detection (and repair) of "must-fix" bugs. A must-fix bug is an unacceptable defect that would at some point be identified (by users) and repaired (by the sustaining organization) over the course of the system lifecycle. By handling must-fix bugs before release, we leverage order-of-magnitude differences in the costs of nonconformance between internal and external failures.
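To see how that order-of-magnitude gap drives the return, here is a back-of-the-envelope sketch in Python. The bug count and per-bug costs are assumptions chosen purely for illustration, not figures from the case study.

    # Hypothetical cost-of-quality arithmetic; all numbers are illustrative assumptions.
    bugs_found_before_release = 100      # must-fix bugs caught by the test team
    internal_cost_per_bug = 100          # assumed cost to find and fix prerelease (dollars)
    external_cost_per_bug = 1000         # assumed cost once a customer finds it (~10x)

    cost_of_internal_failure = bugs_found_before_release * internal_cost_per_bug
    cost_of_external_failure_avoided = bugs_found_before_release * external_cost_per_bug

    net_savings = cost_of_external_failure_avoided - cost_of_internal_failure
    print("Net savings from prerelease detection:", net_savings)   # 90000

The specific dollar figures will vary by organization, but as long as external failures cost roughly ten times as much as internal ones, finding must-fix bugs before release pays for the testing effort many times over.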

However, you can test actions or features that few customers use, verify configurations no customer runs, and report problems no customer cares about. Because time and money are generally fixed during development projects, the effort spent by testers, developers, and managers on these pointless tests and bugs is effort not spent on other tests and bugs that might be critical. To add insult to injury, the results of this misguided testing give management a false sense of system quality, whether inflated or deflated. Since testing is about both finding problems where the product is defective and increasing confidence where the product works, you can fairly say that a test team using the wrong test system is a poor investment of money and time. (For more details on this topic, see my upcoming book, Critical Testing Processes, Volume I.)

Test System Fidelity
I call a test system that allows testers to mimic customer usage a high fidelity test system because it faithfully reproduces the behaviors customers will experience when they use the system under test. The behaviors will either satisfy or dissatisfy the customers, leading to a positive or negative experience of product quality, respectively.

Let's look at a couple of illustrations. In Figure 1, you see a situation where Customer A uses only a portion of the product. Through Test System A, Tester A uses that same portion and then some. Therefore, once testing is completed with Test System A, Tester A understands Customer A's experience of quality. Assuming the product coverage from Test System A represents the union of most customers' usage of the product, Test System A is a high fidelity test system. If bugs exist in the product that will plague the customer, Tester A will see them before the system ships. The programmers will have an opportunity to fix those bugs before release. And the test manager can deliver an accurate, timely assessment of quality to the project management team, enabling smart decisions about release readiness and project progress.
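One way to picture the Figure 1 situation is to model customer usage and test coverage as sets of product areas. The feature names below are hypothetical and a real coverage model is much richer than a flat list, but the sketch shows why Test System A counts as high fidelity in this simplified sense.

    # Hypothetical product areas; names are invented for illustration only.
    customer_a_usage = {"login", "search", "checkout", "invoicing"}
    test_system_a_coverage = {"login", "search", "checkout",
                              "invoicing", "admin", "reporting"}

    # High fidelity in this simplified model: the tests exercise everything
    # Customer A exercises (and then some), so Tester A sees what Customer A will see.
    uncovered = customer_a_usage - test_system_a_coverage
    print("High fidelity:", not uncovered)             # True
    print("Customer areas never tested:", uncovered)   # set()

If the "uncovered" set were nonempty, any must-fix bug living in those areas would reach the customer untested, which is exactly the low fidelity situation described next.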

Conversely, Figure 2 shows a low fidelity test system. Using Test System B, Tester B spends time testing a slender portion of the product, and much of that portion lies outside what Customer B actually uses.

About the author


Rex Black is President and Principal Consultant of RBCS, Inc., a consultancy that provides testing experts worldwide, serving clients such as Bank One, Cisco, Hitachi, IMG, and Schlumberger in consulting, training, and hands-on implementation. He has written Managing the Testing Process, Critical Testing Processes, and numerous articles, along with presenting papers and keynote speeches at international conferences.
