More Reliable Software Faster and Cheaper

How Software Reliability Engineering Can Help Testers
Summary:

Do you feel stressed out by pressure to deliver more reliable software faster and cheaper? Customers for software-based products make these conflicting demands, and they trickle (or rather flood) through the management chain down to you. Software reliability engineering (SRE) can help.

As a software manager at Bell Labs in 1973, I was confronted by the stress placed on me and on the testers who worked with me by the demand to deliver more reliable software faster and cheaper. Hence I was motivated very early to develop and deploy software reliability engineering (SRE).

In this article I will focus on the benefits of SRE for software testers, but SRE is also a big help for software managers and QA staff. It involves and benefits system engineers, system architects, and developers as well, but I will limit myself to showing you how their roles mesh with yours.

Practically speaking, you can apply SRE to any software-based product, starting at the beginning of any release cycle.

What It Is and Why It Works
SRE is a quantitatively oriented practice for planning and guiding software development and test that meshes easily with other good processes and practices. It is based on two pieces of quantitative information about the product: the expected relative use of its functions and its required major quality characteristics. The major quality characteristics are reliability, availability, delivery date, and lifecycle cost.

When you have characterized use, you can substantially increase development and test efficiency by focusing resources on functions in proportion to use and criticality. You also maximize test effectiveness by making test highly representative of use in the field. Increased efficiency increases the effective resource pool available to add customer value, as shown in Figure 1.

Figure 1. Increased resource pool resulting from increased development efficiency.
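
To make use-proportional allocation concrete, here is a minimal sketch in Python. The operation names, usage probabilities, criticality weights, and test budget are hypothetical values chosen only to illustrate the arithmetic, not data from Fone Follower or any real project.

# Illustrative sketch: allocate a fixed test budget across operations in
# proportion to their expected relative use (an operational profile), with a
# criticality multiplier to boost rare but critical operations.
# All names and numbers are hypothetical.

test_budget = 1000  # total test cases planned for this release

# Operational profile: operation -> probability of occurrence in the field
operational_profile = {
    "process call": 0.74,
    "recharge account": 0.15,
    "add subscriber": 0.06,
    "delete subscriber": 0.04,
    "audit database": 0.01,
}

# Criticality weights (> 1 increases the allocation for critical operations)
criticality = {
    "process call": 1.0,
    "recharge account": 1.0,
    "add subscriber": 1.0,
    "delete subscriber": 2.0,
    "audit database": 3.0,
}

# Weight each operation by use times criticality, then scale to the budget.
weights = {op: p * criticality[op] for op, p in operational_profile.items()}
total_weight = sum(weights.values())
allocation = {op: round(test_budget * w / total_weight) for op, w in weights.items()}

for op, count in allocation.items():
    print(f"{op:20s} {count:4d} test cases")

Rounding means the counts may not sum exactly to the budget; the point is simply that test effort follows expected use, adjusted for criticality.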
 

When you have determined the precise balance of major quality characteristics that meets user needs, you can spend your increased resource pool to match that balance carefully. You choose software reliability strategies to meet the objectives, based on data collected from previous projects. For example, you determine how much you will rely on system testing as compared with alternative strategies such as requirements reviews, design reviews, code reviews, and fault-tolerant design. You track reliability in system test against its objective, both to adjust your test process and to determine when test may be terminated. The result is greater efficiency in converting resources to customer value, as shown in Figure 2.

Figure 2. Increased customer value resulting from increased resource pool and better match to major quality characteristics needed by users.
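
As a rough illustration of tracking reliability in system test against its objective, the short Python sketch below compares an observed failure intensity with a failure intensity objective. The failure times, the window size, and the objective are invented for illustration; a real project would estimate failure intensity with an established software reliability growth model and supporting tools rather than this simple calculation.

# Illustrative sketch: compare the failure intensity observed in system test
# with a failure intensity objective to help judge when testing can stop.
# The failure log and the objective below are hypothetical.

# Cumulative execution time (hours) at which each failure was observed.
failure_times = [5, 12, 30, 58, 95, 150, 240, 390]

failure_intensity_objective = 0.01  # failures per execution hour

def current_failure_intensity(times, window=4):
    """Estimate current failure intensity from the most recent `window`
    failures: failures counted divided by the execution time they span."""
    recent = times[-window:]
    elapsed = recent[-1] - recent[0]
    return (len(recent) - 1) / elapsed if elapsed > 0 else float("inf")

observed = current_failure_intensity(failure_times)
print(f"Observed failure intensity: {observed:.4f} failures/hour")
print(f"Objective:                  {failure_intensity_objective:.4f} failures/hour")
if observed <= failure_intensity_objective:
    print("Objective met - consider ending system test.")
else:
    print("Keep testing and tracking.")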

SRE Process and Fone Follower Example
Let's now take a look at the SRE process. There are six principal activities, as shown in Figure 3. I show the software development process below and in parallel with the SRE process, so you can relate the activities of one to those of the other. Both processes follow spiral models, but for simplicity, I don't show the feedback paths. In the field, you collect certain data and use it to improve the SRE process for succeeding releases.


Figure 3. SRE Process

You might wonder, "Why should I, a tester, be concerned about the first three activities? They are not my job." I thought that, too, when I first started to apply SRE to various projects at AT&T. The answer quickly became clear: these activities are needed by, and benefit, testers the most, but the design engineers who have to perform them may (incorrectly) see them as not helping their own work directly. Rather than struggle to persuade them, we tried the innovation of including test team members on the design team. This not only worked very well, but it greatly increased the professional standing and morale of the test team. Now the test team talked with customers, had direct input into the product, felt they had a role in decision making, and


About the author

John D. Musa

John D. Musa is one of the creators of the field of software reliability engineering (SRE) and is widely recognized as the leader in reducing it to practice. He currently teaches a two-day course, More Reliable Software Faster and Cheaper, worldwide to organizations that want to deploy the SRE practice. He also consults with a wide variety of clients. He is principal author of the widely acclaimed pioneering book Software Reliability and author of the practically oriented Software Reliability Engineering. Elected IEEE Fellow in 1986 for his many seminal contributions, he was recognized in 1992 as the leading contributor to testing technology. His leadership has been recognized by every edition of Who's Who in America since 1990 and by American Men and Women of Science. He has more than 30 years of diversified practical experience as a software practitioner and manager. He has published more than 100 papers and given more than 200 major presentations. You can reach him at j.musa@ieee.org.

