A New Twist on Test Processes

Summary:

Software Quality Assurance goes head-to-head with Business Process Assurance in this column. Linda Hayes explains the differences between the two processes and tells us why she thinks BPA may be the wave of the software-testing future.

Business processes are hot. Not only are they emerging as literal corporate assets—you can actually patent them—but they are becoming the glue that binds service-oriented architectures together. What does this mean to you?

It means you have to put a new twist on your test process: switch from testing from the bottom up to testing from the top down, and from vertical to horizontal.

The Tried and True
Traditional software quality assurance (SQA) methodology follows the V-model, which dictates that the test process tracks the development process. You start with unit testing of the code modules, move into integration testing of component interfaces, then on to system testing, and finally to acceptance testing. In other words, you test the software the way it is built.

Following this logic, the best SQA testers are technical and often report to development. The more they know about how software is developed, the better equipped they are to predict problem areas, ferret out errors, and do the diagnostics needed to adequately explain and reproduce issues so that the programmers can fix them.

This focus on software construction also drives the thinking behind test case design: you direct your tests toward the areas with the highest risk of coding errors. For example, you exercise boundaries because programmers commonly make errors around comparison arguments, and you partition equivalence classes because of how data types are used and stored. For test execution priority, new features often come before existing ones because they have the highest risk of failure.
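
To make this concrete, here is a minimal sketch of boundary and equivalence-class test design in Python. The validate_age function and its 18-to-65 rule are hypothetical stand-ins for whatever comparison logic your application actually contains:

    # Hypothetical rule: applicants must be 18 through 65 inclusive.
    def validate_age(age: int) -> bool:
        return 18 <= age <= 65

    # Boundary tests: target the comparison arguments, where off-by-one
    # errors cluster (just below, at, and just above each boundary).
    for age, expected in [(17, False), (18, True), (19, True),
                          (64, True), (65, True), (66, False)]:
        assert validate_age(age) == expected

    # Equivalence classes: one representative stands in for each class
    # (invalid-low, valid, invalid-high), since the code should treat
    # all members of a class the same way.
    for age, expected in [(5, False), (40, True), (90, False)]:
        assert validate_age(age) == expected

Note how every test targets a spot where a coding error is statistically likely, not where the business impact is greatest.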

Even the severity rating of defects is based on the way the software responds. If it crashes, you've got a showstopper. Unexpected error messages rate high, graceful error recoveries somewhat lower, and inconvenient-but-workable issues even lower.

Of course, this strategy has an irrefutable logic: if your mission is to expose flaws in the software, then your best bet is to target where they are most likely to occur. But this mission can also lead to unwanted effects, such as motivating testers to devote their time to the fringe cases that are more fruitful for defects while avoiding mainstream areas because they are more reliable.

The Hot and New
Business process assurance (BPA) is completely different. It follows a top-down model that starts from the operating profile of the enterprise, then breaks down into the business processes that drive it, the applications that support the processes, the services that integrate the applications, all the way down to the components that enable the services. In other words, you test the software the way it will be used.

In this model, your test case design is driven by operational risk. Which business processes represent the greatest exposure in terms of time, money, security, customer, or market impact? For example, if you can't add a new insurance policy, then you can't collect new premiums, which, in turn, means you can't generate new revenue—and that could spell corporate catastrophe. Test execution is prioritized around existing processes first; if they quit working, so does the enterprise. New processes are often less important, because they aren't being used yet.
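
One way to picture this prioritization, purely as an illustrative sketch with made-up process names and weights rather than a prescribed method, is to score each business process by its operational exposure and schedule testing in descending order:

    from dataclasses import dataclass

    @dataclass
    class BusinessProcess:
        name: str
        revenue_impact: int    # 1 (low) to 5 (critical); assumed scale
        users_affected: int
        in_production: bool    # existing processes outrank new ones

    def risk_score(p: BusinessProcess) -> float:
        # Weights are illustrative assumptions, not an industry standard.
        score = p.revenue_impact * 10 + p.users_affected / 1000
        return score * 2 if p.in_production else score

    processes = [
        BusinessProcess("Add new policy", 5, 2000, True),
        BusinessProcess("Run quarterly report", 2, 50, True),
        BusinessProcess("New claims portal", 4, 0, False),
    ]

    # Execute tests against the highest-exposure processes first.
    for p in sorted(processes, key=risk_score, reverse=True):
        print(f"{p.name}: {risk_score(p):.1f}")

Doubling the score of in-production processes encodes the point that existing processes come first; the actual weights would come from your own operating profile.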

Because of the operational bias, BPA is more likely to report to the business. Domain knowledge is crucial, and the best BPA testers are often expert users who were promoted because of their extensive experience.

Following the operational risk paradigm, defects are rated by their probable effect on the enterprise. An error that crashes an application may get a low rating if it carries minor operational risk—for example, the situation is so rare that it is unlikely to occur, or there is a reasonable workaround or avoidance strategy. On the other hand, a result that is not even technically an error—say, a degradation in performance—may be a showstopper if it slows the productivity of 10,000 reservation agents.
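
A small sketch of that rating logic, with thresholds that are illustrative assumptions rather than any standard, shows how a rare crash can rate low while a mere slowdown rates as a showstopper:

    def business_severity(crashes: bool, likelihood: float,
                          has_workaround: bool, users_affected: int) -> str:
        """Rate a defect by operational risk rather than technical behavior.
        Thresholds and weights here are illustrative assumptions."""
        exposure = likelihood * users_affected
        if exposure >= 5000:
            return "showstopper"   # broad, near-certain business impact
        if crashes and not has_workaround:
            return "high"          # technical failure with no escape hatch
        if exposure >= 500:
            return "medium"
        return "low"

    # A rare, avoidable crash rates low...
    print(business_severity(crashes=True, likelihood=0.001,
                            has_workaround=True, users_affected=100))
    # ...while a slowdown that hits 10,000 agents is a showstopper.
    print(business_severity(crashes=False, likelihood=1.0,
                            has_workaround=False, users_affected=10000))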

The logic behind this strategy is that software serves the business, so it doesn't matter whether the software works perfectly but only whether it meets the business need. It is this overriding economic argument that is behind most decisions to ship software with known defects. The value of making the software available in a timely manner outweighs the potential inconvenience of errors or workarounds.

About the author


Linda G. Hayes is a founder of Worksoft, Inc., developer of next-generation test automation solutions. Linda is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine's People to Watch and one of the Top 40 Under 40 by the Dallas Business Journal. She is a regular columnist and contributor to StickyMinds.com and Better Software magazine, a columnist for Computerworld and Datamation, the author of the Automated Testing Handbook, and co-editor, with Alka Jarvis, of Dare To Be Excellent, a book on best practices in the software industry. You can contact Linda at lhayes@worksoft.com.
