Software Quality Assurance goes head-to-head with Business Process Assurance in this column. Linda Hayes explains the differences between the two processes and tells us why she thinks BPA may be the wave of the software-testing future.
Business processes are hot. Not only are they emerging as literal corporate assets—you can actually patent them—but they are becoming the glue that binds service-oriented architectures together. What does this mean to you?
It means you have to rethink your test process: switch from testing bottom-up to testing top-down, and from vertical to horizontal.
The Tried and True
Traditional software quality assurance (SQA) methodology follows the V-model, which dictates that the test process tracks the development process. You start with unit testing of the code modules, move into integration testing of component interfaces, then on to system testing, and finally to acceptance. In other words, you test the software the way it is built.
Following this logic, the best SQA testers are technical and often report to development. The more they know about how software is developed, the better equipped they are to predict problem areas, ferret out errors, and do the diagnostics needed to adequately explain and reproduce issues so that the programmers can fix them.
This focus on software construction also drives the thinking behind test case design: you direct your tests toward the areas with the highest risk of coding errors. For example, you exercise boundaries because programmers commonly slip up on comparison arguments, and you partition inputs into equivalence classes because of how data types are used and stored. For test execution priority, new features often come before existing ones because they carry the highest risk of failure.
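To make the boundary idea concrete, here is a minimal sketch of boundary-value test cases in Python. The `ticket_price` function and its age bands are hypothetical, invented purely to illustrate where off-by-one comparison errors tend to hide.

```python
# Hypothetical function under test: prices tickets by age band.
# The band edges (13 and 65) are exactly where a '<' vs '<='
# comparison mistake would slip through casual testing.
def ticket_price(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 13:          # child band: 0-12
        return 5
    if age < 65:          # adult band: 13-64
        return 10
    return 7              # senior band: 65 and up

# Boundary-value cases: each band edge plus its nearest neighbors.
boundary_cases = [(0, 5), (12, 5), (13, 10), (64, 10), (65, 7)]
for age, expected in boundary_cases:
    assert ticket_price(age) == expected, (age, expected)
print("all boundary cases passed")
```

A tester who checked only mid-band values like 30 would never catch a programmer who wrote `age <= 13` by mistake; the 13-and-neighbors cases above would.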
Even the severity rating of defects is based on the way the software responds. If it crashes, you've got a showstopper. Unexpected error messages rate high, graceful error recoveries somewhat lower, and inconvenient-but-workable issues even lower.
Of course this strategy has an irrefutable logic: If your mission is to expose flaws in the software, then your best bet is to target where they are most likely to occur. But this mission can also lead to unwanted effects, such as motivating testers to devote their time to the fringe cases that are more fruitful for defects, while avoiding mainstream areas because they are more reliable.
The Hot and New
Business process assurance (BPA) is completely different. It follows a top-down model that starts from the operating profile of the enterprise, then breaks down into the business processes that drive it, the applications that support the processes, the services that integrate the applications, all the way down to the components that enable the services. In other words, you test the software the way it will be used.
In this model, your test case design is driven by operational risk. Which business processes represent the greatest exposure in terms of time, money, security, customer, or market impact? For example, if you can't add a new insurance policy, then you can't collect new premiums, which, in turn, means you can't generate new revenue—and that could spell corporate catastrophe. Test execution is prioritized around existing processes first; if they quit working, so does the enterprise. New processes are often less important, because they aren't being used yet.
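One common way to operationalize this kind of risk-driven prioritization is a simple score of business impact times usage frequency. The sketch below is illustrative only; the process names, weights, and scoring scheme are assumptions, not anything from the column.

```python
# Hypothetical risk scoring for test execution order:
# priority = business impact (1-5) x usage frequency (1-5).
processes = [
    # (process, impact, frequency)
    ("add new policy",      5, 4),
    ("renew policy",        4, 5),
    ("print policy packet", 2, 3),
    ("new quote wizard",    3, 1),  # new feature: not yet in daily use
]

# Existing, high-impact, heavily used processes rise to the top;
# the brand-new feature lands at the bottom of the run order.
ranked = sorted(processes, key=lambda p: p[1] * p[2], reverse=True)
for name, impact, freq in ranked:
    print(f"{name}: risk score {impact * freq}")
```

Note how this inverts the SQA instinct: the untested new feature scores lowest precisely because nobody depends on it yet.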
Because of the operational bias, BPA is more likely to report to the business. Domain knowledge is crucial, and the best BPA testers are often expert users who were promoted because of their extensive experience.
Following the operational risk paradigm, defects are rated by their probable effect on the enterprise. An error that crashes an application may get a low rating if it carries minor operational risk—for example, the situation is so rare that it is unlikely to occur, or there is a reasonable workaround or avoidance strategy. On the other hand, a result that is not even technically an error—say, a degradation in performance—may be a showstopper if it slows the productivity of 10,000 reservation agents.
The logic behind this strategy is that software serves the business, so it doesn't matter whether the software works perfectly but only whether it meets the business need. It is this overriding economic argument that is behind most decisions to ship software with known defects. The value of making the software available in a timely manner outweighs the potential inconvenience of errors or workarounds.
Vertical vs. Horizontal
There is yet another paradigm shift between SQA and BPA. While testing software is usually confined to a single application, it may involve many vertical layers: a user front end, middleware, server-side business rules, and then an enterprise services or messaging layer that wraps the back-end databases. Testing these vertical layers is no trivial task.
But testing a business process involves horizontal, end-to-end functionality. A single business process may span multiple applications that reside on different platforms. For example, a business process that adds a new insurance policy may span applications that perform policy entry, underwriting, issuance, printing, invoicing, and mailing. Coordinating the test environment and data stores across all of these applications and platforms can be overwhelming without a robust testing infrastructure.
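A horizontal test drives the whole process in sequence, handing each application's output to the next. The sketch below uses stub functions standing in for the separate applications; every name and field here is hypothetical, meant only to show the shape of an end-to-end process test.

```python
# Sketch of a horizontal, end-to-end process test. Each step stands in
# for a separate application in the policy-issuance chain.
def enter_policy(data):  return {**data, "policy_id": "P-1001"}
def underwrite(policy):  return {**policy, "approved": True}
def issue(policy):       return {**policy, "status": "issued"}
def invoice(policy):     return {**policy, "invoice_id": "I-2001"}

def test_new_policy_process():
    # One test walks the entire business process, the horizontal axis,
    # rather than probing any single application's internals.
    state = {"applicant": "J. Smith"}
    for step in (enter_policy, underwrite, issue, invoice):
        state = step(state)
    assert state["approved"] and state["status"] == "issued"
    return state

result = test_new_policy_process()
print("end-to-end process passed:", result["policy_id"], result["invoice_id"])
```

In a real environment each stub would be a call into a different application on a different platform, which is exactly why the shared test data and environment coordination become the hard part.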
Good News, Bad News
The good news about the advent of BPA is that it may be easier to get budget allocated by the business because it is seen as mission critical, whereas SQA often has to make do with the leftovers from development because it is viewed as a support function for wayward programmers. The bad news is that your current test processes and environment probably fall far short of what will be needed to succeed. All in all, though, it's exciting to catch the wave of what may be a sea change in how software is tested.