“But it was just a tiny little change! How could we have known it would cause such big problems?” Regression (going backward) is a fact of life in software systems. Even though something worked before, there is no guarantee that it will work after the latest "minor" change. Yes, modular design and sound system architecture can limit the likelihood of unintended effects, but they won't eliminate them altogether.
Regression testing will always be necessary. With the very limited amount of time we have for testing a minor change, though, how can we do sufficient regression testing? How can we know where to look? How can we reduce the risk that there will be problems?
The Regression Problem
The regression problem has its roots in the inherent complexity of software systems. As the complexity of a system increases, so does the likelihood that changes to it will have impacts that are difficult to foresee. This is true even of newly crafted systems whose developers used the latest techniques and are still available to make any necessary changes.
The regression problem grows exponentially as a system ages. After several years, it will have been changed many times, usually by people who were not involved in the original development. Even if these people did their best to understand the underlying design and structure, it is unlikely that the changes they made fit cleanly with the original design theme. The more changes like this that have been made, the more complex the system grows, until it becomes brittle.
Brittle software is just like brittle metal: it has been bent and twisted so many times that anything you do to it is likely to cause it to break. When a software system becomes brittle, people actually fear changing it. They know that anything they do is going to cause more problems than the changes will fix. Though the term is rarely used, being brittle (un-maintainable) is one of the main reasons that old software systems are replaced.
The Regression Testing Squeeze
Because any system is subject to regressions, regression testing of any and all changes is important. Who has time to fully re-test a system to which a small change was made, though? When the development only took a week or so, we certainly cannot afford to spend months fully re-testing the entire system. We would be lucky to have a week for testing. More likely, only a few days will be allowed.
Since full re-testing is out of the question, we must then decide how to spend the little time that we do have available for testing. How can we know where to look, though? How can we anticipate these unanticipated problems? It's kind of like the joke where the boss demands a list of all of the unexpected problems that will come up!
In reality, we always have a testing squeeze, even when testing a brand new system. There is never enough time to do all of the testing that should be done, so we must spend the time we have available for testing in the best way possible. The method we use in those situations, "risk-based testing," is the same method we must apply in regression testing.
The essence of risk-based testing is that we assess the risks involved in the different parts of the system, and focus our testing where the risks are highest. This method may allow some parts of the system to remain only marginally (or possibly totally) untested, but it ensures that the parts left untested are the ones where the risk is lowest.
"Risk," as it applies to testing, is the same as risk in any other situation. In order to evaluate risk, we must recognize that it has two distinct dimensions: Likelihood and Impact.
· "Likelihood" is the probability that something will go wrong. Without considering Impact, simply consider how likely it is that there will be a problem.
· "Impact" is the effect if something does go wrong. Without considering likelihood, simply consider how bad it would be if there is a problem.
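The two dimensions above can be sketched in a few lines of code. This is a minimal illustration, not a standard formula: the `risk_score` function, the 1–3 ordinal scale, and the multiplication convention are all assumptions for the sake of the example (some teams add the dimensions or use a lookup matrix instead).

```python
# Hypothetical sketch: risk as two independent dimensions,
# each rated on a simple low/medium/high ordinal scale.
LOW, MEDIUM, HIGH = 1, 2, 3

def risk_score(likelihood, impact):
    # Multiplying the two dimensions is one common convention
    # for collapsing them into a single comparable score.
    return likelihood * impact

# A high-likelihood, high-impact area scores far above
# a low-likelihood, medium-impact one:
print(risk_score(HIGH, HIGH))   # → 9
print(risk_score(LOW, MEDIUM))  # → 2
```

The point of the score is only to rank areas against each other; the absolute numbers carry no meaning on their own.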
Consider a hypothetical accounting system in which we are making a change to the way carrying charges are assessed on receivables. The change will take three days and we will have two days to do the testing. Since we cannot fully test the accounting system in two days, we will assess the risks of the change to the various parts of the system.
· The carrying charges functions have a high likelihood of failing, because those are where the changes were made. They are also a relatively high impact part of the system, since they affect revenue. Being high likelihood and high impact means that this part of the system warrants much of our testing effort.
· The rest of accounts receivable has a medium likelihood of failing, since the changed functions are an integral part of that subsystem. Since AR affects revenue, the impact of failures is high. AR, in general, merits some strong testing focus because of its medium-high risk.
· General ledger (GL) has a low likelihood of failure, but failures in GL would have a high impact on the company. GL has a low-high risk.
· Finally, accounts payable (AP) has a low likelihood of failure because the changed functions are not related to it at all. The impacts of failures in AP are medium at most. With a low-medium risk, AP doesn't merit much testing at all.
Using this risk information, we might choose to allocate our testing this way:
· 50% of our testing (one of our two days) will focus on the new carrying charges functions
· 30% (more than half of the second day) will focus on the rest of accounts receivable
· 15% (a couple of hours) will focus on general ledger
· 5% (the remaining hour or so) will focus on accounts payable
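The allocation above is simple enough to express as a short script. This is just a sketch of the arithmetic, assuming an 8-hour working day; the subsystem names and percentages are taken from the example, while the dictionary structure and variable names are illustrative.

```python
# Convert the risk-based percentages into hours of testing,
# assuming a two-day window of 8 working hours each.
BUDGET_HOURS = 2 * 8

# Shares assigned in the accounting example, highest risk first.
allocation = {
    "carrying charges":            0.50,
    "rest of accounts receivable": 0.30,
    "general ledger":              0.15,
    "accounts payable":            0.05,
}

for area, share in allocation.items():
    print(f"{area}: {share * BUDGET_HOURS:.1f} hours")
```

Run against a 16-hour budget, this gives 8 hours for the carrying charges functions, 4.8 for the rest of AR, 2.4 for GL, and 0.8 for AP, matching the rough "one day / half a day / a couple of hours / an hour or so" breakdown above.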
Using a risk-based testing strategy will not guarantee that there will be no regressions. It will, however, significantly reduce the risk that is inherent in making small changes to a large system.