Despite all of the advancements in defect tracking systems over the past few years, companies are still using the same ambiguous, canned fields, Severity and Priority, to categorize their defects. Let's examine a better way to assign importance to a defect.
Every software development company uses a defect tracking system. A number of vendors have provided the software development community with useful and powerful defect tracking tools over the past few years. But are these packages being used properly? Nearly everyone defines an issue or defect with the same data fields, the same types of values, and the same definitions of those fields and values. With all the changes and advancements in defect tracking systems, perhaps we need to revise the fields available and the way we categorize defect reports. Two defect tracking fields in particular, Severity and Priority, are prevalent, but they allow ambiguity to slip into the process.
I have worked for several different companies and have had the opportunity to work with different tracking systems. Different tools provide varying levels of functionality in the software defect tracking process. But most of these tools have the following fields in common: Title, Description, Submitter, Owner, Subsystem, Component, Status, Resolution, ID, Priority, and Severity.
Most of these fields serve a useful purpose. The Title provides a brief description of the issue for quick ticket management and review. The Description is essential; without it, the other fields lose their meaning. The Submitter allows tracking of the source of the issue, so that development can obtain additional information about the defect if necessary. The Owner field tells us whom to ask for the current status of the issue. Subsystem and Component categorize the issue and map it to a particular component of the system, enabling metrics such as the number of defects per system module. Status and Resolution let us determine which issues have been resolved and how. And of course, an ID is needed to order the issues easily and assign each a unique identifier.
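As a concrete illustration, the common fields above might be modeled as the following minimal record. This is only a sketch; the class, field names, and status values here are hypothetical and not taken from any particular tracking tool.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    """Illustrative lifecycle states; real trackers define their own."""
    OPEN = "open"
    IN_PROGRESS = "in progress"
    RESOLVED = "resolved"


@dataclass
class Defect:
    """A minimal defect record with the fields most trackers share."""
    id: int                  # unique identifier for ordering and lookup
    title: str               # brief description for quick review
    description: str         # full details; the other fields depend on it
    submitter: str           # who found the issue
    owner: str               # whom to ask for current status
    subsystem: str           # coarse categorization for metrics
    component: str           # finer-grained categorization
    status: Status = Status.OPEN
    resolution: str = ""     # how the issue was resolved, once it is


bug = Defect(
    id=101,
    title="Spelling error on login screen",
    description='"Pasword" should read "Password".',
    submitter="tester-a",
    owner="dev-b",
    subsystem="UI",
    component="Login",
)
```

Notice that nothing in this record yet says how important the defect is; that is the job of the two fields discussed next.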
But the last two fields, Priority and Severity, seem of questionable usefulness. The tester or test manager usually fills out the Severity field when an issue is first submitted into the defect tracking system. Product management then usually fills out the Priority field, following a meeting to gather information about the issue. Some may argue that these fields are the most important in the whole report, allowing a degree of impact and urgency to be associated with the description. The values for both fields are usually High, Medium, and Low (or something similar). Severity is typically defined along these lines:
- High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
- Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
- Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.
Priority typically carries definitions like these:
- High: This has a major impact on the customer. It must be fixed immediately.
- Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development, or a patch must be issued if possible.
- Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.
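These two three-valued fields amount to two independent enumerations, which makes the ambiguity easy to see: nothing in the model prevents contradictory-feeling pairings such as low severity with high priority. The names below are illustrative, not drawn from any particular tool.

```python
from enum import Enum
from itertools import product


class Severity(Enum):
    """Assigned by the tester or test manager at submission time."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


class Priority(Enum):
    """Assigned by product management, usually after a meeting."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


# Every combination is representable, including the ambiguous dualities:
# (Severity.LOW, Priority.HIGH) or (Severity.HIGH, Priority.LOW).
combinations = list(product(Severity, Priority))
print(len(combinations))  # 9 possible pairings to argue over in triage
```

With nine possible pairings and no rule relating the two axes, the two fields invite exactly the ambiguous dualities examined below.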
So the priority and severity fields tell us how severe an issue is to the customer, how severe it is to the testing schedule, and how urgent it is that this issue be resolved. This is indeed vital information to have when identifying issues with software. But while the severity and priority fields serve the purpose of communicating this information, they do not do it in the most effective and unambiguous way possible.
Think of the following type of problem: a spelling error on a user-interface screen. What severity does this issue deserve? Well, judging from our canned definitions, it would seem that this is a low-severity item. After all, the server doesn't crash due to a spelling error. But is this truly a low-severity problem? A spelling error will probably not hinder a customer's ability to use the system, but it greatly affects the customer's perception of the company that created the product and of the quality of the product. So from customer-relations and corporate-image points of view, the severity of this type of issue is indeed high. But the severity field doesn't allow us to express that properly. So the need for the priority field becomes apparent. The priority field does allow product management to define this issue as high priority, but this creates the case where something is low severity but high priority. To me, this is an ambiguous duality. How exactly does this issue stack up against the others in the system? When should a developer look at this issue?
Let's consider another case: the anomalous server crash. We've all seen this type of issue. A server crash that occurs on the first full moon of every leap year but that is not reproducible by any human means on a consistent basis. So how would this issue be categorized within the defect tracking system? Well, since it is a server crash, many would argue it should be a high-severity issue. After all, the system is inoperable until the server is restarted. But what is the impact to the customer? In this case, the impact is quite small. Since the customer may never see this issue present itself at all in a production environment, it would be given a low priority by product management. Here then is another case where an issue has an ambiguous duality: a high-severity issue that is not a high-priority issue.
A Modest Proposal
I recommend eliminating the Severity and Priority fields and replacing them with a single field that can encapsulate both types of information: call it the Customer Impact field. Every piece of software developed for sale by any company will have some sort of customer. Issues found when testing the software should be categorized based on the impact to the customer or the customer's view of the producer of the software. In fact, the testing team is a customer of the software as well. Having a Customer Impact field allows the testing team to combine documentation of outside-customer impact and testing-team impact. There would no longer be the need for Severity and Priority fields at all. The perceived impact and urgency given by both of those fields would be encapsulated in the Customer Impact field.
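A sketch of the proposal, assuming a simple three-valued scale: because there is only one field, ordering the backlog becomes a straightforward sort rather than a negotiation between two axes. The enum values, comments, and the `triage` helper are hypothetical illustrations, not a prescription.

```python
from enum import IntEnum


class CustomerImpact(IntEnum):
    """Single field replacing both Severity and Priority."""
    LOW = 1     # customer is unlikely to notice; defer if needed
    MEDIUM = 2  # fix before the current release, or issue a patch
    HIGH = 3    # major impact on the customer, or on the customer's
                # perception of the company; fix before release


def triage(issues):
    """Order the backlog by customer impact, highest first.

    Hypothetical helper: each issue is a (title, impact) pair.
    """
    return sorted(issues, key=lambda issue: issue[1], reverse=True)


backlog = [
    ("Anomalous server crash on leap-year full moon", CustomerImpact.LOW),
    ("Spelling error on UI screen", CustomerImpact.HIGH),
]
for title, impact in triage(backlog):
    print(f"{impact.name}: {title}")
```

Because `CustomerImpact` is a single, totally ordered scale, a developer asking "what should I look at next?" gets one unambiguous answer instead of a severity/priority pair to reconcile.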
Let's consider our previous examples. In the first, the spelling error on the user interface screen was rated as low severity by the testing team but as high priority by product management. This ambiguous duality disappears when the Customer Impact field is used in place of Severity and Priority. This particular issue would have a high impact on the customer's view of the company that produced the software and of the quality systems in place at that company. Therefore, the issue would have a "High" Customer Impact. Is any other information needed to categorize it? No. This is a high-impact issue for the customer and for customer relations, and the development staff would therefore schedule it for a fix before the official release. In this case, the Customer Impact field conveys the same information while replacing two fields with one, removing ambiguity from the issue, and eliminating the need for a meeting to determine its priority.
Now let's look at our second example. Under the severity/priority method, the anomalous server crash would again have had a duality: high severity and low priority. It would have been rated high severity because it was a server crash that caused data loss and required the user to reboot the system. But since the user would almost never notice it, it carried a low priority. Again we can eliminate two fields and a meeting by simply using the Customer Impact field. How does this issue, a server crash on the first full moon of every leap year, affect the customer? Very little. In some cases, the customer might not even have this release of the software installed long enough to notice it. So this issue would have a "Low" Customer Impact. Two fields are replaced by one, and a meeting is eliminated, while the issue is still categorized so that the development team can schedule it properly for resolution.
I hope this paper stimulates further review of the way defects and issues are tracked and managed during the software development lifecycle. The canned fields and definitions that have proliferated in many companies should be evaluated periodically to see whether there are more meaningful ways to categorize issues so they can be resolved for customers. With all of the recent advances in workflow definition and reporting capabilities in defect tracking systems, this may be an opportune time for such a reevaluation. I hope these ideas provide a good place to start in getting the most out of your defect tracking system and easing the pain of ambiguously categorized and prioritized issues.
Very good point, Brian. As testers, we would do much better to focus on how defects affect business continuity. The end-user, or customer, ultimately determines our employer's success. The single Customer Impact determinant would be more effective in assigning the order to fix defects.