Exit Criteria, Software Quality, and Gut Feelings

[article]
Summary:
Bug counts and trends don't cover all the quality aspects of a product. Good exit criteria provide an orderly list of attributes that research and experience have shown to affect product quality, so you can monitor quality at any given time and forecast the expected status at release. That's how you improve your product.

A few months ago I happened to meet an old friend who is also a software tester at a social gathering. Soon enough we started to talk shop, and my friend shared his bleak situation.

“Tomorrow we have a project meeting to approve the release of our latest version,” he said. “I am going to recommend we don’t release it, but I know I will be ignored.”

“Why this recommendation?”

“Because I have a strong gut feeling that the last version we got—the one about to be released to our customers—is completely messed up. But the product manager just follows our exit criteria list, which we meet, and ‘gut feeling’ is not considered to be solid data.”

I inquired about the source of this gut feeling and got the whole story: Two weeks ago his team received a build for a last test cycle before the official release. Theoretically, everything was supposed to be perfect. But on the first day of testing, the testers found two critical bugs.

The development team rushed to fix them and released a new build. However, for two weeks, the cycle repeated: The testers would find one or two critical bugs and the developers would release a new build with fixes. This created the problem my friend had.

“By our exit criteria, we can’t release a version that has critical bugs,” he explained. “But the development team fixed every bug we found, so at least for this moment in time, there are no open critical bugs. In the last two weeks I got eight new builds. Do you think we got even close to testing the product properly? Based on the quality level of builds in the last two weeks, I can guarantee that the build we got today contains a good number of additional critical bugs. If I get a few more days to test it, we will find them, but release day is tomorrow, and on paper everything is fine.”

What can we do to deal with such situations? How do we translate our professional intuition, our gut feeling, into hard data?

Creating Comprehensive Exit Criteria

The ideal way to communicate these feelings is to be proactive. You need exit criteria that are more sophisticated than a simple bug count and severity breakdown. Add trends: the rate of incoming new bugs, the rate at which bugs are being closed, and the predicted bug count based on the number of yet-to-be-executed tests.

Here are some examples of such criteria:

  • The number of critical bugs opened in the last test cycle is less than w
  • The count of new bugs found in the last few weeks is trending down at a rate of at least x bugs/week
  • Testing was executed for at least y days on the last version without any new critical bugs and no more than z new high-severity bugs
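Criteria like these are easy to automate once bug data is exported from a tracker. The sketch below is a minimal, hypothetical illustration of how the three example criteria above could be checked mechanically; the data structure, field names, and default thresholds (w, x, y, z) are assumptions for illustration, not part of any real tracker's API.

```python
# Minimal sketch of automated exit-criteria checks.
# All names and thresholds here are hypothetical examples.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CycleStats:
    critical_opened_last_cycle: int       # critical bugs opened in the last test cycle
    new_bugs_per_week: List[int]          # weekly new-bug counts, most recent last
    days_since_last_critical: int         # days of testing with no new critical bug
    high_severity_on_last_build: int      # new high-severity bugs on the last build

def exit_criteria_met(s: CycleStats,
                      w: int = 1,   # max-allowed criticals is w - 1
                      x: int = 5,   # required weekly drop in new bugs
                      y: int = 7,   # required critical-free days
                      z: int = 3    # max new high-severity bugs
                      ) -> Tuple[bool, List[str]]:
    """Return (passed, reasons) for the three example criteria."""
    reasons = []
    if s.critical_opened_last_cycle >= w:
        reasons.append(f"{s.critical_opened_last_cycle} critical bug(s) "
                       f"opened in last cycle (limit {w - 1})")
    if len(s.new_bugs_per_week) >= 2:
        drop = s.new_bugs_per_week[-2] - s.new_bugs_per_week[-1]
        if drop < x:
            reasons.append(f"new-bug count dropping by only {drop}/week "
                           f"(need at least {x})")
    if s.days_since_last_critical < y:
        reasons.append(f"only {s.days_since_last_critical} day(s) without a "
                       f"new critical bug (need {y})")
    if s.high_severity_on_last_build > z:
        reasons.append(f"{s.high_severity_on_last_build} new high-severity "
                       f"bugs on last build (limit {z})")
    return (not reasons, reasons)
```

In my friend's case, a check like this would have failed loudly: eight builds in two weeks, each with fresh critical bugs, means the "days without a new critical" and "trend" criteria could not be satisfied even though the open critical count was zero on release day.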

User Comments

18 comments
Diwakar Menon

The problem with coverage data is that it ignores the question "What if I go live?" Testers caught up in trying to prove, as your friend's team required, that you have fewer than x critical defects ignore the perils of going live. We have gone live with critical defects, but always with a thorough analysis of which part of the business would be impacted, what risks we faced, and whether it was worth it given the business imperatives.

Diwakar

November 15, 2016 - 10:20pm
Michael Stahl

Hi Diwakar - 

I don't think there is a contradiction. Possible exit criteria for you would be "An analysis of the possible business impact of each critical bug was done" and "A risk analysis of all critical bugs was done."

Exit criteria are there to make sure you go through the process your organization thinks needs to take place before a milestone can be announced. The details will vary between organizations.

Michael. 


November 16, 2016 - 5:25pm
Johnny Marin

I just read your article, really liked it, and want to use it as reading in a testing course I'm creating. Do you have a Spanish version?

If not, would you grant me permission to translate it?

November 15, 2016 - 11:06pm
Michael Stahl

Hi Johnny -

You can use and translate it. Please remember to give credit to the source and to keep the copyright notice.

Regards, Michael


November 16, 2016 - 5:18pm
John Wilson

Interesting that a quality problem has been perceived as a test problem and not a whole-team problem, and that the fix has to come from test, not the whole team. There's a far bigger problem than exit criteria to be resolved here, and putting in place 'improved' exit criteria, at whatever stage, just ain't gonna fix it.

November 16, 2016 - 4:21am
Michael Stahl

Hi John - 

I agree that if exit criteria (putting them in place; reviewing them) are considered solely Test's responsibility, they won't lead to a marked improvement in the product. They may provide better release control, but indeed you don't want to even get to this point.

As I recommend, defining the exit criteria is a cooperative effort that should take place early in the product life cycle, with all the involved stakeholders. Tracking them should be the ownership of a specific person (e.g., the product manager), because spread ownership does not generally work, in my opinion. Taking measures to meet them is again a cooperative effort.

In the case of my friend, I agree that the fact that the program manager considered only the "dry" numbers and did not read further into the larger picture hints at a bigger quality problem.

Thanks for the comment!

Michael

November 16, 2016 - 5:32pm
Herb Ford

Did your friend have regression testing as part of his plan? Normal practice that I try to follow is to always have regression time baked into my QA strategy.

November 21, 2016 - 6:24pm
Michael Stahl

Hi Herb -

Planning for regression time is fine, but in this case, regression failed and failed again... Even if you do plan for regression time (as was the case here), there is a limit to how much time you'd assign to it. Just before release time, you assume things will be pretty much healthy.

(But all this is beside the point; the story is just a way to show how exit criteria could work for you.)


Michael


November 22, 2016 - 5:44am
Tim Thompson

Regression is important, but why does it always happen at the end of a cycle, just weeks before release? There is not much time left to uncover and fix issues. Regression tests are, in my opinion, a subset of the regular tests, and they are most effective when they are fully automated and can run autonomously. That way regression can happen every week, or even every day, depending on how many resources are available for automation.

Things being healthy just before release time is also not a given. Feature development gets pushed to the end, ignoring that time is needed to properly test and fix. The only way out of this is to eliminate set release dates. Setting release to a date is artificial time boxing; sometimes things take a few days longer. It will also benefit the customer, because if features are done, they do not have to wait until release date to get them. Continuous delivery in unison with continuous regression will generate more value and also take the pressure off teams. The business and customers will get features when they are done and working, not on a date that was selected without quality in mind. I think everyone is better served when we take that extra week to get it right the first time rather than constantly chase preventable issues in production.

November 22, 2016 - 6:19am