Being the Devil’s Advocate for Software Quality


What if someone were to say that most of the time, quality does not matter? That you should only aim for the minimal amount of investment in testing to get the product out the door to start making money? Here, Rob Cross takes the “devil’s advocate” position and provides some arguments against striving for quality. How would you refute them?

If I were to ask, “What is the definition of software quality?” in an open forum, I would receive hundreds of answers, many of them citing the IEEE or personal definitions. What if I were to say that most of those definitions were wrong, or at least irrelevant; that most of the time, quality does not matter? I call this the “devil’s advocate” position, and I think it’s worth discussing.

The argument goes something like this: Quality only matters when it affects the customer’s happiness—customers withholding payment over issues, threatening to cancel or not renew, defecting to a competitor, or saddling us with exorbitant maintenance costs. There are things that do matter, but the logical reaction to them is not delivering perfection. Instead, it should be asking the opposite: What is the minimal amount of investment we can make in testing to get the product out the door to start making money? For that matter, what is the minimal standard we can use—just enough quality—knowing we can leverage our customers as a test bed? They’ll tell us about the bugs and we’ll fix them. This frames quality entirely by how it impacts revenue.

But why stop there? Here are a few more arguments against quality in testing—or at least my understanding of what quality means in testing. Let’s really be the devil’s advocate.

  • First to Market Is Most Important

Sometimes testing slows down release, and slowed release reduces cash flow for the month, the quarter, and the year. If the competition captures the market before we get our software out, then the pursuit of “quality” could be disastrous to the bottom line. In other words, if we define quality in terms of revenue, sometimes the right thing to do is to live with defects.

  • Human Testing? Why? We Have Tools for That!

The right tools lead to high-quality code. Engineers trained in modern tools can identify and catch defects early in the process. The right software and technologies combined with open source tools eliminate the need to invest in other quality measures. If the tools don’t find defects and our developers don’t find defects, then just ship the software. (Besides, if a bug gets through, the customer will report it.)

  • Let the Process Take Care of It

If the team follows a labeled process, such as agile, Scrum, or Capability Maturity Model Integration, then it must be good, right? After all, doesn’t the very word agile mean “good”? If the team says it is agile, then quality is guaranteed!

Besides, investing in a new quality system would cost money, and quality is not just revenue, it is revenue minus cost: profit. Additional processes to “improve” quality would really just slow the team down, creating still more cost, and thus, by definition of the word, decreasing quality.
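The “quality is revenue minus cost” framing can be sketched as a toy calculation. Every number below is invented purely for illustration—the article supplies no figures—but the arithmetic shows how the devil’s advocate would run the comparison:

```python
# Toy model of the devil's-advocate framing: quality "matters" only
# insofar as it changes profit = revenue - cost.
# All figures are hypothetical, chosen purely for illustration.

def profit(revenue, testing_cost, defect_cost):
    """Profit under this framing: revenue, minus what we spend on
    testing up front, minus what escaped defects cost us later."""
    return revenue - testing_cost - defect_cost

# Scenario A: ship early with minimal testing; customers find the bugs,
# and fixing escaped defects costs us later.
ship_fast = profit(revenue=100_000, testing_cost=5_000, defect_cost=20_000)

# Scenario B: invest in testing up front; few defects escape, but the
# slower release also forgoes some revenue.
test_first = profit(revenue=90_000, testing_cost=25_000, defect_cost=2_000)

print(ship_fast, test_first)  # prints: 75000 63000
```

Under these invented numbers, shipping fast wins—which is exactly the devil’s-advocate point. The counterargument, made in the comments below the article, is that the escaped-defect cost is usually far larger than a toy model like this assumes.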

  • You Get What You Pay For

At a dollar or two per download of our app, customers don’t expect very high quality. They might get frustrated, but they won’t feel too bad; the application was only a dollar. Software teams can slowly burn down the defects identified by our customers as time and money allow. Given the right price point, even negative comments here and there in the online store won’t deter most customers. Ninety-nine cents just isn’t that much of a risk. No one expects NASA-quality software at that price.

  • Hear No Evil, Speak No Evil, See No Evil

If sales are up and subscriber attrition is low, why rock the boat with new processes to address a quality problem the company doesn’t have—at least according to revenue metrics?

  • It’s a Supply Chain Problem

If there are problems with the software, then there are problems with the vendors who built the basic software; the in-house teams just integrate the vendor software into the core. Software quality problems go back to the vendor, and the way to manage the vendor is to focus on the what—the requirements—not the vendor’s process.

Sympathy for the Devil

All of these are real excuses I have heard over the years to not improve quality. In some cases the arguments may seem perfectly legitimate, but I have learned to be more than a little bit skeptical. Sure, a big social media company might get its own users to test at one point in its growth history. But companies that are not big social media platforms at a magical point in time follow that strategy at their peril.

Getting rid of the knee-jerk devil’s advocate, which protects whatever is happening now, and instead talking about real improvement is, in my experience, a very healthy step.

Getting to Real Improvement

Valuing software quality is good. Having a definition of quality that everyone on the team shares is good, too. Getting stuck on that definition can sometimes mean retreating to the devil’s advocate stance.

That is not so good.

When people ask, I suggest having a definition of quality, yes; but don’t take yourself so seriously that you become stuck in your ways. Instead, realize there is no silver-bullet solution for addressing software quality, because the challenges change daily. The fun begins when people realize they need to try new approaches and throw away the fear of challenging the old ways—when people stop saying, “That’s the way it has always been done.”

My Quality Soapbox

I take this challenge to rethink quality with me 24/7. My quality soapbox is with me at cocktail parties, weddings, graduations, informal gatherings, sporting events, on airplanes, trains . . . . I’m happy to stand on it when someone asks me about what I do for a living. I often forget to leave the soapbox at home (my wife sometimes reminds me afterward of this) because when you believe in something so much, it becomes part of you, and if you’re not careful, it will define you. People sometimes don’t want to be challenged. Instead, they want an easy answer, to have some laughs, and to feel good about themselves.

Perhaps I need to sit down, shut up, and stop challenging the status quo, or give these folks a break and stop caring more than they do. Or perhaps I shouldn’t—what do you think?

User Comments

Isaac Howard

So, you give ~7 excuses that people use to NOT have quality on a project.
Yet in the end you say “…have a definition of quality…” and “I take this challenge…”.
I’m afraid I don’t see what you took as challenging?
I see no refuting of the individual excuses people use.
I see no definition of quality from yourself.
I see no example of how you persuaded someone to take up the ‘testing-arms’ and challenge these horrible reasons to NOT have quality.
Isn’t the point of a devil’s advocate to give oneself the point of view of another? I also don’t see how you used these excuses as new views into helping people understand why quality matters.

February 16, 2015 - 5:28pm
Rob Cross


Thanks for reading the article and for your comments. I could take the excuses highlighted and write a separate article for each; however, the purpose was to repeat the top reasons I've heard over the years for companies resisting change. I have a feeling you appreciated these highlights as a QA person yourself, and I hope you smiled and said, "Yeah...I've heard that one before."

I did not define quality in this little article because that was not the purpose, and quality means something different to everyone depending on your perspective and role. Entire books have been written on the definition of quality; that wasn't my purpose, but I appreciate your comment. If you're looking for guidance on this, I can direct you to some resources.

I do appreciate your questions and will use your feedback to publish a follow-up article arguing the opposite.

The inspiration for the article came when I was talking with a new prospect, and during the meeting they cited at least two of the highlighted reasons for resisting change. On my drive back to the office, feeling frustrated, I was playing a conversation out in my head as if I were the one resisting change, imagining what excuses I might use.

I personally found it difficult to write this piece because after every sentence I wanted to write the counterpoint argument, which I purposely resisted. I have spent my entire career on the solutions side of the equation, and I almost felt like I was cheating on myself while writing this article. All that being said, I had fun writing it and found that it challenged me, but more importantly it reinvigorated my passion to keep swinging and never give up.
Thanks again for your comments and the inspiration for my next article!


February 16, 2015 - 5:56pm
Robert watson

Interesting article - thanks! Have come across many of these during my time in QA, and am currently working through the mental exercise of making sure I have coherent counter arguments to each of them for the /next/ time they appear, as they inevitably will.

Of course when confronted with reasoning like "testing costs too much and doesn't add enough value" or "we'll just release, let our users find the bugs, then patch" my instinct is to begin looking for the nearest heavy object rather than calmly begin a debate on the topic...  this is why I recommend yoga for all QA professionals :)

February 17, 2015 - 9:40pm
Tim Thompson

All valid points, but they only come true when upfront and initial quality is there. And if things are coded and designed right the first time following coding and UI guidelines and requirements then testing is never a hold up.

What holds up releases is putting bugs into code and then having to spend time to redo work. What holds up releases is having fuzzy requirements that get revised many times within a short period of time (not talking about changes based on customer feedback). None of that has to do with classical testing, but a lot with quality....quality assurance that is, not quality control!

If quality does not matter then not only fire all of QA, but also fire product managers and business analysts. Let developers code at will and release using the cheapest hosting provider you can find. Yes, you are first to market and yes, you saved a lot of money in the process, but fixing things later will not only disrupt development, but it will cost more money. Testing might add a few days to the development time, but that's it. If testing takes longer than that you have done something wrong way earlier.

The points made in the article above may apply for some throw away 99 cent mobile app, but anything that has an expected life span of more than a year or two needs initial quality to be high. It is a matter of pay me now or pay me later and in regards to fixing quality issues paying later is incredibly expensive. So pick your poison. I always opt for inserting quality as early as possible and keeping quality high.

February 18, 2015 - 7:56am
Steven Knopf

My tuppence worth ...

Your test strategy (documented or not) should be a response to the risks of the change being made and the quality policy (documented or not) of your organisation. As such some / all of the reasons to reduce / remove testing that you highlight in your article are valid in some contexts. We all know that we cannot create the same test strategy for a low risk change to an ecommerce site as we do to the development of a safety critical system in, say, the aerospace sector.

February 19, 2015 - 2:33pm
Bruce Logan

Definitely an interesting article. While one can argue against all of the individual points, I think that the one telling point you make from your "Devil's Advocate" standpoint is that it is important to define the cost of quality, whether that be for the organisation, the Dev team, the project or product... once that is done, one can apply the Quality policy, the relevant Test strategies, etc. to best effect. If you don't define the cost of quality (which, in effect, tells you how much quality is required for the situation at hand), the level of quality comes down to who can advocate the best.

Sometimes, time to market may indeed be critical, and one has to live with more defects (at least initially); other times, certain functions must work correctly, regardless of the time taken. If you don't know how much a specific level of quality costs, you can never prioritise such issues properly. If you have the cost, it is much easier to argue the case with the various stakeholders, to come to the most appropriate answer.

Although what I've said above may seem like "weaseling out" of a proper answer, my point is that there are as many proper answers as there are situations; if you don't have certain foundations in place, you can never find the proper answer for your particular situation.

February 20, 2015 - 9:13am
