Untruths about software testing are common. Managers, programmers, and other people on software projects don't always mean to deceive. Quite often, they fool themselves into believing what they want to believe. But sometimes they lie deliberately and even pressure testers to lie. And testers can also practice deceptions and self-deceptions of their own. In this week's column, Fiona Charles describes four categories of common deceptions and self-deceptions in testing and outlines what testers need to do to address them.
Have you heard any of these lately?
"The testers are finding too many bugs and holding up the project."
"Anyone can test. We just have to give them the right process to follow."
"Our test cases will provide complete system coverage."
Not one of these common statements about testing is true. At least one of them could have been said by a tester.
Delivering and promoting accurate communications about testing is essential to the tester's and test manager's job. We have a responsibility to dispel myths and misconceptions about good testing and what it can and cannot do. We must also be alert to, and prepared to address, distortions or attempts to spin the message about testing from any source, including ourselves.
Testing deceptions and self-deceptions often arise from excessive optimism: the triumph of hope over experience (or hope over hard data). Sometimes they come from people attempting to find a place to lay blame. Humans can fool themselves into believing all sorts of impossible things, and occasionally they even resort to deliberate lies. Exaggerating or downplaying risk, inflating test coverage, blaming testing for project delays when the product quality is poor, misrepresenting testing status and findings: these are only some of the kinds of deceptions and self-deceptions testers encounter on software projects.
Let's look at some more typical examples.
Deceptions Practiced on Testers
"The software is done. It's ready to test."
Every tester has heard this one, only to discover that "done" and "test-ready" don't mean what the project plan said would be done by this date. Somehow, little things like unit tests, and sometimes even finishing coding on some modules, have ceased to be requirements. The programmers made the date, so they're "done."
We've all heard plenty of others. Here are some common ones:
"We didn't change anything significant. You don't need to test." [Although nobody did an impact analysis, and we don't actually know what might break.]
"The infrastructure upgrade [that includes the operating system, the database engine, and the compiler version] will be transparent to the applications. You'll only need to do a sanity test."
"You only need three weeks for testing." [Because the code is late and we cut three weeks from the test schedule.]
Are programmers, project managers, and others lying when they say these things? Quite possibly not, but if not, they've certainly fooled themselves into believing what they want to believe.
Deceptions about Testers and Testing
Sometimes deceptions appear at a higher level, in what managers, project managers, and programmers say about testers and testing. Often, those untruths reflect what they actually believe:
"The testers don't know what they're doing." [They estimated it would take two weeks, but found incomplete code and so many bugs it has taken six, and they're not done yet.]
"Our mature test process employs all the industry best practices." [Major bugs frequently bring production systems down, but we have all the "right" testing documentation.]
"Total test automation will make testing much faster and more efficient, and we can save on expensive labor costs." [We haven't thought about who will design the automated tests, and we have no idea what it will take to maintain them.]
Deceptions Testers Are Pressured to Practice
Testing can be the first place where cracks in a project appear. The light we shine on product quality isn't always welcome, especially when it illuminates a midden full of problems.
Managers who have been hiding ongoing quality issues, or deceiving themselves into not seeing them, can be thrown into a panic by test results that show poor product quality. Some will be tempted to skew the results