3 Testing Practices We Should All Stop

Summary:
Testing evolves, and it becomes clear that some concepts we’re all used to doing are no longer applicable today. It’s important to periodically take stock of our testing practices and cull the ones that no longer make sense—or are downright harmful. Here are three common testing practices it’s in our best interests to stop doing.

The software industry has gone through many innovations and evolutions. There have been multiple models, cycles, and frameworks, like waterfall, the V-Model, agile and its several variations, and more.

There have been attempts to standardize testing as well, but the testing community has pushed back against them. Many testers believe it’s a good thing that there are no standards, and they practice many forms of testing, including testing in production, session-based test management, exploratory testing, and test- and behavior-driven development.

But even with diverse practices, testing evolves, and it becomes clear that some practices we’re all used to are no longer applicable today. It’s dangerous to keep doing something just because it’s the way we’ve always done it, so it’s important to periodically take stock of our testing practices and cull the ones that no longer make sense—or are downright harmful.

Here are three common testing practices it’s in our best interests to stop doing.

1. Performance appraisal based on bug counts

If testers are judged by the number of bugs filed and fixed, the focus is no longer on better quality for the product. People become more worried about how many bugs were found, and goal displacement comes into the picture.

Testers start filing easy-to-find bugs or creating a separate report for the same bug on each platform, and developers start rejecting reports as hard to replicate or as not a bug. The tester who was spending considerable time designing a good test for one critical bug is now hunting for low-hanging fruit. The developer who was designing a long-term solution to make a more robust product is now applying short-term fixes for low-severity bugs.

Bug count alone never tells the complete story. Replace the word “bug” with “idea” and the concept is clearer: One person has five ideas and another has two ideas. Does that mean the one with five ideas is the better tester or programmer? Without understanding what the bugs are, or the complexity of finding or fixing them, we cannot come to good conclusions.

So, how do we appraise testers’ performance now?

First, it’s a good idea to talk to the tester and create a plan for the skills that will help both them and the company. If the tester has never tested a mobile app and your team might need a mobile app tester in the coming months, this can be a good opportunity for everyone.

If you want to quantitatively gauge testers’ skills, you can still measure them. Just be as specific as possible in listing the activities the tester should be able to perform, and assign a mutually agreed-upon timeline. For example, the tester should be able to find bugs at the UI, database, and server levels, to use tools like X, Y, and Z, and to perform the following actions.

These kinds of goals focus on the tester’s learning and skills instead of a narrow focus on a specific attribute. It might be more time-consuming than just focusing on bug counts, but it is more valuable and improves both the individual and the company.

2. Test suite pass and fail percentages

Any tester who has found a few bugs knows that the majority of bugs are found when deviating from the script. Yet teams spend a lot of time scripting test cases well before interacting with the product. A single test can pass under one condition and fail under multiple other undocumented conditions. You can even have a 95 percent pass rate and still have a hundred bugs to fix: in a suite of 2,000 scripted checks, 5 percent is already 100 failures, and that’s before counting all the problems no scripted check ever looks for.

Writing out all the test scenarios up front and having a test case for every bug is not a great idea. The funnier part is when decisions are based on pass/fail percentage alone. Teams rely on these percentages, but they offer only false hope.

We would do better if we got rid of the pass/fail percentage and focused on the confidence reported by the teams. We can use low-tech dashboards with subjective assessments of coverage (blocked, sanity, deep) and confidence (low, average, good) and debrief based on the combinations. We should move away from asking how many tests pass and how many tests fail and start asking, “Is there a problem here?”

I’ve started sharing my dashboard with a list of features and the problems that exist in each one. This lets us know what’s stopping us from releasing a feature and whether we have covered the critical use cases.
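To make this concrete, here is a minimal sketch of such a low-tech dashboard; the features, ratings, and problems are purely illustrative, not from a real project:

  Feature    Coverage   Confidence   Problems
  Login      Deep       Good         None open
  Search     Sanity     Average      Slow response on large result sets
  Reports    Blocked    Low          Test environment is missing sample data

A debrief over a view like this leads naturally to the question “Is there a problem here?” rather than to an argument over percentages.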

3. Signoff by the testing team

It is well known that quality is everyone's responsibility. Testers can attest to their share of quality alone; they cannot assure quality for the entire product. Still, the project team expects the testing team to give a signoff after the testing cycle.

There are multiple ways the quality of the product can be assured, but all of them need a whole-team approach:

  • Everyone agrees on a common definition of acceptable quality, with measurable checkpoints to ensure it is met
  • Every team is accountable for its share of quality and takes measures to ensure that its deliverables meet the quality standards
  • The barriers between roles are broken and specialists work together as one team and assist each other
  • Everyone's unique strengths are used to create a winning team culture and everyone owns quality

When a representative from every team is involved in the decision-making and can view the testing results, we can be more collaborative and stop playing blame games. Test ideas are reviewed by the developers and business analysts, unit testing is done by the development teams, and user acceptance testing is done by the business analysts. This ensures that everyone is aware of the product quality and that the decision to release is a shared one. Both the development and testing teams give input to the product owner and business analysts, but the final decision rests with the product owner. The testing team does not own the signoff.

I have just scratched the surface of improving the test process by highlighting three practices I feel should be abolished. What applies to your context, and which practices have you already dropped from your routines?

User Comments

Dave Maschek

It's interesting that my manager and I were just talking today about number 1. We both worked for companies where testers were evaluated on how many bugs they filed. Luckily, the two companies in question long since realized that this was a mistake because it encouraged testers to file trivial bugs in order to boost their scores. 

February 27, 2019 - 1:34am
Ajay Balamurugadas

Yes, Dave. The goal shifts from testing to increasing your score. It's too common a pattern, but very few realize it early.

February 28, 2019 - 5:54am
Pedram Derakhshan

Point 1: True; not sure why defect count would be a measure for performance evaluation.

Point 2: Disagree. There has to be an "objective" measurement that indicates the state of the product. The failures and stats can then be analyzed to find the root cause (multiple defects and failures may have the same root cause). We would not feel comfortable, for example, with the navigation system of an aircraft being in operation just based on the team feeling confident. You need objective measurements (defect counts, standard deviation, etc.).

Point 3: True, but this also ties into the quality management in force in the company and the enforcement of gating activities throughout the SDLC. The quality assurance team is responsible for the process that governs this; the testing team performs the control function, which is a subset of the duties a mature QA team should perform.

February 27, 2019 - 10:14am
Ajay Balamurugadas

Thank you, Pedram, for the detailed comment. As seen in the comment above, the practice of evaluating testers based on bug counts is unfortunately common.

Point 2: I appreciate your disagreement. Certain contexts definitely require different practices.
>>The funnier part is when decisions are based on pass/fail percentage alone
My point is that test cases alone do not mean anything. The accompanying story matters far more than the test case count. We would do better if we paid more attention to those accompanying stories.

Point 3: Maybe, you would appreciate this blog post by Michael Bolton: https://www.developsense.com/blog/2010/05/testers-get-out-of-the-quality-assurance-business/

Thank you once again for your comment. Much appreciated.

February 28, 2019 - 6:03am
Paul French

Regarding point 2, it's clearly all about context.

For pass/fail to be really meaningful, people have to agree that the planned tests are indeed the correct and only tests to run, and even then it might be only partly useful. But there are still so many other factors.


March 17, 2019 - 2:57am
Sanat Sharma

Re #2 and #3:

I introduced a doc in my testing cycle: a "Release Confidence Level." This doc covers multiple factors before rating the different areas of the product. It totally changed the leadership team's perspective on software testing.

March 5, 2019 - 3:02pm
Ajay Balamurugadas

Nice, Sanat. If it works for you, great. One of the key problems in testing is that testers are not able to explain their testing methodology. Quite often, we get the answer in a single line: "Yes, it was tested," or, "We need more time to test." The story behind the testing is missing. If your document covers the story behind the testing, I think it would definitely give confidence to the stakeholders.

March 13, 2019 - 12:06am
Paul French

I love point 2. This is something I have been using for a few years now. The basic concept is a RAG (red/amber/green) status against scenarios/user stories, based on a rating of no issues found; some minor issues or observations that need discussing; or critical issues impacting usability. I also use black for untested.

I like the point about adding the degree of testing something has had. It's a nice added level of information.

March 17, 2019 - 3:01am
Juan Alvarez

Thanks, Ajay. These three practices have to be forbidden.

Points 1 and 2 should be replaced by the knowledge obtained from the test sessions. Testing is getting new information from the system under test!

Point 3: A tester doesn't have to sign off on anything.

March 28, 2019 - 12:02pm

