Testing Your Worth

In this uncertain economy, testers are losing their jobs, and entire test groups are being laid off. I don't think testers as a whole are out of the job market, but I do see a new trend in the testing world toward testers who can provide maximum value, whether by reading and writing code, writing automated tests, serving as a subject-matter expert, or bringing industry expertise.

Why do I think this is a trend? Many test managers I know are working to justify their test group's (and their own) existence. They need to explain the benefits of their current testing strategy, whether to senior software managers or to finance people. Those managers may or may not know much about software, but they know when the services they're paying for (testing) don't seem to provide enough bang for the corporate buck.

Amanda, a test manager, wanted to replace a manual black box tester, Ginger, who'd moved to another part of the country. Amanda's VP, Jim, asked what kind of a person Amanda was looking for. When Amanda said she wanted to replace Ginger with another manual black box tester, Jim asked what the cost-benefit analysis was for that kind of a tester versus other kinds of testers. Amanda was initially at a loss for the analysis. She then decided she could look at what people did, and compare their activities to the numbers and kinds of problems the testers found. (Amanda didn't need to compare salaries; salaries were close enough that they were not a consideration.)

Amanda came up with this table for the most recent release of the software, organized by who found the highest total percentage of defects:

| Name | Type of tester | High-severity defects found | % of high-severity defects | Total defects found | % of total defects |
|------|----------------|-----------------------------|----------------------------|---------------------|--------------------|
| Bertha | Test developer | 3 | 3% | 175 | 22% |
| David | Test developer | 21 | 24% | 153 | 20% |
| Harold | Manual black box tester | 9 | 10% | 131 | 17% |
| Ginger | Manual black box tester | 5 | 6% | 114 | 15% |
| Edward | Manual black box tester | 9 | 10% | 92 | 12% |
| Cameron | Manual black box tester | 7 | 8% | 75 | 10% |
| Franny | Exploratory tester | 33 | 38% | 41 | 5% |
| Totals found | | 87 | 100% | 781 | 100% |

Table 1: Total Defects Found by Tester

There's not enough information here to make an informed decision about the value of each tester, but if you look at raw defect counts, the test developers look like they've found more overall defects.
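
If you want to pull a breakdown like this together for your own group, it doesn't take much tooling. Here's a minimal sketch in Python, assuming your defect tracker can export one row per defect with a reporter name and a severity field; the file name, column names, and severity label are all hypothetical, so adjust them to match your tracker:

```python
import csv
from collections import Counter

# Hypothetical export from a defect tracker: one row per defect,
# with a "found_by" column and a "severity" column. Adjust the
# file name, column names, and severity label to match your tool.
total = Counter()
high = Counter()

with open("defects.csv", newline="") as f:
    for row in csv.DictReader(f):
        tester = row["found_by"]
        total[tester] += 1
        if row["severity"] == "high":
            high[tester] += 1

all_defects = sum(total.values()) or 1
all_high = sum(high.values()) or 1  # avoid divide-by-zero if no high-severity defects

# Sorted by total defects found, as in Table 1.
for tester, count in total.most_common():
    print(f"{tester:10} high: {high[tester]:3} ({high[tester] / all_high:5.0%})"
          f"  total: {count:3} ({count / all_defects:5.0%})")
```

Run against the defects from the last release, this prints the same per-tester counts and percentages as Table 1.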

Unfortunately, overall defect-find rate per tester is a particularly bad metric. We can all inflate defect counts with not-very-interesting defects. And what about Franny, our exploratory tester who found only 5 percent of the overall defects but a whopping 38 percent of the high-severity defects? We not only want to keep Franny, we might consider getting more Frannys in the group. So let's look at the table again, this time sorted by the number of high-severity defects each tester found.
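
Re-sorting the data from the earlier sketch is a one-line change (reusing the hypothetical total and high counters), and it produces the ordering shown in Table 2 below:

```python
# Sort by high-severity defects found, as in Table 2.
for tester in sorted(total, key=lambda t: high[t], reverse=True):
    print(f"{tester:10} high: {high[tester]:3} ({high[tester] / all_high:5.0%})"
          f"  total: {total[tester]:3} ({total[tester] / all_defects:5.0%})")
```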

| Name | Type of tester | High-severity defects found | % of high-severity defects | Total defects found | % of total defects |
|------|----------------|-----------------------------|----------------------------|---------------------|--------------------|
| Franny | Exploratory tester | 33 | 38% | 41 | 5% |
| David | Test developer | 21 | 24% | 153 | 20% |
| Harold | Manual black box tester | 9 | 10% | 131 | 17% |
| Edward | Manual black box tester | 9 | 10% | 92 | 12% |
| Cameron | Manual black box tester | 7 | 8% | 75 | 10% |
| Ginger | Manual black box tester | 5 | 6% | 114 | 15% |
| Bertha | Test developer | 3 | 3% | 175 | 22% |
| Totals found | | 87 | 100% | 781 | 100% |

Table 2: Total High-Severity Defects Found by Tester

Amanda used this data to ask: "Why are we finding 87 high-severity defects (more than 10 percent of our total defects) in our system test activities? Couldn't we use the testers to find defects earlier or differently?" Franny's defects, in particular, were all found during system test. That means more than one-third of all the high-severity defects were found in system test, via exploratory testing, not testing that could be planned out and described for other people to do.

If I were a manual black box tester, I'd measure my work like this to see where I'm adding value to the project: Could I add more value by changing what I'm doing, and when? If I looked at the product architecture, could I test differently? What if I tested the requirements or reviewed code? Would that change anything? If I had inside knowledge of the field, could I become a better tester?

Amanda decided she needed to find the high-severity defects earlier, so she started Franny and two test developers, Bertha and David, doing exploratory testing with the developers. The developers wrote some code, did some peer reviews (which included Franny, Bertha, and David), unit tested the code, and checked the code in; then the testers explored the code or wrote automated tests, all before system test officially started. At a test group meeting, Franny, Bertha, and David were very enthusiastic about their ability to start testing much earlier in the project and to find problems that the developers would have had trouble fixing later.
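
To make the "wrote automated tests" part concrete, here is the kind of small check a test developer like Bertha or David might add as soon as code is checked in, long before there's a GUI to drive. This is a minimal pytest-style sketch; the pricing module and its apply_discount function are hypothetical stand-ins for whatever the developers just checked in:

```python
# test_pricing.py -- a small automated check added right after check-in.
# The pricing module and its behavior are hypothetical examples.
import pytest
from pricing import apply_discount

def test_discount_reduces_price():
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_discount_never_goes_negative():
    # Exploring an edge the developer may not have considered.
    assert apply_discount(price=100.0, percent=150) >= 0.0

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=-5)
```

The point isn't the specific assertions; it's that checks like these run on every build, so a high-severity problem surfaces weeks before system test.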

Harold, Edward, and Cameron, all manual black box testers, also wanted to start testing earlier, but they didn't have the skills to participate in reviews or to start developing tests before the product's GUI existed. Harold, Edward, and Cameron are smart people, but they don't know enough about software architecture, and they can't read code.

Amanda realized she'd rather have another tester like Franny, Bertha, or David, someone who could understand how the product was put together from the inside out. She decided to look for a person who understood code: either an exploratory tester or a test automator. For her organization, those were the high-leverage skills that would pay off most.

Now let's look at another kind of high-leverage value. When I showed a draft of this column to a different test manager, he said, "For me, manual testers are more valuable because of my applications' complexities. I could lose the white box testers and not miss a beat." This test manager's products and problems are in a different domain from Amanda's. He wants black box testers who understand how each application is architected, how the applications relate to one another, and how their users will use the system. This kind of product or subject-matter domain expertise is a very valuable skill.

Another kind of expertise is industry expertise. Do you understand how a particular industry works? Does that knowledge change what or how you choose to test?

If you're a manual black box tester, and your organization is evaluating tester worth, consider how you can increase your value:

  1. What do you need to do to find defects earlier?
  2. What do you need to do to discover high-severity defects and discover them earlier?
  3. What can you do to reduce the amount of test time needed?
  4. Is there another way you could test in order to increase your value to the organization?

People who can read code, write automated tests, or otherwise provide value closer to that of a developer will continue to have their pick of jobs. Black box testers who are (and remain) subject-matter experts in their fields can also provide significant value to their organizations. And testers who use knowledge of their target users' industry to shape their testing increase their value as well. What kind of tester will you choose to be?

Acknowledgements:
I thank these people for early review of this column: Laura Anneker, Lisa Anderson, James Bach, and George Hamblen.
