The Wrong Ratio: How Many Testers Do You Need?

[article]
Summary:
Linda Hayes explains that while there is no meaningful relationship between how many developers you have and how many testers you need, there is an unavoidable correlation between how well your developers test and how much is left to testers. The most reliable way to measure how many testers you need is to treat each project as a unique case.

I get the question “How many testers do you need?” a lot. Typically, clients will announce that they have one tester for every two or five or ten developers and then ask how this compares to the industry standard. Not only am I unaware of any universally accepted industry standard for this ratio, but I would be skeptical of one if it existed: I believe it is the wrong ratio anyway.

The fact is the number of testers you need has absolutely nothing to do with the number of developers you have.

Granted, in days gone by, when software was handcrafted line by line, there was some correlation between the lines of code that were produced and the functionality that had to be tested. But even then the relationship was tenuous, depending on whether you were developing something new or modifying an existing application. A relatively small modification, such as changing the size of a field, could potentially impact every aspect of a massive application. It is not the number of code lines that creates risk; it is what those lines do and what they can affect.

Today, the process of application development is undergoing a radical transformation. Massive chunks of common functionality are readily available for everything from product catalogs to shopping carts to scheduling calendars to report writers. Complex business rules can now be defined and implemented without writing any code at all. High-level design tools can produce multi-faceted interfaces that are dynamically configurable based on user information and responses. Service layers extend functionality beyond the borders of applications and even enterprises.

And, of course, as technology proliferates and competition flourishes, any single application often has to be tested against multiple platforms, browsers, mobile devices, and so on. The change needed to add a new environment happens once but the testing goes on forever. Add in multiple supported versions of the application and patch combinations and you have a staggering matrix.
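To make that matrix concrete, here is a minimal sketch in Python. The browser, platform, version, and patch counts are hypothetical numbers chosen for illustration, not figures from this article:

```python
from itertools import product

# Hypothetical support matrix; the counts are illustrative only.
browsers     = ["Chrome", "Firefox", "Safari", "Edge"]
platforms    = ["Windows", "macOS", "Linux"]
app_versions = ["v1.8", "v1.9", "v2.0"]
patch_levels = ["base", "patch-1", "patch-2"]

# Every combination is, in principle, a distinct test environment.
configurations = list(product(browsers, platforms, app_versions, patch_levels))
print(len(configurations))  # 4 * 3 * 3 * 3 = 108 environments for one test pass
```

Even with these modest numbers, a single regression pass fans out into 108 environments, and adding one more supported browser adds 27 more. And that cost recurs on every release.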

In short, there is no meaningful relationship between how many developers you have and how many testers you need, although there is an unavoidable correlation between how well your developers test and how much is left to testers. Engineering organizations that behave as though their only responsibility is to develop, and that all testing belongs to testers, will push defects downstream, forcing testers to perform low-level unit and integration tests. This crowds out broader testing and results in a lower quality outcome that, ironically, will be blamed on the testers.

So where does that leave us?

The most reliable way to measure how many testers you need is to treat each project as a unique case. Testing is basically a risk management activity, and every project presents a different risk profile. So analyze the project according to the changes being introduced and the risks they incur. Changes can come from many sources, and their impact can be felt in unanticipated areas; of course, there is also a cost to remove or reduce a risk. Depending on the degree of risk, test resources may or may not be justified. There will always be trade-offs between removing a risk and accepting it.
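As a rough illustration of that analysis, here is a minimal sketch that ranks risks by likelihood times impact and funds only the tests the budget can cover. The risk items, the 1-to-5 scales, and the budget threshold are all assumptions made up for this example, not a method prescribed by the article:

```python
# Minimal risk-prioritization sketch; items and scales are hypothetical.
risks = [
    # (description, likelihood 1-5, impact 1-5, estimated test cost in person-days)
    ("Field-size change ripples through reports", 4, 5, 6),
    ("New shopping-cart component misconfigured", 3, 4, 3),
    ("Cosmetic layout drift on one browser",      2, 1, 2),
]

def exposure(likelihood, impact):
    """Simple exposure score: probability proxy times consequence proxy."""
    return likelihood * impact

# Rank risks by exposure, highest first.
ranked = sorted(risks, key=lambda r: exposure(r[1], r[2]), reverse=True)

budget_days = 10  # hypothetical testing budget for this project
plan, spent = [], 0
for desc, likelihood, impact, cost in ranked:
    if spent + cost <= budget_days:
        plan.append(desc)
        spent += cost
    else:
        print(f"Accepting risk (not tested): {desc}")

print(f"Planned tests ({spent} days): {plan}")
```

The point of such an exercise is that the tester head count falls out of the agreed plan and budget, not out of a developer ratio.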

And don’t forget that risks are best removed early, so your strategy should clearly articulate the expectations for developer testing. By calling this out as part of the strategy, you clarify the types of testing you are committing to perform and the types you are not.

This analysis allows you to define a meaningful test plan that addresses the prioritized risks and is supported by the budget to execute it. Your plan may be the subject of negotiation with engineering and the business, either for expanding or reducing scope and cost, but once the terms are agreed you know how many testers you need and you have the commitment to deploy them.

But this approach is not just for planned projects. Most companies have a dull roar of constant change that is euphemistically called “maintenance.” It’s what happens to software after it goes into production and response times collapse. A critical error in operations will not tolerate delay, so traditional process safeguards are bypassed and a barrage of targeted changes flows into the system.

For these constant, under-the-radar changes, you still need a strategy for protecting operations. Because of the tight turnaround, testing is often minimal, so the developer may be the only line of defense before release. Still, it makes sense to have a baseline operations test that runs continuously to spot issues as early as possible. The difference is that the test plan and budget are not tied to any project, just as the related changes are not planned within a project.
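One way to realize such a baseline test is a lightweight check that runs on a schedule against operational health endpoints. This is a sketch under assumptions: the URLs, the five-second timeout, and the five-minute interval are hypothetical placeholders, not details from the article:

```python
import time
import urllib.request

# Hypothetical health-check endpoints; substitute your own operational checks.
CHECKS = [
    ("catalog service",  "https://example.com/health/catalog"),
    ("checkout service", "https://example.com/health/checkout"),
]

def baseline_pass():
    """Run one pass of the baseline: every endpoint must answer HTTP 200 promptly."""
    for name, url in CHECKS:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                ok = resp.status == 200
        except OSError:  # covers connection failures, timeouts, and HTTP errors
            ok = False
        elapsed = time.monotonic() - start
        print(f"{name}: {'OK' if ok else 'FAILING'} ({elapsed:.2f}s)")

while True:           # run continuously, e.g., under a supervisor or scheduler
    baseline_pass()
    time.sleep(300)   # every five minutes; tune to your tolerance for detection delay
```

The interval is the key design choice: it bounds how long a bad change can sit in production before the baseline notices it.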

I will grant you that at annual budget time you may not have the level of visibility needed to write a detailed plan for every project, but you can and should develop guidelines around the type, size, and complexity of a project. New development is different from changes to existing or licensed systems, enterprise applications are different from departmental ones, and highly integrated systems are different from isolated ones. But just because the developer-to-tester ratio is easier to measure doesn’t make it right.

Resist this trap at all costs.

 

User Comments

3 comments
Nan Krull

Linda, I entirely agree. Development organizations particularly like to address planning issues with formulae, which in the case of testing is clearly insufficient to ensure success.

April 17, 2013 - 11:21am
Xander Bartels

I think the question should be "How much testing do you need?".

April 22, 2013 - 3:11am
Paula Thomsen

I agree with Xander's comments. If testing is a risk mitigation exercise, what level of risk is acceptable for your delivery? Our role is not only to prevent defects from occurring where we can but also to provide a degree of comfort that the probability and impact of a "risk aka defect" can be managed within specific business tolerance. If you can provide that without even executing a single test, then fantastic. As the boundaries between roles become more blurred and technology advances, it will become harder to use a formula to determine what resources you need. The more creative a "test professional" can be in understanding the level of risk tolerance and in decreasing testing elapsed time and cost whilst delivering the highest level of confidence, the more in demand they will be.

February 17, 2015 - 12:35pm
