The Wrong Ratio: How Many Testers Do You Need?

[article]
Summary:
Linda Hayes explains that while there is no meaningful relationship between how many developers you have and how many testers you need, there is an unavoidable correlation between how well your developers test and how much is left to testers. The most reliable way to measure how many testers you need is to treat each project as a unique case.

I get the question “How many testers do you need?” a lot. Typically, clients will announce that they have one tester for every two or five or ten developers and then ask how this compares to the industry standard. Not only am I unaware of any universally accepted industry standard for this ratio, but I would be suspicious of one if it existed: I believe it is the wrong ratio anyway.

The fact is the number of testers you need has absolutely nothing to do with the number of developers you have.

Granted, in days gone by when software was handcrafted line by line, there was some correlation between the lines of code produced and the functionality that had to be tested. But even then the relationship was tenuous, depending on whether you were developing something new or modifying an existing application. A relatively small modification, say, changing the size of a field, could potentially impact every aspect of a massive application. It’s not the number of lines of code that creates risk; it is what those lines do and the potential impact when they change.

Today, the process of application development is undergoing a radical transformation. Massive chunks of common functionality are readily available for everything from product catalogs to shopping carts to scheduling calendars to report writers. Complex business rules can now be defined and implemented without writing any code at all. High-level design tools can produce multi-faceted interfaces that are dynamically configurable based on user information and responses. Service layers extend functionality beyond the borders of applications and even enterprises.

And, of course, as technology proliferates and competition flourishes, any single application often has to be tested against multiple platforms, browsers, mobile devices, and so on. The change needed to support a new environment happens once, but the testing goes on forever. Add in multiple supported versions of the application and patch combinations, and you have a staggering matrix.
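To see how quickly that matrix grows, here is a back-of-the-envelope sketch in Python. The environment lists are hypothetical placeholders, not drawn from any particular project; substitute your own supported platforms, browsers, versions, and patch levels.

from itertools import product

# Hypothetical support matrix; replace these lists with your own.
platforms = ["Windows", "macOS", "Linux"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
app_versions = ["5.1", "5.2", "6.0"]
patch_levels = ["base", "patch-1", "patch-2"]

# Every combination is a configuration that regression testing must cover.
combinations = list(product(platforms, browsers, app_versions, patch_levels))
print(f"{len(combinations)} configurations")  # 3 * 4 * 3 * 3 = 108

Even these modest, made-up lists yield 108 configurations to cover in every regression cycle, which is why the environment matrix, not the developer headcount, drives test effort.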

In short, there is no meaningful relationship between how many developers you have and how many testers you need, although there is an unavoidable correlation between how well your developers test and how much is left for testers to do. Engineering organizations that behave as though their only job is to develop, maintaining that all testing should be left to testers, will push defects downstream, forcing testers to perform low-level unit and integration tests. This sacrifices overall test coverage and results in a lower-quality outcome that, ironically, will be blamed on the testers.

So where does that leave us?

The most reliable way to measure how many testers you need is to treat each project as a unique case. Testing is basically a risk management activity, and every project presents a different risk profile. So, analyze the project according to the changes being introduced and the risks they incur. Changes can come from many sources, and their impact can be felt in unanticipated areas; of course, there is also a cost to removing or reducing a risk. Depending on the degree of risk, test resources may or may not be justified. There will always be trade-offs between removing a risk and accepting it.
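One way to make that analysis concrete is to score each change by the likelihood of a defect and the impact if one escapes, and to spend test resources only where the expected loss justifies the cost. The sketch below is a minimal illustration of that trade-off, not a prescribed model; the change names, scores, and cost threshold are all hypothetical.

# Hypothetical risk model: likelihood of a defect (0 to 1) times
# impact if one escapes (1 to 10) gives an expected loss per change.
changes = [
    {"name": "field size change", "likelihood": 0.3, "impact": 9},
    {"name": "new report layout", "likelihood": 0.5, "impact": 3},
    {"name": "payment rule update", "likelihood": 0.4, "impact": 10},
]

COST_OF_TESTING = 2.0  # assumed relative cost of testing one change

for change in changes:
    expected_loss = change["likelihood"] * change["impact"]
    decision = "test" if expected_loss > COST_OF_TESTING else "accept risk"
    print(f'{change["name"]}: expected loss {expected_loss:.1f} -> {decision}')

The numbers are placeholders; the point is that the decision flows from the project's risk profile, not from a tester-to-developer ratio.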

And don’t forget that risks are best removed early, so your strategy should clearly articulate the expectations for developer testing. By calling this out as part of the strategy, you can clarify the types of testing you are committing to perform and those you are not.

User Comments

2 comments
Nan Krull

Linda, I entirely agree. Development organizations particularly like to address planning issues with formulae, an approach that in the case of testing is clearly insufficient to ensure success.

April 17, 2013 - 11:21am
Xander Bartels

I think the question should be "How much testing do you need?".

April 22, 2013 - 3:11am

About the author

Linda Hayes

Linda G. Hayes is a founder of Worksoft, Inc., developer of next-generation test automation solutions. Linda is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine's People to Watch and one of the Top 40 Under 40 by the Dallas Business Journal. She is a regular columnist and contributor to StickyMinds.com and Better Software magazine, as well as a columnist for Computerworld and Datamation, the author of the Automated Testing Handbook, and co-editor, with Alka Jarvis, of Dare To Be Excellent, a book on best practices in the software industry. You can contact Linda at lhayes@worksoft.com.
