This article takes a seemingly simplistic view of managing resources and projects.
Having been a manager for almost 20 years, covering a wide range of positions in technology, I recently became a QA Manager for a small internet-based company.
As a QA manager you are continually barraged with daily questions and hurdles to overcome. The introduction of internet time certainly adds a new dimension to the chaos, and procedures and policies require infinite tweaking.
My current dilemma revolves around headcount analysis: exactly how do you staff for an indeterminate number of projects when you don't know when they are coming or how complex they will be?
I know what you're thinking: "Surely there are ratios that will solve his problem." But before you commit to that thought, I'd like you to see what I see:
- A limited number of QA resources.
- A very short-range forecast, possibly 1 to 2 months out.
- An extremely aggressive development schedule.
- Senior management's commitment to the bottom line.
Right about now your head is nodding in agreement, as if you know whence I have come. The ratios will not help me.
What is the magic number? How do you accurately predict the correct number of testers required to successfully test your products? Well, I like to gauge the effort by answering the following questions:
- What is the complexity of the product/release?
- Is the Development effort small, medium or large?
- What is your staff's ability? Are they entry level or experienced?
Complexity of the Product/Release
A product's complexity can be inferred from the technology being used, the architecture, and the scale.
Size of the Development Effort
The amount of effort involved in coding an application can be used to classify a project as small, medium, or large. The testing effort should be directly proportional to it; in other words, a large development effort will inevitably mean a large QA effort.
Staff Ability
The technical ability and experience of your staff will affect the time required to test. Experienced testers know where and how to test; most junior testers will need some time to get up to speed.
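The three questions above can be folded into a back-of-the-envelope calculation. The sketch below is purely illustrative: the `estimate_testers` function, the tester-per-developer ratios, and the junior-staff multiplier are my own assumptions for demonstration, not figures from the article.

```python
# Illustrative staffing heuristic. The ratios and multiplier below are
# hypothetical placeholders, not recommended industry values.

def estimate_testers(complexity, dev_count, staff_experience):
    """Rough tester-count estimate from the three gauging questions.

    complexity:       1 (simple) .. 3 (complex)
    dev_count:        number of developers on the project
    staff_experience: "entry" or "experienced"
    """
    # Testers per developer, scaled by product complexity (assumed values).
    base_ratio = {1: 0.25, 2: 0.35, 3: 0.5}[complexity]
    testers = dev_count * base_ratio
    if staff_experience == "entry":
        # Junior testers need ramp-up time, so pad the estimate (assumed factor).
        testers *= 1.5
    # Always staff at least one tester.
    return max(1, round(testers))

print(estimate_testers(3, 10, "entry"))        # large, complex project, junior staff
print(estimate_testers(1, 4, "experienced"))   # small project, seasoned staff
```

However the weights are chosen, the point of such a sketch is to make the inputs explicit, so the estimate can be argued about and revised rather than guessed.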
After spending some time researching my initial question, I decided that the answer lies in risk assessment. Not your conventional "risk assessment," but assessing the risk of not providing adequate testing. What will be the ramifications of not testing the product adequately? How will this impact our future business? Can we afford not to test?
In summary, I guess the answer to this question is really "as many as you can afford".