Evaluating a Tester's Effectiveness

Summary:

Test managers are responsible for monitoring the testing program and the people who carry it out. But with all that testing entails, evaluating a tester's performance is often a complicated task. In this week's column, Elfriede Dustin provides some specifics you can use to assess the effectiveness of a tester.

Testing is an involved process with many components, requiring many skills, so evaluating a tester's effectiveness is a difficult and often subjective task. Besides the typical evaluations related to attendance, attentiveness, attitude, and motivation, here are some specifics you can use to help evaluate a tester's performance.

The evaluation process starts during recruitment. Once you have hired the right tester for the job, you have a good basis for evaluation. Of course, there are situations when a testing team is "inherited," and it is necessary to come up to speed on the various testers' backgrounds so that the team can be tasked and evaluated according to each member's experience and expertise.

You cannot evaluate a test engineer's performance unless you define the roles, responsibilities, tasks, schedules, and specific standards the engineer must follow. First and foremost, the test manager must state clearly what is expected of the test engineer and when it is due. If applicable, training needs should be discussed. Once expectations are set, the test manager can compare the engineer's output against the preset goals, tasks, and schedules, measuring how effectively they are met.

Expectations and assignments differ depending on the task at hand, the type of tester (i.e., subject matter expert, technical expert, or automator), the tester's experience (beginner vs. advanced), and the phase of the lifecycle in which the evaluation takes place (requirements phase vs. system testing). For example, during the requirements phase the tester can be evaluated on defect-prevention efforts, such as discovering testability issues or requirement inconsistencies. Evaluate a tester's understanding of the various testing techniques available and knowledge of which technique is most effective for the task at hand.
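For instance, a tester who knows boundary-value analysis will derive inputs at and around the limits of a valid range rather than testing only a comfortable mid-range value. Here is a minimal Python sketch, assuming a hypothetical age field that accepts values 18 through 65; the function and limits are illustrative, not from the article:

# Hypothetical example: deriving boundary-value test inputs for an
# input field that accepts ages 18 through 65 (inclusive).

def boundary_values(low, high):
    """Return the classic boundary-value analysis inputs for a range:
    just below, at, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid_age(age):
    """Illustrative stand-in for the system under test."""
    return 18 <= age <= 65

# A tester applying the technique checks every boundary,
# not just a mid-range value such as 30.
for age in boundary_values(18, 65):
    print(age, "->", "accepted" if is_valid_age(age) else "rejected")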

An evaluation of tester effectiveness can be based on a review of the test artifacts. For example, testers are assigned to write test procedures for a specific area of functionality, based on assigned use cases or requirements. During a test case walkthrough, evaluate whether the tester has applied an analytical thought process to come up with effective test scenarios. Have the test procedure creation standards been followed? Evaluate the "depth" of the test procedure (somewhat related to the depth of the use case). The outcome of this evaluation could point to various issues, such as an overlooked standard, an ambiguous requirement, or a genuine skill gap, so evaluate each issue as it arises before making a judgment about the tester's capability.
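To illustrate what "depth" can look like, here is a minimal sketch of a test procedure expressed as parameterized pytest cases for a hypothetical login use case; the scenarios, names, and behavior are assumptions for illustration. A shallow procedure would stop at the first, happy-path case; a deeper one also exercises negative and boundary scenarios:

# Hypothetical test procedure for a "user login" use case, written
# with pytest. The depth comes from covering nominal, negative, and
# boundary scenarios, not just the happy path.
import pytest

def login(username, password):
    """Illustrative stand-in for the system under test."""
    if not username or not password:
        raise ValueError("missing credentials")
    return username == "alice" and password == "s3cret"

@pytest.mark.parametrize("username,password,expected", [
    ("alice", "s3cret", True),     # nominal: valid credentials
    ("alice", "wrong", False),     # negative: bad password
    ("mallory", "s3cret", False),  # negative: unknown user
])
def test_login_outcomes(username, password, expected):
    assert login(username, password) is expected

@pytest.mark.parametrize("username,password", [
    ("", "s3cret"),   # boundary: empty username
    ("alice", ""),    # boundary: empty password
])
def test_login_rejects_missing_credentials(username, password):
    with pytest.raises(ValueError):
        login(username, password)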

It is also worthwhile to evaluate automated test procedures based on given standards. Did the engineer create maintainable, modular, reusable automated scripts, or do the scripts have to be modified with each new system build? In an automation effort, did the tester follow best practices? For example, did the test engineer make sure that the test database was baselined and could be restored for the automated scripts to be rerun? In some cases, a test manager has to follow up on the testing progress daily and verify progress in a hands-on way (not just verbally).
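To make the database-baselining point concrete, here is a minimal sketch of one way to restore a baselined test database before each automated test so scripts can be rerun. It uses pytest and SQLite purely for illustration; the file name and schema are assumptions, not prescriptions from the article:

# Hypothetical sketch: restoring a baselined test database before each
# automated test so scripts are rerunnable. Uses SQLite for simplicity.
# Assumes a checked-in baseline.db containing an "orders" table.
import shutil
import sqlite3
import pytest

BASELINE_DB = "baseline.db"   # known-good starting state, never mutated

@pytest.fixture
def test_db(tmp_path):
    """Copy the baseline into a scratch location so every test starts
    from the same known state, no matter what earlier runs did."""
    db_path = tmp_path / "working.db"
    shutil.copy(BASELINE_DB, db_path)
    conn = sqlite3.connect(db_path)
    yield conn
    conn.close()

def test_insert_order(test_db):
    # The test may freely mutate data; the baseline stays untouched,
    # so the automated script can be rerun against a fresh copy.
    test_db.execute("INSERT INTO orders (id, total) VALUES (1, 9.99)")
    (count,) = test_db.execute("SELECT COUNT(*) FROM orders").fetchone()
    assert count == 1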

In the case of a technical tester, assess technical ability and adaptability. Is the test engineer capable of picking up new tools and becoming familiar with their capabilities? If your testers are not aware of all of a tool's capabilities, train them; if they are, evaluate their ability to apply those capabilities.

Another area of evaluation is how well a test engineer follows instructions and pays attention to detail. It is time-consuming when follow-through has to be monitored. If a specific task has been assigned to a test engineer to ensure a quality product, the test manager must be confident that the test engineer will follow the instructions and complete the task as assigned, without the need for constant supervision.

User Comments

Thao NGuyen

Hello Elfriede,

This is an interesting point to me. I wonder whether there are any KPIs (satisfying the SMART principle) to evaluate a tester's effectiveness, that is, a set of possible KPIs that can be applied to testers. Obviously, I do not expect one set to be common to all levels of tester: junior, experienced, and analyst testers have different roles and responsibilities.

My company would like to apply KPIs, and I am in charge of defining them for my testers. Some ideas I could think of:

- Percent of missed defects per project after live production

- Percentage of test cases aligned to requirements (test case coverage)

- Number of information requests from developers for tracked bugs

Could you advise more?

May 18, 2013 - 10:34am
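As a hedged illustration of how KPIs like the ones the commenter lists might be computed, here is a minimal sketch using invented counts; the metric definitions are common ones, not prescriptions from the article:

# Hypothetical KPI calculations for a tester or test team.
# All counts below are invented for illustration.

defects_found_before_release = 47
defects_found_in_production = 3   # "missed" defects after go-live

requirements_total = 120
requirements_with_test_cases = 114

# Defect escape rate: share of all defects that slipped past testing.
escape_rate = defects_found_in_production / (
    defects_found_before_release + defects_found_in_production
)

# Requirements coverage: share of requirements traced to test cases.
requirements_coverage = requirements_with_test_cases / requirements_total

print(f"Defect escape rate:    {escape_rate:.1%}")            # 6.0%
print(f"Requirements coverage: {requirements_coverage:.1%}")  # 95.0%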

About the author

Elfriede Dustin

Elfriede Dustin works as a QA/Test Manager at BNA Software (www.bnasoftware.com). Elfriede is co-author of the book Automated Software Testing and also co-authored the recently published book Quality Web Systems. Her Automated Software Testing white papers are posted on www.stickyminds.com. You can contact her at edustin@bna.com.
