Evaluating a Tester's Effectiveness

Summary:

Test managers are responsible for monitoring the testing program and the people who carry it out. But with all that testing entails, evaluating a tester's performance is often a complicated task. In this week's column, Elfriede Dustin provides some specifics you can use to assess the effectiveness of a tester.

Testing is an involved process with many components, requiring many skills, so evaluating a tester's effectiveness is a difficult and often subjective task. Besides the typical evaluations related to attendance, attentiveness, attitude, and motivation, here are some specifics you can use to help evaluate a tester's performance.

The evaluation process starts during the recruitment efforts. Once you have hired the right tester for the job, you have a good basis for evaluation. Of course, there are situations when a testing team is "inherited," and the manager must come up to speed on the various testers' backgrounds so that the team can be tasked and evaluated according to each member's experience and expertise.

You cannot evaluate a test engineer's performance unless you define the roles, responsibilities, tasks, schedules, and specific standards the engineer must follow. First and foremost, the test manager must state clearly what is expected of the test engineer and when it is expected. If applicable, training needs should be discussed. Once the expectations are set, the test manager can start comparing the engineer's output against the preset goals, tasks, and schedules, measuring both effectiveness and execution.

Expectations and assignments differ depending on the task at hand, the type of tester (e.g., subject matter expert, technical expert, or automator), the tester's experience (beginner vs. advanced), and the phase of the lifecycle in which the evaluation takes place (requirements phase vs. system testing). For example, during the requirements phase the tester can be evaluated on defect-prevention efforts, such as the discovery of testability issues or requirement inconsistencies. Also evaluate a tester's understanding of the various testing techniques available and knowledge of which technique is most effective for the task at hand.

An evaluation of tester effectiveness can be based on a review of the test artifacts. For example, testers are assigned to write test procedures for a specific area of functionality, based on assigned use cases or requirements. During a test case walkthrough, evaluate whether the tester has applied an analytical thought process to come up with effective test scenarios. Have the test procedure creation standards been followed? Evaluate the "depth" of the test procedure (somewhat related to the depth of the use case). The outcome of this evaluation could point to various issues. You need to evaluate each issue as it arises, before you make a judgment regarding the tester's capability.
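To make "depth" concrete, here is a minimal sketch in Python with pytest. The login() function and the myapp.auth module are hypothetical stand-ins, not from the article: a shallow procedure checks only the happy path, while a deeper procedure derived from the same use case also probes boundary values and hostile input.

```python
# Minimal sketch: shallow vs. deep test procedures for a hypothetical
# login() function. Module and function names are assumptions.
import pytest

from myapp.auth import login  # hypothetical module under test


def test_login_happy_path():
    # Shallow: valid credentials succeed.
    assert login("alice", "correct-password") is True


@pytest.mark.parametrize(
    "user, password",
    [
        ("alice", ""),                          # empty password
        ("", "correct-password"),               # empty user
        ("alice", "x" * 10_000),                # oversized input / boundary
        ("alice'; DROP TABLE users;--", "pw"),  # hostile input
    ],
)
def test_login_rejects_invalid_input(user, password):
    # Deeper: negative and boundary scenarios from the same use case.
    assert login(user, password) is False
```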

It is also worthwhile to evaluate automated test procedures against the given standards. Did the engineer create maintainable, modular, reusable automated scripts, or do the scripts have to be modified with each new system build? In an automation effort, did the tester follow best practices? For example, did the test engineer make sure that the test database was baselined and could be restored so the automated scripts can be rerun? In some cases, a test manager has to follow up on testing progress daily and verify it in a hands-on way (not just verbally).
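As one way to picture the database-baselining practice, here is a minimal pytest sketch, assuming a SQLite database and hypothetical file paths and schema. Restoring a checked-in baseline before each test is what lets the automated scripts be rerun reliably:

```python
# Sketch of a baselined test database, so automated scripts can be rerun.
# File paths and the orders schema are hypothetical.
import shutil
import sqlite3

import pytest

BASELINE_DB = "baseline/app.db"  # checked-in, known-good snapshot
WORKING_DB = "work/app.db"       # copy that tests are allowed to mutate


@pytest.fixture()
def db():
    # Restore the known baseline before each test, so script order and
    # previous failures cannot affect the outcome.
    shutil.copyfile(BASELINE_DB, WORKING_DB)
    conn = sqlite3.connect(WORKING_DB)
    try:
        yield conn
    finally:
        conn.close()


def test_new_order_is_persisted(db):
    db.execute(
        "INSERT INTO orders (customer, total) VALUES (?, ?)",
        ("acme", 42.0),
    )
    db.commit()
    (count,) = db.execute("SELECT COUNT(*) FROM orders").fetchone()
    assert count == 1  # assumes the baseline ships with an empty orders table
```

The design point is isolation: because every run starts from the same snapshot, a failing script can be rerun without manual cleanup.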

In the case of a technical tester, assess technical ability and adaptability. Is the test engineer capable of picking up new tools and becoming familiar with their capabilities? If testers are not aware of all of a tool's capabilities, train them; if they are, evaluate how well they put those capabilities to use.

Another area of evaluation is how well a test engineer follows instructions and pays attention to detail. It is time-consuming when follow-through has to be monitored. If a specific task has been assigned to a test engineer to ensure a quality product, the test manager must be confident that the engineer will carry it out.

You may also want to evaluate the type of defects found by the engineer. Does the test engineer find errors that are complex and domain related, or only cosmetic? For example, cosmetic defects such as missing window text or misplaced controls are relatively easy to detect, whereas problems relating to data or to cause-effect relationships between elements are more difficult to find and require a deeper understanding of the application. Effectiveness can also be evaluated by how a defect is documented: is there enough detail in the documented defect for a developer to recreate the problem? Make sure there are standards in place that state exactly what information a documented defect must contain.
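One way to make such a standard concrete is a required-field checklist enforced when a defect is entered. The fields below are a plausible minimum for reproducibility, not a prescribed set:

```python
# Sketch of a defect-report standard: every field a developer needs to
# reproduce the problem is required. Field names are illustrative.
from dataclasses import dataclass, fields


@dataclass
class DefectReport:
    summary: str             # one-line description
    build: str               # exact build/version where the defect was seen
    environment: str         # OS, browser, database, etc.
    steps_to_reproduce: str  # numbered steps, starting from a known state
    expected_result: str
    actual_result: str
    severity: str            # e.g., cosmetic, major, critical


def validate(report: DefectReport) -> list[str]:
    """Return the names of any required fields left empty."""
    return [f.name for f in fields(report)
            if not getattr(report, f.name).strip()]
```

A tracker form that rejects a report with empty required fields enforces the standard automatically, rather than relying on after-the-fact review.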

What about the tester who is responsible for testing a specific area where most defects are discovered in production? First, evaluate the area the tester is responsible for. Is it the most complex, error-prone area? Were the defects triggered by some combination of functional steps that is rarely exercised? Or could a defect have been discovered by executing a simple test procedure that already exists? Different scenarios lead to different conclusions.

A tester's effectiveness can also make a big difference in relationships with other groups. A tester who repeatedly reports bogus errors or "user" errors (where the application is actually working as expected) will lose credibility with other team members and groups, and this can tarnish the reputation of an entire test program. Overlooked or unreported defects can have the same effect.

It is important to evaluate the tester's effectiveness on an ongoing basis, not only to determine training needs but, most importantly, to ensure the success of the testing program.

User Comments

1 comment
Thao NGuyen

Hello Elfriede,

This is an interesting point to me. I wonder if there are any KPIs (satisfying the SMART principle) to evaluate a tester's effectiveness, i.e., a set of possible KPIs that can be applied to testers. Obviously, I do not expect one set to be common to all levels of tester (junior, experienced, analyst), as they have different roles and responsibilities.

My company would like to apply KPIs, and I am in charge of defining them for my testers. Some ideas I could think of (with a rough sketch of how the first two might be computed after the list):

- Percentage of missed defects per project (defects found only after going live in production)

- Percentage of requirements covered by test cases

- Number of information requests from developers for tracked bugs
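For illustration, I imagine the first two being computed roughly like this (a sketch with made-up numbers; the function names are my own):

```python
# Rough sketch of the first two KPIs, with made-up numbers.

def escaped_defect_rate(found_in_production: int, found_total: int) -> float:
    # Percent of all defects for the project that were missed in testing
    # and only discovered after going live.
    return 100.0 * found_in_production / found_total


def requirements_coverage(covered_reqs: int, total_reqs: int) -> float:
    # Percent of requirements that have at least one mapped test case.
    return 100.0 * covered_reqs / total_reqs


print(escaped_defect_rate(4, 80))       # 5.0  -> 5% of defects escaped
print(requirements_coverage(117, 130))  # 90.0 -> 90% of requirements covered
```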

Could you advise more?

May 18, 2013 - 10:34am
