Affordable Peer Reviews

Summary:

Many people know that peer reviews can help them to produce better-quality products, but most organizations do not use this potent tool. Why? Because, although they would like to experience the quality benefits, they can't justify the costs they would incur.

 

Peer review doesn't have to be an expensive proposition. In fact, the right methods can return more value than they cost. How can we do this, though? By ensuring that peer reviews are focused on finding the kinds of defects that are difficult and much more expensive to find using other methods.

Defects Cost Time and Money
Before we can discuss using peer reviews to save time and money, we must first understand what our defects cost us. We find and fix many defects with relative ease. There are others that cost us dozens of person-hours and a few that cost hundreds of hours to diagnose and fix. These expensive defects present us with golden opportunities to reduce our costs using strategically focused peer reviews.

Most of us have a defect-tracking system with information on hundreds (or even thousands) of defects. This real-life data is the map that can point us to these high-cost (high-opportunity) defects. If your system includes a record of the number of hours each defect cost, then picking out the expensive defects is easy.

If that information is not in your system, then you will have to spend some time reviewing the various defect reports and remembering how much work each one required. Usually, the person who diagnosed and fixed the defect is the best one to make this estimate. Though our people's memories will quickly become fuzzy, the most painful of the defects from the past few months will still stand out in their minds. Searching historical records for the expensive types of defects is only a start. We should also begin watching for those "killer" defects from now on.

When counting the cost of a defect, we must be sure to count all of the time that it cost us. This includes the time it took:

·         For the person who found the defect to recognize it, confirm that it was indeed a problem, and collect the information they needed to report it.

·         To report the defect, then track, manage and close out the defect report.

·         For the parties (e.g. testers and engineers) to negotiate the defect's priority and agree that it should be fixed.

·         For an engineer to investigate, recreate, and diagnose the problem.

·         For an engineer to rework designs, code, databases, and other artifacts to correct the problem.

·         To produce a new version of the system and prepare it for testing and installation.

·         To test the fix to ensure that it corrected the problem, and to regression test the system to ensure the fix did not introduce new problems.

·         To deploy the fix to whoever was affected by the problem.

·         For the person who originally found the problem to verify that the fix did indeed solve their problem.

When we pay attention to all of the costs associated with our defects, we can see that a few of them are tremendously expensive. Each time a defect costs us more than a few hours, we should add it to our list of candidates for peer reviews. In a short time, we will have quite a list of these high-opportunity defects for our peer reviews.
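
To see how this bookkeeping might look in practice, here is a minimal sketch in Python. The field names and the three-hour threshold are assumptions made up for the example; the point is simply to total every component of a defect's cost and flag the expensive ones as candidates.

# Minimal sketch: total the cost components of each defect and flag the
# expensive ones as candidates for a peer-review checklist.
# The field names and the 3-hour threshold are illustrative assumptions.
defects = [
    {"id": "D-101", "category": "boundary condition",
     "hours": {"find": 0.5, "report": 0.5, "diagnose": 6.0, "fix": 3.0,
               "rebuild": 1.0, "retest": 4.0, "deploy": 2.0, "verify": 0.5}},
    {"id": "D-102", "category": "typo in a message string",
     "hours": {"find": 0.1, "report": 0.2, "diagnose": 0.2, "fix": 0.2,
               "rebuild": 0.5, "retest": 0.5, "deploy": 0.0, "verify": 0.1}},
]

CANDIDATE_THRESHOLD_HOURS = 3.0  # "more than a few hours"

def total_cost(defect):
    """Sum every component of the defect's cost, per the list above."""
    return sum(defect["hours"].values())

candidates = [d for d in defects if total_cost(d) > CANDIDATE_THRESHOLD_HOURS]
for d in candidates:
    print(f"{d['id']} ({d['category']}): {total_cost(d):.1f} hours")
# Prints: D-101 (boundary condition): 17.5 hours
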
Focusing on High-Value Defects
After we have identified these high-opportunity defects, how can we use this information to make our peer reviews high-value? The key tool for affordable peer reviews is a checklist. Too many peer reviews waste time and effort because the reviewers try to find everything that might be wrong, and end up focusing on the easy-to-spot trivial problems, allowing the more valuable ones to escape (only to be found later, when they will cost more to diagnose).

When a peer review is checklist-driven, the reviewer tends to focus his or her attention only on the items that are on the checklist. This can reduce the amount of time the reviewer spends, while making it more likely that the important defects will be uncovered. Naturally, the reviewers will notice other more trivial problems along the way (and of course, should point them out), but the checklist keeps the bulk of the reviewer's attention where it belongs.
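
As a sketch of how lightweight such a checklist can be, the example below keeps the checklist as plain data and records, for each review, what was examined and what was found. The categories are invented purely for illustration; yours should come from your own expensive defects.

# A checklist is a short, focused list of the defect categories that have
# proven expensive. The categories below are invented for illustration.
REVIEW_CHECKLIST = [
    "Off-by-one and boundary conditions on loops and array indexes",
    "Error paths that leak resources (files, connections, locks)",
    "Unvalidated input reaching the database layer",
    "Shared state accessed concurrently without locking",
]

def run_review(artifact_name, findings_by_item):
    """Record, per checklist item, what (if anything) was found in the artifact."""
    return {
        "artifact": artifact_name,
        "results": [
            {"item": item, "findings": findings_by_item.get(item, [])}
            for item in REVIEW_CHECKLIST
        ],
    }

# Hypothetical usage: one finding against the first checklist item.
report = run_review(
    "order_service module",
    {REVIEW_CHECKLIST[0]: ["loop over line items skips the final element"]},
)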

Return on Investment (ROI) For Peer Reviews
Investing less time while providing more value makes these peer reviews more worthwhile. How can we be sure that this enhanced value is worth the cost, though? As with anything we do, the investment we make in peer reviews must produce a return that is at least as large as that investment, or it is not worth making. Computing ROI requires that we also understand the full cost of these reviews. There are a few elements to those costs (rolled together in the sketch after this list):

·       Preparing a checklist requires analysis, and maintaining it over time means an on-going investment. Spread over the dozens or hundreds of reviews we do each year, these costs will be minimal, but an honest accounting still has to include them.

·       The time the reviewer(s) spend actually reviewing the designs, code, or other artifacts is the most significant cost. If you hold review meetings, be sure that you count all of the preparation for the meetings as well as the meetings themselves, and multiply by the number of people involved in them.

·       Finally, each defect found in the reviews will have to be recorded, tracked, fixed, etc. just as we discussed above.
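
To make that accounting concrete, here is a small sketch that rolls those three elements into a cost per defect found in review. Every number in it is an assumption chosen only to show the arithmetic; substitute your own data.

# Roll the three cost elements into a cost per defect found in review.
# Every number here is an assumption chosen only to show the arithmetic.
checklist_hours_per_year = 20.0     # preparing and maintaining the checklist
reviews_per_year = 100              # the checklist cost is spread over these
reviewer_hours_per_review = 1.5     # preparation plus meeting, all reviewers
defects_found_per_review = 1.0      # average yield of a review
handling_hours_per_defect = 0.3     # record, track, fix, and verify each finding

cost_per_review = checklist_hours_per_year / reviews_per_year + reviewer_hours_per_review
cost_per_defect = cost_per_review / defects_found_per_review + handling_hours_per_defect
print(f"Cost per defect found in review: {cost_per_defect:.1f} hours")  # 2.0 hours

With these assumed figures, the cost works out to the two hours per defect used in the example that follows.
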

Once we know what it costs to find and fix defects in peer reviews, we can then compare those costs with those of finding the same types of defects during testing and having them reported from the field. The difference between those costs is the return we get from our investment in peer reviews. For example:

·         If the average defect we find in our peer reviews costs two hours after accounting for all of the costs just listed (Review = 2 hours)

·         We know that those same types of defects usually cost five hours when found in test (Test = 5 hours)

·         They cost 20 hours when found in the field (Field = 20 hours)

·         Our testing usually catches 75% of defects before they reach the field (Yield = 0.75)

We can then compute the ROI for our peer reviews as:

ROI = ( (Yield * Test) + ((1 - Yield) * Field) ) / Review
ROI = ( (0.75 * 5) + (0.25 * 20) ) / 2
ROI = 4.375
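
The same calculation, written out as a quick Python check using the example figures above:

# Recompute the ROI example: hours per defect by where it is found, and the
# fraction of escaping defects that testing catches before the field.
REVIEW, TEST, FIELD = 2.0, 5.0, 20.0  # hours per defect in review, test, field
YIELD = 0.75                          # fraction of defects caught in test

# Downstream cost a review-found defect would otherwise have incurred,
# divided by what it cost to find it in review instead.
roi = (YIELD * TEST + (1 - YIELD) * FIELD) / REVIEW
print(f"ROI = {roi}")  # ROI = 4.375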

This means that, for every person-hour we invest in peer reviews, we are saving more than four person-hours of effort during testing and after deployment. With numbers like this, we can see that those peer reviews are definitely cost-effective. But what effect will they have on our schedule? We can compute the schedule impact as:

Sched = Review - (Yield * Test)
Sched = 2 - (0.75 * 5)
Sched = -1.75


This means that, for every defect we find in our peer reviews, we are actually shortening our project schedule by nearly two hours. 
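
And the schedule figure in the same style; note that the formula above counts only the test time avoided, presumably because field fixes land after the project has already shipped and so do not affect its schedule.

# Schedule impact per defect found in review: the review time spent now,
# minus the in-project test time that defect would otherwise have consumed.
REVIEW, TEST = 2.0, 5.0  # hours per defect in review and in test
YIELD = 0.75             # fraction of defects testing would have caught

sched = REVIEW - YIELD * TEST
print(f"Schedule impact = {sched} hours per defect")  # -1.75 (a saving)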

In this example set of data, the peer reviews are saving both time and money on the projects. Achieving this sort of result is within your reach. It takes a small investment in preparing a checklist, and the fortitude to actually invest in doing the reviews. If you watch your data and focus your checklist on those few high-value defects, you can then enjoy the benefits of better-quality products without sacrificing time and money to achieve it!
 
