Suffering for Success

Summary:

One of the most valuable services a QA group provides is preventing failure. Ironically, if the group succeeds at this, QA might find itself unpopular or out of a job. Linda Hayes reveals how typical methods of measuring success can actually cause failure, especially when one group's success comes at another's expense.

You may ask, how can success turn into failure? Consider this...

A post-mortem review of the preventive measures taken during the Y2K scare churned up a lot of skepticism. Everyone complained about all the time and money wasted on preventing a major failure because it turned out to be no big deal. Excuse me? Doesn't the fact that it wasn't a disaster mean all that effort was well spent?

So, how can QA measure its value without making enemies or being penalized?

Defects Found
As reasonable as it may appear, measuring QA by the number of defects found does not work. For starters, it places QA in an adversarial role with development, because every win for QA is a loss for development. I worked on a project that awarded bonus points to development for delivering the fewest defects to QA, and to QA for finding the greatest number before release to production. Makes sense, right?

Wrong. This approach created a bizarre situation in which development and QA made careers out of parsing exactly what counted as a defect. Was it an undocumented feature? A missing requirement? User misbehavior? The debates were endless. Developers accused QA of deliberately testing absurd scenarios and performing improper actions just to cause errors, while QA accused developers of denying what were clearly failures or missing functionality.

Even more insidious was the black market that developed: developers would literally bargain with QA to track defects under the table—off the books—"just between you and me." This created discord within QA when one team member, who was plotting a transfer into development and wanted to curry favor, was found to be keeping a spreadsheet on the side of unreported defects. When the rest of the QA team found out about it, they were incensed because it cost them bonus points.

The developers argued that rewarding QA based on how many defects were found motivated QA to spend more time testing dark, unlikely corners than mainstream functionality that was likely to work. The problem is that users spend most of their time in those common areas, so failing to test them thoroughly invites higher-risk failures than any revealed under extreme and unusual conditions.

The worst outcome is that the defect-hunting mindset diverts QA from its true role: quality assurance. Testing, after all, is quality control, not assurance. Think about it—if you are paid to find problems, what motivation do you have to prevent them? You are essentially penalized for investing in the processes—requirements, reviews, inspections, walkthroughs, test plans, and so on—that are designed to nip issues in the bud.

Defects Released
So the logical way to reward QA, it would seem, is to measure defects that escape into production. The fewer the better, right?

Not necessarily. I know of another company that compensated the QA manager in this manner. In turn, she methodically and carefully constructed a software development life cycle aimed at producing the highest-quality product possible. Development chafed under what they perceived as onerous formality and time-consuming processes. Product management also complained about the lengthy test cycles. But she prevailed because the proof was there: product quality improved significantly, with virtually no high-priority defects reported in the field.

The manager took six months of maternity leave, and in her absence, developers began to make the case that the entire development process was too burdensome and that testing took way too long. They pointed out that the software was stable, so the elaborate QA edifice was excessive. The same tests had been run for years and always passed—who needed them?

Without the QA manager around to defend her strategy, the developers eventually won. The next release went out much faster, and everyone congratulated themselves on overthrowing an oppressive regime. The QA manager returned to find she had been supplanted and repositioned in a toothless role as owner of the process, with no power to enforce it. She left soon afterward.

Within months, serious defects began to appear in the field. In fact, one defect was so serious that it cost the company contractual penalties and attracted senior management's attention. When they asked the obvious question—how could this happen?—no one could answer. The supporting documentation—requirements, test plans, test results—no longer existed, since it took too long to produce and supposedly wasn't needed anyway.

Requirements Verified
It seems to me that the safest bet is to measure the number and priority of verified requirements. This has three key benefits (a rough sketch of such a measure follows the list):

  • The focus shifts to requirements, which moves the effort earlier in the development cycle where it belongs.
  • It reveals the inventory of functionality that the system contains. Most managers don't really grasp the scale of what QA is up against in trying to provide comprehensive coverage.
  • If the schedule would otherwise have to slip, requirements can be consciously reduced and the related risks managed, instead of just blindly cutting corners or letting serious defects go.

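By way of illustration (not anything the article prescribes), here is a minimal Python sketch of what such a measure might look like: the share of requirements verified, weighted by priority. The Requirement class, the verification_coverage function, and the priority weights are all invented for this example; a real team would pull the data from its requirements or test-management tool.

```python
# Hypothetical sketch: report QA progress as the priority-weighted share
# of verified requirements rather than a raw defect count.
from dataclasses import dataclass

# Illustrative weights; a real team would calibrate its own.
PRIORITY_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

@dataclass
class Requirement:
    req_id: str
    priority: str    # "high", "medium", or "low"
    verified: bool   # True once a passing test has covered it

def verification_coverage(requirements: list[Requirement]) -> float:
    """Return the priority-weighted fraction of requirements verified (0.0-1.0)."""
    total = sum(PRIORITY_WEIGHTS[r.priority] for r in requirements)
    verified = sum(PRIORITY_WEIGHTS[r.priority] for r in requirements if r.verified)
    return verified / total if total else 0.0

# Example inventory: managers see both the size of the job and what risk
# remains if low-priority items are dropped to meet a schedule.
reqs = [
    Requirement("REQ-001", "high", True),
    Requirement("REQ-002", "high", False),
    Requirement("REQ-003", "medium", True),
    Requirement("REQ-004", "low", False),
]
print(f"Weighted verification coverage: {verification_coverage(reqs):.0%}")
```

Reporting the number this way also exposes the full inventory of requirements, so trimming scope becomes a visible, conscious decision rather than a silent one.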
Do your team's measures of success actually invite failure? How does your team strive for success? What works for you?
