Dear Aunt Fern ... (This is What a Software Tester Does)

Summary:
Are you a tester who finds himself at a loss for words when well-meaning relatives ask, "Now, what is it that you do again, dear?" In a reprise of a piece from his bimonthly newsletter, Danny gives us the lowdown on the testing life and tries to eliminate some of the mystery surrounding his profession.

Fern Herring, a relative by marriage and one of the brave non-technical souls who try to understand what I write in my newsletter, recently asked:

"Please explain what this 'testing' is that you do, in lay language so I can understand it."

Thanks for asking! I bet many friends and relatives of testers have wondered the same thing. Even some people who work for high-tech companies are probably not quite sure. How does a tester describe what he does to friends, family, and non-technical business contacts? It's not easy to do! Here's my attempt.

I like to steal from a BASF slogan - "I don't make the software you use. I help make the software you use better." The purpose of most software testing efforts is to find problems (also called "bugs" or "defects") with the software before it is given to the customer. Note that testers rarely contribute directly to the code that makes up the software product. Software teams don't have foolproof methods for making software that works, so problems are inevitable. It's important to find as many of the problems as possible, though we can never find all of them. That's why I call myself a "software alchemist" - since no amount of engineering can make our software perfect, it seems that the only way out is a philosopher's stone that can magically turn our software lead into gold.

You've likely run into many bugs in the software you use. Even if you blamed yourself for the problems, I bet bugs in the software caused many of them. The fact that you saw them means that they slipped through the fingers of both the programmers and the testers. It would take a tremendous amount of effort for the testers to find nearly all of the bugs, and that would make the software you buy cost a lot more - perhaps ten times as much, or more. For NASA, it's worth it to do that much testing, but you're probably not willing to pay much more for software than you do now.

What do testers actually do? It starts out with planning, of course, though that part isn't much fun. Software projects change direction so often that I always have the feeling that our plans are for naught. Once a project is underway, testers may participate in reviewing the early documents produced by the software designers. Testers might not be experts in all the various technologies that the designers are linking together, but we're still good at finding mistakes and missing elements that could cause major problems if not discovered until late in the project. Oddly enough, testers often have a better big-picture grasp of the software system than the programmers, who focus on the details of a small part of it. Some testers also take on a "quality assurance" role, where they audit and improve the overall development process as well as the product.

Once the programmers have produced at least part of the software, it's time for the testers to start running it. The testers may have produced very detailed plans for how to test the software, especially if a flaw in the software could cause a safety risk. Sometimes the testing is "exploratory," with no detailed planning in advance. Usually the testing falls somewhere between these two extremes.

Testers will use the software's features and try to find any behavior that differs from the official documentation or that just doesn't seem right. Sometimes we write our own programs to automate the process. When we find a bug, we report it so designers and programmers can fix it. Writing an effective description of a bug is an art that gets better with experience.
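
To give you a small taste of what that automation can look like, here is a minimal sketch in Python. The "add" function is a made-up stand-in for a real piece of software, not anything from an actual product:

    # A tiny automated check, written in Python.
    # The "software under test" here is a made-up function that
    # adds two numbers - a stand-in for something much bigger.
    def add(a, b):
        return a + b

    def test_add():
        # The test states what the right answer should be, and
        # complains loudly if the software disagrees.
        assert add(2, 3) == 5, "2 plus 3 should be 5"
        assert add(-1, 1) == 0, "-1 plus 1 should be 0"

    if __name__ == "__main__":
        test_add()
        print("All checks passed - no bug found this time.")

A real product might have thousands of little checks like this one, run automatically every time the programmers change something.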

Testing starts by looking at individual features or maybe internal functions that the end user can't see. The programmers may do some of the testing at this level, though there's wide variance in the industry as to how much the programmers test their own code.

Later, testing moves to a higher level and looks at the software system as a whole. This requires more specialized skills than lower-level testing. At this point, we examine things such as whether the software runs fast enough, whether it can run for a long time without failing, and whether it fails if it encounters unexpected situations.
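
As a rough illustration, here is the kind of small harness a tester might write to ask those higher-level questions - again just a sketch in Python, with a made-up "operation" standing in for the real software:

    import time

    def operation():
        # A stand-in for the real software doing some work;
        # purely hypothetical.
        time.sleep(0.01)

    # Speed check: is a single run fast enough?
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"Too slow: took {elapsed:.3f} seconds"

    # Endurance check: does it keep working, run after run?
    for i in range(1000):
        operation()
    print("Survived 1,000 runs without failing.")

Real performance and endurance tests run for hours or days and measure much more, but the idea is the same: let the computer do the repetitive work and watch for anything that goes wrong.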

It's hard to grasp the complexity of creating software and verifying that it works. One testing sage (Boris Beizer in his 1996 speech "Software is Different") estimated that the engineering effort that goes into producing a word processing program eclipses the engineering that goes into a new supertanker and a 100-floor building combined. That's a lot of complexity crammed onto a computer disk! Many people who aren't computer experts are intimidated by computers, but rest assured that those of us who have some understanding of what makes them tick haven't really tamed the technology either.

Testers often disagree about the best way to approach our craft, so I'm sure that even this brief description will provoke the ire of some of my colleagues. As always, I invite your feedback.
