Interface or Interfere?

Summary:

One of the Holy Grails of automated quality toolsets is a fully integrated suite that seamlessly tracks the process all the way from requirements to test cases and on through to defect tracking. Such a suite makes for a great marketing pitch and sexy slideware, but in Linda Hayes' experience that is usually where its usefulness ends. The leap from theory to practice keeps falling short, and it makes her wonder whether the concept of a fully integrated suite is fundamentally flawed or whether it's just the implementation that needs attention. In this column, she begins her investigation by studying two test cases to decide whether these experiences are anomalies or the rule.

Tracing requirements to automated test cases seems natural and highly desirable, but frankly I have never seen this done and have puzzled over it. I have always assumed it was because requirements are either glossed over or so high level that they don't support true traceability. While that might be true in some cases, it does not tell the whole story.

One of the more sophisticated test managers I know, who makes a religion out of requirements, explained to me why she doesn't trace them through to her automated test cases.

Most toolsets allow requirements to be linked to test scripts, she pointed out, but anyone doing industrial-strength automation makes her scripts reusable and moves the test cases into data. You can theoretically put a requirement identifier into a data record, but that makes traceability and change-impact tracking tricky, because the data is usually stored as plain text while requirement identifiers are stored as database keys.

Furthermore, she noted that a well-written test case becomes, in effect, a requirement, so there is no need to duplicate the same information elsewhere. Her test data records all have fields that describe the condition being tested, which she uses to document requirements. In fact, she has found that a typical requirements management system allows too much leeway in how requirements are described, which leads to ambiguity and inconsistency, whereas a test case by necessity has to be detailed and objective enough to be executed.
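To make this concrete, here is a rough sketch of the kind of data-driven layout she describes. The CSV columns, the REQ identifiers, and the login_test stub are my own invention for illustration, not her actual framework; the point is only that each data row carries both a plain-text requirement identifier and a description of the condition under test, so the record reads as a requirement in its own right.

```python
import csv
import io

# Illustrative test data: each row is one test case. The "requirement"
# and "condition" columns document what is being verified, so the
# record doubles as the requirement. (Layout is hypothetical.)
TEST_DATA = """requirement,condition,username,password,expected
REQ-101,Valid credentials log the user in,alice,s3cret,success
REQ-102,Wrong password is rejected,alice,wrong,error
REQ-102,Blank password is rejected,alice,,error
"""

def login_test(username, password):
    """Stand-in for one reusable automated script (a stub here)."""
    return "success" if password == "s3cret" else "error"

def run_suite():
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = login_test(row["username"], row["password"])
        results.append({
            "requirement": row["requirement"],  # plain text, not a tool key
            "condition": row["condition"],
            "passed": actual == row["expected"],
        })
    return results

if __name__ == "__main__":
    for r in run_suite():
        status = "PASS" if r["passed"] else "FAIL"
        print(f'{status}  {r["requirement"]}  {r["condition"]}')
```

This also shows where tool-side linking breaks down: a requirements management system can attach REQ-101 to the one reusable script, but the real coverage lives in the individual data rows, and the requirement column there is just text, not a database key the tool can follow.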

Of course, not everyone has requirements, but everyone has defects. What about interfacing with a defect tracking system? After all, a comprehensive automated test suite should uncover issues that need to be captured and resolved. Again, it's not that simple.

We had a customer who asked that we integrate our automated test framework with his custom defect management system, which we did at significant cost and effort. When our next release came out, I contacted him to verify the changes against his interface, but to my surprise he said he wasn't using it. Of course, I wanted to know why.

He had three reasons. First, when an automated test failed, there were a number of potential causes: the test environment was incorrectly configured, the test data was out of sync, the test itself had an issue, or the software had an issue. Only one of those four possibilities was an actual software defect. As a rule, his testers would review the test logs for failures and then perform diagnostics, often including manual re-execution, to uncover the underlying issue.

This meant they had to cull out the failures that weren't software defects at all. The process was extremely time-consuming, because they ran lengthy test suites and a single test data problem could cause literally hundreds of failures. And even when the problem was a defect, it might cause multiple failures and therefore create duplicate issues. All this adding and closing of defects tainted their metrics by inflating the defect arrival and close rates, thereby invalidating the classic S-curve report used to predict release.
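To see why raw auto-filing misleads, consider a rough sketch of the kind of triage his testers did by hand: classify each failure by likely cause, collapse failures that share a cause, and treat only the software-related groups as candidate defects. The failure records, the classify helper, and the causes below are invented for illustration, not pulled from his system.

```python
from collections import defaultdict

# Hypothetical failure records pulled from an automated test log.
failures = [
    {"test": "order_entry_017", "message": "login rejected: account locked"},
    {"test": "order_entry_018", "message": "login rejected: account locked"},
    {"test": "invoice_003",     "message": "element 'Submit' not found"},
    {"test": "invoice_004",     "message": "total mismatch: expected 100.00, got 99.00"},
]

def classify(message):
    """Crude stand-in for the human diagnosis step: guess the likely cause.
    Only 'software' failures are candidate defects."""
    if "account locked" in message:
        return "test data"      # data out of sync with the environment
    if "not found" in message:
        return "test script"    # locator or script maintenance issue
    return "software"

# Collapse failures that share a cause and message into one candidate issue,
# so a single data problem does not turn into hundreds of defect reports.
candidates = defaultdict(list)
for f in failures:
    cause = classify(f["message"])
    candidates[(cause, f["message"])].append(f["test"])

for (cause, message), tests in candidates.items():
    action = "file ONE defect" if cause == "software" else "fix locally, no defect"
    print(f"{cause:12} x{len(tests)}  {message}  -> {action}")
```

Collapsing hundreds of raw failures into a handful of candidate issues, and filing only the genuinely software-related ones, is essentially the culling his testers were already doing by hand.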

Second, by the time they had done their research and reached their conclusions, they had more information and analysis to offer than was available in the test log itself. They still had to call up the issue in the defect management system and add the additional information by hand, so there were no real time savings.

Finally, he said, in an automated test, the script or step that failed was not necessarily the actual root of the problem. Often the genesis of the issue occurred earlier than when the failure was logged, so the information available in the test log was not germane. All in all, he decided the integration was more trouble than it was worth.

So I'm on a mission to find out whether these experiences are anomalies or the rule, and whether there is a way to approach integration that makes it productive. What have you found that works, or doesn't?
