Managing the Testing Process

Opening the Black Box
Summary:

Treating testing as a black box at the end of the project schedule invites failure. Testing and test preparation need to be open, managed, and manageable processes that everyone on the project team can understand and work with. Having insight into all testing requirements and desired outcomes is an absolute must for effectively managing a successful testing effort.

Planning for testing on a software project is often challenging for program managers. Test progress is frequently unpredictable, and painful schedule and feature "surprises" typically surface during software testing. Software testing is often viewed as an obstacle: more a problem to be endured than a vital step in the process. For this reason, testing is treated as a "black box" and addressed at the end of the schedule. While budget and time may be allocated for it, testing is not really managed in the same way as development. Typically, software development is measured in terms of overall progress in meeting functional and business goals. Software testing needs to be measured in similar terms to understand its true progress and make informed decisions. By considering testing dimensions other than cost and schedule, managers and other team members can better understand and optimize the testing process, in effect opening the black box and managing testing more effectively. In this way they can avoid costly and painful "surprises" late in the project.

Years ago I worked on a new "next-generation" project at a software company. Everything seemed to be going well until the time came to do quality testing. As the software approached the test phase, the schedule looked pretty good. There had been some slips, but the project seemed on target for the scheduled release date. That is, until it actually went into testing.

On this project, testing had been treated as a black box. Software went into the lab, the testers did whatever it is that testers do, and bug reports came out. Since it was a black box, it was difficult for management to understand or justify time, resources, or equipment for testing. Also, nobody outside of the testing group, including project management, really understood what the process was or how to determine whether or not testing was complete. Several weeks were allocated to do a single test run, after which it was anticipated that the software would be ready to release.

Unfortunately, that was not the way the project went. Instead of a single pass through testing, ten different versions of the software required complete testing runs. The schedule was hopelessly overrun, and instead of having a new release to announce at an important show, all the company had were some "closed-door" demos for selected customers.

Testing does not have to happen this way. Looking beyond simple cost and schedule considerations opens up the black box, helping managers make better decisions about testing. When the testing black box is opened, and enough detail is provided to managers, it becomes much easier to understand the tradeoffs and to make informed decisions about resources and schedules needed to test a product. Managers are able to plan for and guide testing from the start, rather than ignoring it and hoping for the best. When the entire project team is kept up to date on testing, understands the approach used, and can observe the progress of testing while it is being done, it is easier for the team to function effectively as problems are found and as new versions go through the testing phases of the project. There are other advantages to opening up testing as well.

Often, there are early warning signs that testing is going to have problems. These show up in the details of the analysis and design phases of the tests themselves. They appear in the form of incomplete or deferred work due to missing information, improperly managed problems recorded against key functionality, and other "small" indicators accumulating over time. If these indicators are spotted far enough ahead of time by managers, developers, and the testers themselves, work can be done to head problems off while they are still small. This in turn ensures that the testing group is better prepared for the software and that the software is better prepared for testing.

Organizations can avoid last-minute quality issues by addressing testing problems earlier in the process, when they are still small. Doing this requires better insight into a project than a Gantt chart can provide. During test development, management needs to know the status of test planning and preparation to properly gauge the readiness of the test team to test the software. This knowledge comes from understanding

  • the relationship between the software functionality and the testing to be done on that functionality
  • the relationship between the software design and the tests expected to verify that software
  • specific problems the tests are intended to address

It is not just management that needs this information. Software developers and testers also need to be able to see and work with it, and to understand the parts that are relevant to their own work. With adequate information, testers can plan and perform their tasks more efficiently and get an accurate picture of the system. For example, it is not uncommon for a set of unmeasured functional tests to test one part of the code repeatedly while partially or completely missing other parts. In addition, feedback and questions from testers can help prevent problems later, as inconsistencies and inaccuracies are found. Likewise, developers are able to anticipate potential problems with either the software or the tests, and work with the testers to address them before they have a significant impact.
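To make this concrete, here is a minimal sketch (in Python, with requirement and test names invented purely for illustration) of the kind of requirement-to-test mapping described above. Even a structure this simple makes it obvious when a requirement has no test planned against it, or when many tests pile onto the same area.

    # Minimal traceability sketch: map each requirement to the tests that
    # exercise it, then report requirements that no test touches.
    # Requirement IDs and test names are invented for illustration.

    requirements = ["REQ-001 login", "REQ-002 password reset",
                    "REQ-003 report export", "REQ-004 audit log"]

    # Which requirements each planned test claims to verify.
    tests = {
        "test_login_nominal":      ["REQ-001 login"],
        "test_login_bad_password": ["REQ-001 login"],
        "test_reset_email_sent":   ["REQ-002 password reset"],
    }

    covered = {req for reqs in tests.values() for req in reqs}
    untested = [req for req in requirements if req not in covered]

    print(f"{len(covered)}/{len(requirements)} requirements have at least one test")
    for req in untested:
        print("No test planned for:", req)

Run against real project data, the same view quickly shows which parts of the functionality are being tested repeatedly and which are being missed.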

While testing is in progress, results should be measured in terms of software functionality tested, degree of code coverage, progress against schedule, and problems found. Usually these factors provide a much clearer picture of the status of the testing (and the overall quality of the software) than how long the testing has been running.

Compiling and communicating complete information about test activities provides a common way for everyone on the project to track test activity progress. Testing status needs to be presented to each member of the team in a way that is understandable and that lets them make the best decisions possible. The project manager needs to have a comprehensive view of the progress of test preparation and execution, understanding at a global level what tests are being designed, what tests were run against different versions of the software, what areas of functionality are affected by significant bugs, and what parts of the code will be affected by the bugs that have been found. Developers need to have information specific to their part of the software process, knowing what tests will touch on their area, what problems have been found in their area, how significant those problems are, and what testing has actually been done on their software. Testers and QA staff need to have a different, but similarly detailed view of the system. In short, each person on the project has a different role, and so each needs information that is appropriate to their function.

To support these different needs, different levels of detail in each of the following categories must be provided to each group.

  1. Schedule: What tests will be run? When will the tests be ready? How much effort will it take? When will it be complete?
  2. Functionality: What requirements will be tested and where? How will tests divide up application requirements? How much of the functionality has been tested for a given version of the software?
  3. Code: What parts of the code are exercised by the tests? What problems have been found? How much of the code in a given version has been executed during testing?
  4. Problems: What problems are tested for? What problems have been found? How significant are the problems? What parts of the software are affected by the problems? What versions are affected by the problems? What requirements are impacted by these problems? What is the impact of these problems on the testing?

The questions posed for each of these areas must be carefully examined in order to properly understand and track the status of project test activities. In addition, having a solid understanding of the planning and preparation requirements for each testing phase is critical to making correct decisions about project schedule, status, and release.


Table 1 below summarizes commonly used testing metrics and where in the testing process they apply.

Functional Metrics
  Test Development Metrics
    • Number of requirements allocated by test
    • % of requirements by test development phase
  Test Execution Metrics
    • Number of requirements verified
    • % of requirements tested by version
    • % of requirements tested by major software component
    • Stability of server/platform per user

Code Metrics
  Test Development Metrics
    • % of code covered per test
    • % of code coverage per major software component
  Test Execution Metrics
    • Code coverage of tests completed for each version under test

Problem Metrics
  Test Development Metrics
    • Problems tested for in regression tests
    • Extreme conditions tested for in functional tests
  Test Execution Metrics
    • Problems found per version tested
    • Problems found per software component
    • Number of critical/high problems found per version

Schedule Metrics
  Test Development Metrics
    • % completion of functional test requirements by testing phase
    • Weighted functional requirement completion
  Test Execution Metrics
    • Tests completed per version
    • Estimated number of days to complete
    • Test cycle completion time
    • Time to complete testing per functional area

TABLE 1: Commonly Used Testing Metrics
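To show how a couple of the execution metrics in Table 1 might be computed, the following sketch (Python, with made-up test-run records) derives "% of requirements tested by version" and "problems found per software component." In practice the same numbers would come out of a test management or defect tracking tool rather than a hand-built list.

    from collections import Counter

    TOTAL_REQUIREMENTS = 40  # assumed size of the requirements baseline

    # Made-up execution records: (version, requirement ID, component, problems found)
    runs = [
        ("1.0", "REQ-001", "ui",      0),
        ("1.0", "REQ-002", "ui",      2),
        ("1.0", "REQ-007", "reports", 1),
        ("1.1", "REQ-001", "ui",      0),
        ("1.1", "REQ-002", "ui",      0),
        ("1.1", "REQ-007", "reports", 0),
        ("1.1", "REQ-010", "auth",    3),
    ]

    # % of requirements tested by version
    for version in sorted({v for v, *_ in runs}):
        tested = {req for v, req, *_ in runs if v == version}
        print(f"Version {version}: {len(tested) / TOTAL_REQUIREMENTS:.0%} of requirements tested")

    # Problems found per software component
    problems = Counter()
    for _, _, component, count in runs:
        problems[component] += count
    for component, count in problems.most_common():
        print(f"{component}: {count} problems found")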

Managing a successful testing effort requires a thorough understanding of the project schedule. The process begins during test planning and involves not only understanding how long it will take to prepare tests, but also estimating how long it will take to run them and what resources will be required. During planning, decisions about required resources and the scope of testing can be made before testing actually begins. Tradeoffs between resources and time are possible at this stage. Also, decisions about what tests to automate and how much to automate can be made and tracked. At the same time, estimates of costs and numbers of test runs for different versions can be made and refined as test time approaches. This allows estimates of how much testing needs to be done concurrently, and how to plan for it. As testing is performed, the number of times the software needs to be tested and the amount of time each version takes to test can be factored into decision making. However, it is not possible to make scheduling decisions without knowing the functionality, code, and problem aspects of the testing to be done.
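As a rough illustration of the scheduling arithmetic, the sketch below uses invented figures for the number of versions expected to need a full test pass, the length of one test cycle, the degree of concurrency, and the savings from automation. The numbers are assumptions for illustration, not recommendations.

    # Back-of-the-envelope test execution estimate. All numbers are invented.
    versions_expected   = 6      # builds expected to need a full test pass
    days_per_full_cycle = 10     # calendar days for one complete, manual test cycle
    concurrent_cycles   = 2      # cycles the lab and staff can run in parallel
    automation_factor   = 0.7    # fraction of cycle time remaining after automation

    days_per_cycle = days_per_full_cycle * automation_factor
    calendar_days = (versions_expected / concurrent_cycles) * days_per_cycle

    print(f"Each cycle: {days_per_cycle:.1f} days after automation savings")
    print(f"Estimated calendar time for test execution: {calendar_days:.0f} days")

Even a crude model like this makes the tradeoffs visible: adding a concurrent test environment or automating more of a cycle changes the release date in ways that can be discussed before testing starts.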

Functionality is a critical measure of testing completeness. Decisions need to be made up front about the degree to which functionality will be tested, and which parts of the functionality will be emphasized during testing. Functionality is usually described in terms of requirements, which are divided among different functional and system tests according to the nature of the requirements and the needs of the system. Often, for reasons of efficiency and to better understand how the software will perform after release, requirements are allocated to tests very differently from how they are allocated among the components of the software. For this reason, it may be necessary to run multiple tests from several areas to completely test a software component. Following this process can reveal problems in areas of the software that were passed over in earlier tests. This is also why it is important to see testing from the code perspective.

When considering functionality in testing, it is important to think about how the functionality is being tested. It is common to develop a relatively "quick-and-dirty" set of tests that exercises the functionality of the code in a "nominal" way, to make sure that all of the pieces are in place and that a given version of the software is worth the added effort of a complete test pass. The complete testing suite should be targeted at testing the functionality to a more exacting standard, in essence deliberately trying to push the software to its limits. This type of functional testing includes load, performance, and endurance testing where appropriate.
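One practical way to keep the "quick-and-dirty" nominal pass separate from the deeper suite is to tag tests and select them at run time. The sketch below assumes pytest is the test runner and uses its custom markers; the marker names, the stand-in function under test, and the limit value are all invented for illustration.

    # Sketch: tagging a fast nominal check and a deeper limit test with pytest
    # custom markers (register "smoke" and "full" in pytest.ini to avoid warnings).
    # Run the nominal pass with:   pytest -m smoke
    # Run the complete suite with: pytest -m "smoke or full"
    import pytest

    def export_report(rows: int) -> str:
        # Stand-in for the code under test; a real suite would import the product code.
        return "ok" if rows <= 1_000_000 else "error"

    @pytest.mark.smoke
    def test_report_export_nominal():
        # Nominal case: a small, well-formed report should export cleanly.
        assert export_report(rows=10) == "ok"

    @pytest.mark.full
    def test_report_export_at_limit():
        # Deliberately push toward the assumed documented limit.
        assert export_report(rows=1_000_000) == "ok"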

Understanding how tests and problems map to the code is necessary to understand how far along testing has progressed, what remains to be tested, and how complete the testing has been. A detailed knowledge of the code is not required for the entire project team. However, management does need to understand how much of the code has been tested, how much remains to be tested, and how many modules are affected by problems discovered during testing. With this information, informed decisions can be made about the consequences of fixing or leaving problems alone. Armed with such information about test coverage, it is possible to understand how thoroughly the software is being tested, and how much of the software is being exercised by the tests. Often this comes as an unpleasant but enlightening surprise the first time it is measured.

When considering code coverage as a measure of the completeness of testing, it is important to decide what a reasonable level of coverage is for the software being tested. There are many different ways to measure completeness of code coverage, some of which are extremely difficult to achieve for a typical piece of software. These higher levels of coverage are commonly reserved for software with a correspondingly high level of criticality: either software that is widely used, such as operating systems or Web servers, or software that is used in very critical functions, such as medical devices or aircraft controls. Regardless of the project, however, teams need to decide on a level of coverage that is most appropriate to the particular software under test.
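As one concrete way to measure coverage and compare it against a chosen target, the sketch below uses the coverage.py library; the toy function, the single-branch "test," and the 80 percent target are assumptions for illustration only.

    # Sketch: measuring statement coverage with coverage.py (pip install coverage)
    # and comparing it against a project-chosen target.
    import coverage

    cov = coverage.Coverage()
    cov.start()

    def classify(value):
        if value < 0:
            return "negative"      # never reached by the call below
        return "non-negative"

    classify(5)                    # a "test" that exercises only one branch

    cov.stop()
    cov.save()
    total = cov.report(show_missing=True)   # prints a per-file table, returns total %

    TARGET = 80.0                           # assumed target for this project
    print("Coverage target met" if total >= TARGET else "Coverage below target")

The interesting output is usually not the single percentage but the list of missed lines and modules, which shows exactly where the tests are not reaching.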

Measuring the impact and consequences of problems that arise during testing is a critical step in the process. This should include how much of the software is affected by a given problem, at what point during testing a problem was found, and what kinds of problems regression tests are attempting to uncover. This information is needed to monitor the overall progress of the software through testing and to make informed decisions about software release. So by combining all of the different perspectives of schedule, functionality, code, and problem resolution, it is possible to understand and manage software testing, rather than treating it as a black box.
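A simple roll-up like the sketch below (Python, with an invented problem list) groups open problems by severity and affected area, which is the kind of view that supports a release decision.

    from collections import defaultdict

    # Invented open-problem records: (severity, affected component, found in version)
    open_problems = [
        ("critical", "auth",    "1.1"),
        ("high",     "reports", "1.0"),
        ("high",     "reports", "1.1"),
        ("low",      "ui",      "1.1"),
    ]

    by_severity = defaultdict(list)
    for severity, component, version in open_problems:
        by_severity[severity].append((component, version))

    for severity in ("critical", "high", "medium", "low"):
        issues = by_severity.get(severity, [])
        if issues:
            areas = ", ".join(sorted({c for c, _ in issues}))
            print(f"{severity}: {len(issues)} open (areas: {areas})")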

The catch is that while these elements of information are usually present in a software project, they are often difficult to find. Typically, such information is also presented at a level of detail and in terminology understandable only to testers and developers. Recently, test management tools have been developed to collect and present this information in forms better targeted to the different members of the software development team. For example, higher-level progress and statistical information is presented to management in a simple graphical form, while more detailed technical information is available in drill-down views for users who need that level of detail. These tools can be set up to provide a common basis for discussion and decision making during the testing cycle. They can also help management avoid treating testing like a black box.

As we have seen, treating testing as a black box at the end of the project schedule invites failure. Testing and test preparation need to be open, managed, and manageable processes that everyone on the project team can understand and work with. Having insight into all testing requirements and desired outcomes is an absolute must for effectively managing a successful testing effort.

Finally, having current information available for testing is important to prepare and manage the project for testing. Testing progress needs to be measured in terms of schedule, functionality, code coverage, and problems. While the exact way in which this information is presented will vary according to the needs of the organization and the project, these four measures of progress need to be available in a form that allows each member of the team to understand the current status of testing, what needs to be done, and how that will affect their part of the project. Using that knowledge, the team can ensure the success of testing and remove one of the major stumbling blocks to a successful project and a successful product.
