Introducing Automated Testing to a Project

This article appeared in Methods & Tools, Summer 1999 (pp. 11-15). The text originally appeared in and is adapted from "Automated Software Testing" (sections of chapter 4), copyright AWL, all rights reserved, ISBN 0-201-43287-0.

At the start, a new candidate for (a) paradigm (change) may have few supporters, and on occasions the supporters' motives may be suspect. Nevertheless, if they are competent, they will improve it, explore its possibilities, and show what it would be like to belong to the community guided by it. And as that goes on, if the paradigm is one destined to win its fight, the number and strength of the persuasive arguments in its favor will increase. (Thomas Kuhn, "The Structure of Scientific Revolutions")

Introduction
Today's software managers and developers are being asked to turn around their products within ever-shrinking schedules and with minimal resources. This reflects both a push from the federal government and commercial industry toward streamlining the product acquisition life cycle and the competitive necessity, within the commercial software product industry, of getting a product to market early.

In an attempt to do more with less, organizations want to test their software adequately, but as quickly as possible. Faced with this reality, software managers and developers have little choice but to introduce automated testing to their projects. Yet while they need to introduce automated testing, software professionals may not know what is involved in bringing an automated test tool onto a software project, and may not be familiar with the breadth of application that automated test tools have today.

Manual testing is labor-intensive and error-prone, and does not support the same kind of quality checks that are possible through the use of an automated test tool. Humans make mistakes, especially when mundane, repetitive tasks are involved. Computers are especially effective at performing these mundane and repetitious tasks without the mistakes. The introduction of automated test tools helps to replace archaic and mundane manual test processes with a more professional and repeatable automated test environment that promotes test engineer retention and improves test engineer morale.

Introducing Automated Testing
A new technology is often met with skepticism, and software test automation is no exception. How test teams introduce an automated software test tool on a new project is nearly as important as selecting the most appropriate test tool for the project. A tool is only as good as the process used to implement it.

Over the last several years, test teams have largely implemented automated testing tools on projects without having a process or strategy in place that describes in detail the steps involved in using the test tool productively.

This approach commonly results in the development of test scripts that are not reusable, meaning that a test script serves a single test but cannot be applied to a subsequent release of the software application. With each incremental software build, these test scripts have to be recreated repeatedly and adjusted many times to accommodate even minor software changes. This approach inflates the testing effort and brings with it schedule slips and cost overruns.
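A common way to avoid this trap is to separate the test logic from the test data, so that a new software build usually requires only a data-file update rather than a rewritten script. The sketch below illustrates the idea in present-day Python, purely as an illustration; the calculate_discount function, the discount_cases.csv data file, and its column names are hypothetical stand-ins for the application under test and its test data, not features of any particular test tool.

    # A minimal sketch of a data-driven test script. The test logic stays
    # stable across builds; only the external data file changes when the
    # application's expected behavior changes.
    import csv

    def calculate_discount(order_total, customer_type):
        # Hypothetical stand-in for the application function under test.
        rate = 0.10 if customer_type == "preferred" else 0.0
        return round(order_total * rate, 2)

    def run_discount_tests(data_file="discount_cases.csv"):
        """Verify every case listed in the data file and return the failing ones."""
        failures = []
        with open(data_file, newline="") as handle:
            for case in csv.DictReader(handle):
                expected = float(case["expected_discount"])
                actual = calculate_discount(float(case["order_total"]),
                                            case["customer_type"])
                if abs(actual - expected) > 0.005:
                    failures.append((case, actual))
        return failures

    if __name__ == "__main__":
        failed = run_discount_tests()
        print(f"{len(failed)} failing case(s)")

Because the expected results live in the data file rather than in the script itself, an incremental build that changes the discount rules means editing one data file, not recreating the test script.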

Perhaps the most dreaded consequence of an unstructured test program is the need to extend the period of actual testing. Test efforts that drag out unexpectedly tend to receive a significant amount of criticism and unwanted management attention. Unplanned extensions to the test schedule may have several undesirable consequences for the organization, including loss of product market share or loss of customer or client confidence in, and satisfaction with, the product.

On other occasions, the test team may attempt to implement a test tool too late in the development life cycle to accommodate the team's learning curve for the tool. The test team may find that the time lost while learning to work with the tool, or ramping up on its features and capabilities, has put the test effort behind schedule. In such situations, the team may become frustrated with the tool and even abandon it in order to achieve short-term gains in test progress. The team may be able to make up some time and meet an initial test execution date, but these gains are soon forfeited during regression testing and subsequent rounds of testing.

In the preceding scenarios, the test team may have had the best intentions in mind but was simply unprepared to exercise the best course of action. The test engineer did not have the requisite experience with the tool or had not defined a way of successfully introducing it. What happens in these cases? The test tool itself usually absorbs most of the blame for the schedule slip or the poor test performance. In fact, the real underlying cause of the failure was the absence of a defined test process or, where one was defined, the failure to adhere to it.

The fallout from a bad experience with a test tool on a project can have a ripple effect throughout an organization. The experience may tarnish the reputation of the test group.

Product and project managers' confidence in the tool may be shaken to the point where the test team has difficulty obtaining approval for the use of a test tool on future efforts. Likewise, when budget pressures materialize, planned expenditures for test tool licenses and related tool support may be scratched.

By developing and following a strategy for rolling out an automated test tool, the test team can avoid having to make major unplanned adjustments throughout the test process. Such adjustments often prove nerve-wracking for the entire test team. Likewise, projects that require test engineers to perform hundreds of mundane tests manually may experience significant turnover of test personnel.

The test tool introduction process is essential to the long-term success of an automated test program. Test teams need to view the introduction of an automated test tool into a new project as a process, not an event. The test tool needs to complement the process, and not the reverse.

Test Process Analysis
The test team initiates the test tool introduction process by analyzing the organization's current test process. Generally, some method of performing tests is already in place, so the exercise of process definition itself may actually result in process improvement. In any case, process improvement begins with process definition.

The test process must be documented in such a way that it can be communicated to others. If the test process is not documented, it cannot be communicated or executed in a repeatable fashion, and a process that is not documented or communicated is often not implemented at all.

In addition, a process that is not documented cannot be consciously and uniformly improved. A documented process, on the other hand, can be measured and therefore improved.

If the organization's overall test process is not yet documented, or is documented but outdated or inadequate, the test team may wish to adopt an existing test process, in whole or in part. For example, the test team may adopt the Automated Test Life-cycle Methodology (ATLM), outlined in the book "Automated Software Testing," as the organization's test process.

When defining or tailoring a test process, it may prove useful for the test engineer to review the organization’s product-development or software-development process document, when available.

Process Review
The test engineer needs to analyze the existing development and test process. During this analytical phase, the test engineer determines whether the current testing process meets the defined prerequisites listed below.

  • Testing goals and objectives have been defined
  • Testing strategies have been defined
  • Tools needed are available to implement planned strategies
  • A testing methodology has been defined
  • The testing process is communicated and documented
  • The testing process is being measured
  • The testing process implementation is audited
  • Users are involved throughout the test program
  • Test team is involved from the beginning of the system development lifecycle
  • Testing is conducted in parallel to the system development lifecycle
  • Schedule allows for process implementation
  • Budget allows for process implementation
  • The organization's intent to comply with industry quality and process maturity guidelines (e.g., CMM, ISO) is understood

The purpose of analyzing the organization's test process is to identify the test goals, objectives, and strategies that may be inherent in it. These top-level elements of test planning are the cornerstones upon which a project's test program is developed.

The purpose of documenting the test tool introduction process is to ensure that the test team has a clearly defined way of implementing automated testing, so that the team can fully leverage the functionality and time saving features of the automated test tool.

The additional time and cost associated with documenting and implementing a test tool introduction process is sometimes an issue. A well-planned and well-executed process, however, will pay for itself many times over by ensuring a higher level of defect detection and fewer fixes to fielded software, shortening product development cycles, and providing labor savings. A test team that is disciplined in defining test goals, and that reflects those goals in its defined processes, in the skills of its staff, and in the selection of a test tool, will perform well. It is this kind of discipline, exercised incrementally, that supports the test team's (and the entire organization's) advancement in quality and maturity from one level to the next.

Safeguard Integrity of the Automated Test Process
To safeguard the integrity of the automated test process, the test team needs to exercise new releases of an automated test tool in an isolated environment; it can then validate that the tool performs up to product specifications and marketing claims. The test team should also verify that the upgrade will run in the organization's current environment.

Although the previous version of the tool may have performed correctly, and a new upgrade may perform well in other environments, the upgrade might still adversely affect the team's particular environment. Therefore, the test team needs to make sure that the test of the tool upgrade is performed in an isolated environment.
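One way to make this check repeatable is sketched below in Python, assuming a test tool that can be driven from the command line; the tool command, its --run option, the smoke-script names, and the baseline file are all hypothetical, chosen only to illustrate comparing a candidate release against a recorded baseline before it leaves the isolated environment.

    # A minimal sketch, under the assumptions above: replay a small smoke suite
    # with the candidate tool release in the isolated environment and compare
    # the pass/fail results against the baseline recorded with the current release.
    import json
    import subprocess

    SMOKE_SCRIPTS = ["login_smoke", "order_entry_smoke"]  # hypothetical script names

    def run_smoke_suite(tool_command):
        """Run each smoke script with the given tool executable and record pass/fail."""
        results = {}
        for script in SMOKE_SCRIPTS:
            completed = subprocess.run([tool_command, "--run", script])
            results[script] = (completed.returncode == 0)
        return results

    def upgrade_matches_baseline(candidate_tool, baseline_file="tool_baseline.json"):
        """Return True only if the candidate release reproduces the baseline results."""
        with open(baseline_file) as handle:
            baseline = json.load(handle)
        return run_smoke_suite(candidate_tool) == baseline

Only when the candidate release reproduces the baseline results in isolation would it be promoted into the team's working environment.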

Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process.

Although process definition, metric gathering and process improvement activities can be expensive and time-consuming, the good news is that creating and documenting standards and procedures for an automated test program is no more expensive than the same activity for a manual test program. In fact, use of an automated test tool with scripting, test identification, and automatic documentation capabilities can reduce costs by providing some of the framework and content required.

Considering the Use of a Test Tool on a Particular Project
Once the test engineer has reviewed the test process and defined test goals, objectives, and strategies, the test engineer can decide whether to continue considering the use of an automated test tool. Specifically, the test engineer seeks to verify that the previously identified automated test tools will actually work in the environment and effectively meet the system requirements.

Review System Requirements. The first step in test tool consideration is to review the system requirements. The test team needs to verify that the automated test tool can support the user environment, computing platform, and product features. If a prototype or part of the system under test already exists, the test team should ask for an overview of the system so that an initial determination can be made of the specific sections of the application that automated testing can support.

Review Project Schedule. Next, the test schedule needs to be reviewed. Is there sufficient time left in the schedule or allocated within the schedule to support the introduction of the test tool? Remember that automated testing should ideally be incorporated at the beginning of the development life cycle. The project schedule may need to be adjusted to include enough time to introduce an automated testing tool.

Manage Expectations. During test tool consideration, the automated test tool should be demonstrated to the new project team, enabling all pertinent individuals to gain an understanding of the tool's capabilities. Project team personnel in this case should include application developers, test engineers, quality assurance specialists, and configuration management specialists. Remember that software professionals on the project may have preconceived notions about the capabilities of the test tool, which may not match the tool's actual application on the project.

Test Tool Compatibility. If part of the application exists at the time of test tool consideration, conduct a test tool compatibility check. Install the testing tool in conjunction with the application and determine whether the two are compatible. One special concern is the availability of memory to support both the application and the automated test tool. Another concern is the compatibility of third-party controls (widgets) used in the application. If the compatibility check turns up problems, the test team will need to investigate whether work-around solutions are possible.
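As a small illustration of the memory concern, the following sketch checks whether a workstation has enough free memory to host the application under test and the test tool at the same time; the footprint figures are assumptions, and psutil is a third-party Python package used here purely for the example.

    # A minimal sketch: compare available memory against the assumed combined
    # footprint of the application under test and the automated test tool.
    import psutil  # third-party package: pip install psutil

    APPLICATION_MB = 512   # assumed footprint of the application under test
    TEST_TOOL_MB = 256     # assumed footprint of the capture/replay tool
    HEADROOM_MB = 128      # margin for the operating system and recording overhead

    def enough_memory_for_both():
        """Return True if free memory covers the assumed combined footprint."""
        available_mb = psutil.virtual_memory().available / (1024 * 1024)
        return available_mb >= APPLICATION_MB + TEST_TOOL_MB + HEADROOM_MB

    if __name__ == "__main__":
        if enough_memory_for_both():
            print("Sufficient memory for the application and the test tool.")
        else:
            print("Insufficient memory; investigate a work-around before committing to the tool.")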

Define Roles and Responsibilities. The use of automated test tools with a particular application requires the services of a test team that has the appropriate blend of skills to support the entire scope of the test effort. Roles and responsibilities need to be clearly defined and the skills and skill levels of test team personnel need to be considered carefully.

Technical Expertise. Another element of test tool consideration is determining whether the test team has sufficient technical expertise to take advantage of the tool's capabilities. If this technical expertise is not resident within the test team, individuals who can mentor the team on the advanced features of the test tool might be assigned to the project on a short-term basis. Another option is test tool training for all test team personnel.

After completing the test tool consideration phase, the test team can perform the final analysis necessary to support a decision of whether to commit to the use of an automated test tool for a given project effort.

Once the test team has concluded that the test tool is appropriate for the current project, it continues with the ATLM by performing Test Planning, Test Analysis & Design, and Test Development. The outcomes of the activities discussed here need to be recorded as part of the test planning activities documented within the test plan.
