What Not to Test When It's Not Your Code

Summary:

This article is a continuation of a previous write-up, "What to Test When It's Not Your Code." As mentioned there, test strategies should be markedly different and more flexible when testing code delivered by a vendor external to the organization. Likewise, the rationale for deciding what does not need to be tested, or what gets the lowest testing priority, should differ for external software products from the rationale used for in-house products. The difference comes down to the risk the third-party application poses to the organization's daily operations, and the credibility of the vendor can also play a major role when deciding what takes a lower priority in testing.

In almost all software projects, the test team poses the same valid question: What must be tested, and what can be left out? The ideal answer is that everything should be tested, but achieving that is unrealistic. Most of the time, prioritization is necessary and plays a very important role in getting the maximum amount of testing executed before the project goes live in production. Test analysts should make these decisions based on their knowledge, their experience, and the testing techniques and tools available to them.
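To make the prioritization concrete, here is a minimal sketch of one common way to rank test cases: a risk score computed as likelihood of failure multiplied by business impact. The test-case names and ratings below are hypothetical and only illustrate the idea; any comparable risk model the organization already uses will do.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: int  # 1 (unlikely) to 5 (very likely)
    business_impact: int     # 1 (minor) to 5 (critical)

    @property
    def risk_score(self) -> int:
        # Simple risk model: likelihood multiplied by impact.
        return self.failure_likelihood * self.business_impact

# Hypothetical candidates; in practice these come from the project's own risk assessment.
candidates = [
    TestCase("core login flow (vendor standard)", 1, 5),
    TestCase("custom invoice-export interface", 4, 5),
    TestCase("cosmetic report branding change", 3, 1),
]

# Execute the highest-risk cases first; defer or drop the lowest-risk ones if the schedule forces a cut.
for tc in sorted(candidates, key=lambda t: t.risk_score, reverse=True):
    print(f"{tc.risk_score:>2}  {tc.name}")

Sorting by the score makes the trade-off explicit and easier to defend to management when something has to be cut.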

But when we have to decide what will not be tested in a vendor-supplied application, extreme caution must be exercised over and above all the knowledge and experience of a test analyst. Management support is imperative, and due diligence is required to make this decision. In this article, I have outlined some points to take into consideration.

Insight into the Quality of the Vendor's Product

Most vendors are selected through a tender or bidding process. When deciding on a vendor, a few things besides cost should be considered, such as:

  • How many customers does the vendor have?
  • Is the vendor willing to provide a reference site to verify successful implementation of their software?
  • Is the vendor ISO or CMM certified?
  • If the vendor is not quality certified, are they willing to demonstrate their project processes to the customers?

From the above points, the project sponsor and the steering committee can get some idea of the risk that poor quality in the vendor's software poses to the implementation. In most cases, overconfidence in that quality leads to poor test planning, which can blow the project schedule and budget out of proportion and chew up all the contingencies. It can even lead to the extreme decision of canceling the implementation.

Getting an Idea of the Standard Software versus the Customizations
Software packages bought from a vendor rarely address all the business requirements of the organization out of the box; some degree of customization is almost always involved. If the core product has proven quality in terms of stability and functionality, the focus of testing should be on the changes introduced by the customizations. In most cases, the core product has become stable over time. If a new interface to the core application is introduced, it should be tested as part of the customization to make sure the display and the values are correct. Test cases can be designed for maximum coverage of the business requirements addressed by the customizations, and when the application is tested end to end, these scripts can be designed to touch the core of the application as well. This ensures that an appropriate amount of testing is done on the product as it is actually implemented.
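As a rough illustration of a customization-focused test that still touches the core, here is a minimal pytest-style sketch. The invoice lookup, the export_invoice customization, and the fake core are all hypothetical stand-ins, not part of any real vendor product.

class FakeVendorCore:
    """Stands in for the stable vendor core during integration-style testing."""
    def get_invoice(self, invoice_id):
        return {"id": invoice_id, "total": 120.50, "currency": "USD"}

def export_invoice(core, invoice_id):
    """Hypothetical customization: reformat a core invoice for an external feed."""
    invoice = core.get_invoice(invoice_id)
    return f'{invoice["id"]},{invoice["total"]:.2f},{invoice["currency"]}'

def test_custom_export_shows_core_values_correctly():
    # The assertion targets the customization, but the call path exercises
    # the core lookup as well, covering the new interface end to end.
    line = export_invoice(FakeVendorCore(), "INV-42")
    assert line == "INV-42,120.50,USD"

Run under pytest, one such script verifies both the customized formatting and that the values coming out of the core are displayed correctly.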

Levels of Testing That Need to Be Done
There are several levels of testing that can be done on a project. An ideal project includes a requirements review, an architectural review, design reviews, unit testing, system and integration testing, and user acceptance testing. For the category of projects discussed here, all of these verification processes still add value, but the emphasis placed on each of them is not the same as in an in-house software development project.

For example, there should still be an extensive requirements review. The architectural review will be limited in scope and will have to focus more on the customized parts than on the core application because, in most cases, the core architecture is essentially fixed (which can itself cause performance issues in the application). The design reviews should likewise focus on the customizations rather than on the core application. The extent of unit testing required is also not uniform: more unit testing should be carried out on the customized code than on the core application code. The changes made to the standard application to meet the business requirements should be the main focus of system and integration testing.
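To show what the heavier unit-testing emphasis on customized code can look like, here is a small sketch using Python's unittest.mock. The apply_regional_tax function and its rates are hypothetical; the point is that the vendor core is treated as an already-trusted, mocked dependency while the customization's own logic gets the detailed checks.

from unittest.mock import Mock

def apply_regional_tax(core, invoice_id, rate):
    """Hypothetical customization: add a locally required tax to a core invoice total."""
    total = core.get_invoice(invoice_id)["total"]
    return round(total * (1 + rate), 2)

def test_tax_applied_to_core_total():
    core = Mock()
    core.get_invoice.return_value = {"total": 100.00}
    assert apply_regional_tax(core, "INV-1", 0.05) == 105.0
    core.get_invoice.assert_called_once_with("INV-1")

def test_zero_rate_leaves_total_unchanged():
    core = Mock()
    core.get_invoice.return_value = {"total": 59.99}
    assert apply_regional_tax(core, "INV-2", 0.0) == 59.99

The mock keeps the test fast and independent of the vendor's environment, which is exactly what lets the team spend its unit-testing effort on the code it actually wrote.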

All the business requirements as interpreted by the users should be verified in the user acceptance testing phase.

Quality Ownership from the Vendors
In most cases, the contract between the vendor and the customer is legally binding in terms of delivery schedules, and it ties the delivery dates to the payment schedules. The contract may or may not have clauses about the quality of the software delivered. If software quality is highlighted as an important clause in the contract, it is much easier to get the vendor to test the product before delivering it. A sign-off on testing by the vendor's project manager before each release, records of test results, and a copy of the vendor's test beds can all improve their quality practices and our confidence in the delivered software.

It is also beneficial to create the test plans and test scripts in association with the vendor's implementation team. This lets the vendor know our test approach and main focus, and at times the vendor may even execute our test scripts before delivery so that we receive better-quality software. Based on our test approach, they can also guide us in deciding what the test scope should be. Since vendors have knowledge of and expertise in their own applications, their input, if vetted correctly, can be very valuable.

Conclusion
The point I have tried to stress is not to leave things out of testing, but to reprioritize on the basis of what is most important to the quality of the implementation. These types of projects have been around for a long time, yet their quality issues are not addressed very often. The idea is to get equal or greater value from the software application than the amount our organization invested in it. Therefore, it is best to work closely with the vendors to achieve quality and a successful implementation.

 
