What Not to Test When It's Not Your Code

[article]
Summary:

This article continues a previous write-up, "What to Test When It's Not Your Code." As mentioned there, test strategies must be radically different and flexible when testing code delivered by a vendor external to the organization. Similarly, the rationale for deciding what does not need to be tested, or what receives the lowest testing priority, in an external software product should differ radically from the rationale applied to in-house software products. The difference stems largely from the risk the third-party application poses to the organization's daily operations. The credibility of the vendor can also play a major role in deciding what takes lower priority in testing.

In almost all software projects, the test team poses a valid question: What must be tested, and what can be left out? The ideal answer is that everything should be tested, but achieving that is unrealistic. Most of the time, prioritization is necessary and plays a very important role in achieving maximum test execution before the project goes live in production. Test analysts should make these decisions based on their knowledge, their experience, and the various testing techniques and tools available to achieve the desired results.

But when deciding what will not be tested in a vendor-supplied application, extreme caution must be exercised over and above a test analyst's knowledge and experience. Management support is imperative, and due diligence is required to make this decision. In this article, I have outlined some points to take into consideration.

Insight into the Quality of the Vendor's Product

Most vendors are selected through a tender or bidding process. In selecting a vendor, a few things apart from cost should be considered, such as:

  • How many customers does the vendor have?
  • Is the vendor willing to provide a reference site to verify successful implementation of their software?
  • Is the vendor ISO or CMM certified?
  • If the vendor is not quality certified, are they willing to demonstrate their project processes to the customers?

From the above points, the project sponsor and the steering committee can get some idea of the risks that poor quality in the vendor's software poses to the implementation. In most cases, overconfidence in that quality leads to poor test planning. This can blow the project schedule and budget out of proportion and chew up all the contingencies; it can even lead to the extreme decision of canceling the implementation.

Getting an Idea of the Standard Software versus the Customizations
Software packages bought from a vendor rarely address all the business requirements of the organization completely; varying degrees of customization are involved. If the core product has proven quality in terms of stability and functionality, the focus of testing should be on the changes introduced by the customizations. In most cases, the core product has become stable over time. If a new interface is introduced to the core application, it should be tested as part of the customization to make sure the display and the values are correct. Test cases can be designed for maximum coverage of the business requirements addressed by the customizations. When the application is tested end to end, these scripts can also be designed to touch the core of the application. This ensures that an appropriate amount of testing is done on the product as it is implemented.
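The customization-first ordering described above can be sketched as a simple risk-based sort. This is only an illustration; the module names, risk scores, and field names below are hypothetical and not part of the article.

```python
# Hypothetical sketch: rank candidate test cases so that those touching
# customized modules run before tests of the proven core product, and
# higher-risk tests run before lower-risk ones within each group.

def prioritize(test_cases):
    """Sort test cases: customization tests first, highest risk first."""
    return sorted(
        test_cases,
        # False sorts before True, so negate the customization flag;
        # negate risk so larger risk scores come first.
        key=lambda tc: (not tc["touches_customization"], -tc["risk"]),
    )

cases = [
    {"name": "core smoke test", "touches_customization": False, "risk": 2},
    {"name": "custom interface display", "touches_customization": True, "risk": 5},
    {"name": "custom report totals", "touches_customization": True, "risk": 3},
]

for tc in prioritize(cases):
    print(tc["name"])
```

Running the sketch lists the two customization tests (highest risk first) ahead of the core smoke test, mirroring the priority argued for in the text.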

Levels of Testing Required
Several levels of testing can be done on a project. An ideal project includes a requirements review, an architectural review, design reviews, unit testing, system and integration testing, and user acceptance testing. For the category of projects we are discussing here, all of these verification processes still add value, but the focus on each is not the same as in an in-house software development project.

For example, there should be an extensive requirements review. The architectural review will be limited in scope and will have to focus more on the customized part rather than on the core application because, in most cases, the core architecture is pretty much fixed.

About the author

Ipsita Chatterjee

Ipsita Chatterjee works as a senior test analyst at the Australian Stock Exchange in Sydney. She has worked in testing, quality assurance, and implementing best practices at several software companies for about eight years, and has experience implementing and monitoring ISO 9000:2001, TickIT standards, and CMM. Ipsita is a certified test engineer from the Information Systems Examinations Board of the British Computer Society in the UK and is currently pursuing the test practitioner's certificate.

