During its lifetime, a test tool will need to be upgraded, maintained, or customized. These upgrades and maintenance must be carefully planned and managed to mitigate the risk of disrupting end users. This article offers a framework to help organizations cope with the changes a test tool experiences over its lifetime.
Test tools fall into various categories. Test Management tools handle version control, defect management, storage for test scripts, test planning artifacts, test execution, and test results. Record/Playback tools capture user interactions and play them back. Capacity Testing tools generate system load and traffic. These test tools have different user populations.
Test Management tools typically have a large end user population: developers, business analysts, and testers all have access to the tool. In contrast, Record/Playback tools and Capacity Testing tools have a much smaller end user population. The test manager should notify end users whenever a test tool will be brought down for maintenance or an upgrade. There are many reasons for upgrading, maintaining, and customizing test tools. A few of those reasons are:
- Tool needs to have some patches installed.
- Tool needs to be customized to support a new software release or testing effort.
- Vendor stops supporting an older test tool version or might phase out a test tool.
- Tool's back-end requires database maintenance (e.g., archiving older records or a database upgrade).
- Vendor goes out of business due to financial problems, leaving the test tool without technical support.
- Vendor has completely upgraded and revamped the test tool to include new functionality or to fix previous bugs.
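The back-end maintenance case above can be made concrete with a small sketch: archiving defect records older than a cutoff date into a separate table. The `defects` table name, its columns, and the SQLite back-end here are all hypothetical; a real test management tool's repository schema will differ, and its vendor's supported maintenance procedure should be used instead.

```python
# Illustrative sketch of routine back-end maintenance: moving old defect
# records into an archive table. Schema and table names are hypothetical.
import sqlite3
from datetime import date, timedelta

def archive_old_defects(conn, days_old=365):
    """Move defects older than the cutoff into an archive table."""
    cutoff = (date.today() - timedelta(days=days_old)).isoformat()
    cur = conn.cursor()
    # Create an empty archive table with the same columns, if absent
    cur.execute("CREATE TABLE IF NOT EXISTS defects_archive AS "
                "SELECT * FROM defects WHERE 0")
    cur.execute("INSERT INTO defects_archive "
                "SELECT * FROM defects WHERE logged_on < ?", (cutoff,))
    moved = cur.rowcount
    cur.execute("DELETE FROM defects WHERE logged_on < ?", (cutoff,))
    conn.commit()
    return moved

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE defects (id INTEGER, summary TEXT, logged_on TEXT)")
conn.executemany("INSERT INTO defects VALUES (?, ?, ?)",
                 [(1, "old defect", "2001-05-01"),
                  (2, "recent defect", date.today().isoformat())])
moved = archive_old_defects(conn, days_old=365)
print(moved)  # 1 row moved to the archive
```

Even a maintenance task this small implies down time and a backup beforehand, which is why the article treats maintenance with the same planning rigor as upgrades.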
The test manager should document the risks and rewards associated with changing a test tool, and make changes only when the risk to the end users and the testing efforts is minimal. An organization should not upgrade a test tool merely because the vendor has released a newer version; it should initiate a tool change only when compelling business needs drive it. One legitimate reason for upgrading would be that the previous version of the test tool does not recognize the project's custom controls during recording, while the new version does.
Test Script Maintenance
The test manager should review with the vendor what will happen to previously recorded test scripts if a test tool is upgraded to a newer version or patches are installed. If a test tool is upgraded from version 1 to version 2, will the previously automated test scripts also need to be upgraded or re-recorded? Are the automated test scripts capable of playing back on the newly upgraded test tool? What if the automated test scripts require immediate execution and there is no workaround other than re-recording them because the test tool has been upgraded?
These are issues that require careful thought, especially for companies with libraries of recorded test scripts from various testing releases. The test manager needs to consider the existing testing phases and assess whether upgrading the test tools is feasible. If all the test scripts would need to be reconstructed or modified for the newly upgraded test tool, it might be wise to postpone the upgrade until the project has more resources and no imminent deadlines. If the upgrade does not affect previously recorded test scripts but offers new enhancements and features, it might be practical to move to the latest version of the test tool.
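One way to ground that feasibility assessment is a trial-playback harness: replay a sample of the existing script library against the upgraded tool in a staging environment and count how many still run. The sketch below is illustrative; the `run_script` callable and the script names stand in for whatever playback mechanism and library the real tool provides.

```python
# Hedged sketch: size up an upgrade by trial-playing existing scripts
# against the new tool version and separating survivors from casualties.
def assess_upgrade(scripts, run_script):
    """Return (compatible, broken) lists after trial playback of each script."""
    compatible, broken = [], []
    for name in scripts:
        try:
            run_script(name)          # playback hook for the upgraded tool
            compatible.append(name)
        except Exception:
            broken.append(name)       # needs re-recording or modification
    return compatible, broken

# Simulated playback: pretend scripts touching custom controls fail on v2
def fake_playback(name):
    if "custom_control" in name:
        raise RuntimeError("object not recognized after upgrade")

ok, bad = assess_upgrade(
    ["login_flow", "custom_control_grid", "checkout"], fake_playback)
print(f"{len(ok)} of {len(ok) + len(bad)} scripts still play back")
```

The ratio of broken to compatible scripts gives the test manager a concrete number to weigh against release deadlines before committing to the upgrade.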
A test tool upgrade might require a transition phase for the end users and the reconstruction of customizations to the test tool. As an example, I once upgraded a test management tool from a fat client installation to a thin client installation running as a Web-based solution, requiring the use of a browser to launch the test management tool.
Although the newly installed thin client version had the same functionality as the fat client version, the end users now had to adjust to working with the software's Web-based browser interface instead of the GUI client interface of the older version. Since the end users had to adapt to a new interface, I had to create customized training materials teaching them how to work with the new interface of the test management tool.
The training materials explained how the previous tool's features would be displayed and accessed in the new interface, how to remove the older software from the client machine, and how to remove any database connections the older software had used that were no longer necessary now that the new software was completely Web based.
One might also have to assess the impact of upgrading a test tool given its customizations. For instance, when upgrading a test management tool's repository from one software version to another, it is conceivable that the tool's project-specific customizations will not be carried over to the new version. Under this scenario, the test tool administrator would have to re-establish the customizations in the new version of the test tool.
Some test tool vendors release new versions of a test tool that are substantially different from the previous versions. I have also seen vendors completely phase out a test tool and replace it with a revamped one whose programming language, interface, and features differ from those offered in the previous tool.
Under these scenarios the test manager might have to retrain the testers to help them adapt to the new test tool, or hire new contractors to help the project transition to it. At any rate, a new version of a test tool that differs substantially from the previous version could force the test manager to allocate new training dollars for the test team, and coming up with those dollars might not be viable for every organization.
Any upgrades, customizations, and possibly maintenance on the test tools will require bringing down the test tool. Key questions to ask are: How long will the application be down? Who will be impacted while the test tool is down and inaccessible?
The test manager should build a contingency plan in the event that the software cannot be upgraded successfully within the specified down time. If the test management tool cannot be upgraded successfully, the testers might have to construct test cases in a word processor, log defects in a spreadsheet, or test a business process manually as opposed to doing so with an automated test script.
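The spreadsheet fallback for defect logging can be as simple as a CSV file that testers append to during the outage and import into the tool afterward. The column names below are illustrative, not any tool's actual import schema.

```python
# Minimal contingency sketch: capture defects in a spreadsheet-friendly
# CSV file while the test management tool is down. Columns are hypothetical.
import csv
import io
from datetime import date

FIELDS = ["id", "date", "severity", "summary", "reported_by"]

def log_defect(stream, defect):
    """Append one defect row; write the header first if the stream is empty."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    if stream.tell() == 0:
        writer.writeheader()
    writer.writerow(defect)

# Demo with an in-memory stream (a real run would open a .csv file
# with newline="" and mode "a")
buf = io.StringIO()
log_defect(buf, {"id": 1, "date": date.today().isoformat(),
                 "severity": "high", "summary": "Save button disabled",
                 "reported_by": "tester1"})
print(buf.getvalue().splitlines()[0])  # prints the header row
```

Keeping the fallback's columns aligned with the tool's fields makes the post-outage import into the test management tool largely mechanical.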
The test manager should coordinate with the tool vendor's support desk when bringing down an application for an upgrade or installation, and should enlist the vendor's help if the project's resources cannot upgrade the test tool within the down-time window.
Customization and Changes
Test tools are customized to meet a project's business-specific testing needs. A project might need to customize a test management tool to include new fields, modify the defect workflow, or link it to an email server. Customizing a test tool requires verifying that the new customizations have not adversely affected the tool's existing, previously working functionality, and that the end users understand how to interact effectively with the new functionality.
The new customizations to a test tool might also have a cascading effect on the training documentation. Some organizations create customized training materials for using the customized test tool, which could differ from the vendor's generic training materials. These customized training materials might need revision after the test tool undergoes a new series of customizations.
An organization undergoing test tool changes will need either a dedicated resource within the project or consultants hired from the vendor. Even if a tool administrator is available to the project, he or she might not have enough knowledge to upgrade the test tool or install a new version from scratch.
Other test tool changes require support from various personnel, so the project should identify its current resources and their availability before initiating a tool change. For instance, an upgrade to a test management tool might necessitate an upgrade to its underlying database back-end; thus, support from the DBA and the server administrator will be needed in addition to a tool administrator with knowledge of the test tool.
Test tool maintenance might be needed to install a new patch for the test tool. Vendors release patches for a variety of reasons, such as bugs and defects in the test tool, a test tool's inability to recognize a particular object during recording, or the addition of new functionality. Patches, no matter how innocuous or trivial they seem, need to be reviewed and assessed before they are installed.
Installing a patch might require that the tool be brought down for a prolonged period and might take time away from other testing efforts. Additionally, some vendors do not fully consider how a patch to one of their software solutions will affect how that software interacts with other solutions from the same vendor, which creates technical problems.
For example, in one of my projects we had a test management solution and a load-testing solution from the same vendor, and the two solutions interacted with each other. I applied a patch to the test management tool to fix a problem I had generating reports.
After I applied this patch, the load-testing solution could no longer communicate with the test management solution, and I could no longer save load-testing results into the test management tool. Before installing any patches, the test administrator should consult with the vendor to ascertain whether the tool's functionality or its interaction with other test tools would be affected.
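That vendor consultation can be formalized as a compatibility gate: before applying a patch, check the patched version against the vendor's stated support matrix for the companion tools, rather than discovering a break after installation. The version numbers and matrix below are entirely made up for illustration.

```python
# Hedged sketch of a pre-patch compatibility gate. The matrix is a
# hypothetical stand-in for the vendor's published support information.
COMPAT = {
    # test management version -> load-testing versions it works with
    "7.0":   {"7.0", "7.1"},
    "7.0.1": {"7.1"},  # assume this patch drops support for load tool 7.0
}

def patch_is_safe(new_tm_version, load_tool_version):
    """Return True if the vendor matrix lists the pairing as supported."""
    return load_tool_version in COMPAT.get(new_tm_version, set())

print(patch_is_safe("7.0.1", "7.0"))  # the patch would break this pairing
```

Had such a check been run against the vendor's matrix in the scenario above, the broken integration between the two solutions would have surfaced before the patch reached the live tool.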