Testing is an essential but time-consuming part of software development. Manual testing is often required, but most software groups try to minimize it. Test automation pays off in most cases, especially when the software is expected to have many releases. Automated testing is more reliable and repeatable than manual testing, and it is less time-consuming, less tedious, and less expensive. Because of these advantages, companies are spending more money and employee time on test automation.
After a software group decides to automate its tests, it must choose between buying a test tool and developing one in-house.
Our test system consists of a set of main scripts that do general setup and then call test drivers. Each test has its own driver script, which does test-specific setup, runs the test, and determines whether the test has passed or failed. The main script then gathers information from the test drivers and collects, reports, and analyzes the results. The main scripts can also distribute tests among machines and handle hung tests. This system supports a large number of independent tests, each with its own flow and pass/fail criteria. If some tests temporarily don't work, it is easy to turn them off. If many tests share the same flow and pass/fail criteria, a standard test driver can be called by the test-specific drivers to simplify development of the test suite.
I have used this test system for several years, and it has demonstrated significant advantages over other systems. The strategy has worked well for us, and I believe it can work in other environments as well.
Buying a Tool
Buying a tool requires researching the tools available on the market, and it is important to have realistic expectations: many tools are unable to live up to a designer's expectations. "Record and playback" tools are a typical example. The formula "just install it, exercise the software, and the tool will record everything you do and give you test cases" never works. Choosing the right tool and learning how to use it in your environment takes time.
Developing a Tool In-House
If a software group decides to develop a tool in-house, it needs to treat this development as a project: write a specification for it, budget time to develop it, document it, test it, let everybody in the group know how to use the tool, and encourage the developers (not only QA) to use it. This is difficult, considering that the test system is an internal project: it doesn't go to customers and therefore doesn't bring an immediate reward.
No matter how a company chooses to test software, it should be prepared to spend money on testing. What follows is a description of a test system developed internally to test a synthesis compiler. It worked well for us and may be used for testing other software products.
Description of Test Tool
Suppose you have a number of independent tests, some positive and some negative, that must be run on each build of your product. You expect the product you are testing to have many more releases, so you expect to add more tests to cover new features. You want a system that can work for you now and for the life of the tested product. Your automation system should be reliable, repeatable, maintainable, easy to use, and very flexible. It should allow you to add tests and new test scenarios with minimum effort and headache. It should also allow you to turn some tests off. We all know how annoying