Jose Fajardo delineates techniques for building more maintainable and robust automated test scripts. The author provides valuable insights for testers who work with automated test tools and are building a repository of automated test scripts for future testing efforts, offering numerous suggestions for documenting, debugging, peer reviewing, and synchronizing test scripts.
Debug Scripts Incrementally
Recorded test scripts, like other software development artifacts, can become quite large. Hundreds of lines of code may need debugging before a script plays back successfully, especially for parameterized, data-driven test scripts that run against several sets of data. The common approach to debugging a test script is to first record all the business processes and requirements, then have the tester play back the test script to identify and correct any problems. The tester continues to debug the test script until it successfully plays back with a single set of data and/or multiple sets of data.
Debugging and troubleshooting test scripts becomes extremely tedious and intractable when the test script has hundreds of lines of code, verification points, branching logic, error handling, parameters, and data correlation among various recorded business processes. A much more manageable approach to debugging complex and lengthy test scripts is to record portions of the script and debug these portions individually before recording other parts of the test script. After testing individual portions, you can determine how one portion of the test script works with another and how data flows from one recorded process to the next. After all sections of a test script have been recorded, one can play back the entire test script and ensure that it plays back properly from beginning to end with one or more sets of data.
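The incremental approach can be sketched in outline. The section names and `verify_*` helpers below are hypothetical, not part of any particular test tool's API; each recorded section is modeled as a function that takes the shared test data, returns updated data, and raises on playback failure:

```python
def check_inventory(data):
    """Stand-in for one recorded section; a real one would drive the app."""
    data["stock"] = 100
    return data

def replenish_inventory(data):
    """A later section that depends on data produced by an earlier one."""
    if "stock" not in data:
        raise RuntimeError("check_inventory must play back first")
    data["stock"] += 50
    return data

def verify_section(section, data):
    """Debug one recorded section in isolation before recording the next."""
    return section(dict(data))  # copy: no hidden state leaks between runs

def verify_full_script(sections, data):
    """After each section passes on its own, chain them into the full script."""
    for section in sections:
        data = section(data)  # output of one section feeds the next
    return data
```

Debugging `check_inventory` on its own first means that when the chained run later fails, the fault is likely in the newly added section or in the data handed between sections, not in code already proven to work.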
As an example, I recorded and automated a complex test script that performed the following business processes:
1. Check the inventory in the warehouse,
2. Carry out an MRP run,
3. Replenish inventory,
4. Pick items for a delivery and process the delivery,
5. Confirm the orders were transferred for the delivery, and
6. Verify the delivery items arrived at their destination.
This test script had many lines of code, parameters, verification points, and data correlation that needed to work as a cohesive unit. First I recorded each individual process and verified that it played back successfully on its own. Then I integrated all the recorded processes into one large test script and verified that it played back successfully with multiple sets of data. As previously stated, a key objective is to ensure that each recorded process plays back successfully before recording the remaining portions of the test script. I did not record all the processes mentioned (1 through 6) and string them together for playback without first verifying that each process could play back successfully on its own.
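Playing the integrated script back against multiple sets of data can be sketched as a simple driver loop. The structure below is illustrative; real test tools supply their own data-driven mechanisms, and each "section" stands in for one recorded process that raises on playback failure:

```python
def run_data_driven(sections, data_sets):
    """Play the integrated script once per data set, recording pass/fail."""
    outcomes = []
    for data in data_sets:
        try:
            for section in sections:
                data = section(data)  # output of one section feeds the next
            outcomes.append("pass")
        except Exception as exc:
            outcomes.append(f"fail: {exc!r}")
    return outcomes
```

A per-data-set outcome list makes it obvious whether a failure is tied to one particular set of data or to the script itself.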
The lesson here is to avoid waiting to debug the script until the entire test script is recorded.
Test Script Synchronization
Test tools can play back recorded test scripts at rates much faster than an end user's manual keystrokes. Consequently, playback can overwhelm the application under test, since the application might not display data or retrieve values from the database fast enough for the test script to proceed. When the application under test cannot respond to the test script, script execution can terminate abruptly, requiring user intervention. To synchronize the application under test with the test script during playback, the testing team introduces artificial wait times within the recorded test scripts. Wait times embedded in the test script to slow down its execution are at best arbitrary and are estimated through trial and error. The main problem with fixed wait times is that they either wait too long or not long enough.
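One common alternative to a fixed, arbitrary wait is to poll for the condition the script is actually waiting on. The helper below is a minimal sketch, not any specific tool's API; the `condition` callable is assumed to check application readiness (for example, that a control is visible or a database value exists):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns truthy or `timeout` seconds pass.

    Unlike a fixed sleep, this returns as soon as the application is
    ready, and fails loudly if the application never becomes ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)  # brief pause between readiness checks
    raise TimeoutError("application did not become ready in time")
```

Because the wait ends the moment the condition holds, the script never waits too long; because of the timeout, it never silently waits forever when the application hangs.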
For instance, the tester might notice the test script is playing back too fast for the application under test, and might slow it down repeatedly until script execution is synchronized with the application. This technique can backfire--even fail--if the application