This paper walks you through the development of an actual, first-stage automation process. As a lessons-learned study of how to get from here to there, it explains how to facilitate the formation of an automation team, and why you should. It also explains how this newly formed team developed its processes, action plan, and benefits-derived reporting.
The test department began its introduction to automation approximately four years ago, when two automation tools were compared in an evaluation and the tool that appeared to offer the most value to an as-yet-undefined process was selected.
The test department at that time had approximately 40-45 test analysts, none of whom had ever been exposed to an automation tool. The introduction to automation continued by presenting each test analyst with a copy of the software and the documentation offered by the vendor.
The test analysts were instructed to begin learning and using the product by studying the documentation, working through the available on-line tutorials, and experimenting with automating small segments of test plans.
The result was a large number of capture-replay test scripts. The initial response was excitement: the scripts worked, the product permitted improved regression testing of the applications under test, and the test analysts could perform tasks not amenable to automation while the scripts ran.
More and more test scripts were created using the capture-replay capability of the product; they numbered in the hundreds for a single group of products alone. They did, however, permit an expanded scope of testing and appeared to offer even greater promise.
During this period, market considerations began to impact the appearance and performance of the applications under test. As the user interface for a product or group of products underwent change, massive failures of the capture-replay scripts began.
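This failure mode can be sketched in a few lines. The example below is a hypothetical illustration, not the department's actual tooling: a "recorded" script is bound to the exact UI labels captured at record time, while a maintained mapping from logical actions to current labels survives a UI change with a single table update. All names here (`recorded_script`, `logical_to_label`, the menu labels) are invented for illustration.

```python
# Illustration (assumed, simplified): why capture-replay scripts fail en masse
# after a UI change, while an abstraction layer localizes the repair.

# A "recorded" script: the literal UI labels captured at record time.
recorded_script = ["File", "Open...", "Customer Report", "OK"]

# The application's menu labels after a redesign ("Open..." was renamed).
current_ui = {"File", "Open File...", "Customer Report", "OK"}

def replay(script, ui):
    """Replay a script, failing on the first label the UI no longer offers."""
    for step in script:
        if step not in ui:
            return f"FAIL: '{step}' not found"
    return "PASS"

# A maintained layer maps logical actions to current labels, so a UI change
# means updating one table instead of re-recording hundreds of scripts.
logical_to_label = {
    "file_menu": "File",
    "open_dialog": "Open File...",  # updated once after the redesign
    "report": "Customer Report",
    "confirm": "OK",
}
abstract_script = ["file_menu", "open_dialog", "report", "confirm"]

print(replay(recorded_script, current_ui))
print(replay([logical_to_label[a] for a in abstract_script], current_ui))
```

Every recorded script that touched the renamed control fails the same way, which is why the failures arrived in waves rather than one at a time.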
The test analysts were essentially forced to stop testing until these scripts were repaired or, as was more often the case, the test cases were re-automated using capture-replay.
The initial excitement generated by the apparent successful use of automation was now tempered by the impact of the necessity to repair or replace hundreds of automated test cases. The management team and the test leads began meeting to address the situation.
Addressing the Issues
The initial focus of the discussions was what to do about the problem and how quickly it could be done. Additionally, how could this type of problem be prevented from recurring? The first step was to investigate the training programs offered by the tool vendor. What were the costs involved, and where in the budget would they go? Who should be trained? What would the expectations be after the training was completed?
These and several peripheral issues were discussed, and within a relatively short period approximately fifteen members of the test staff attended a four-day training session: two days on basic use of the tool and two days introducing scripting in the tool's proprietary language.
This of course took time; it was done off-site at two different locations, and the cost was not trivial. The staff returning from the courses were better informed of the tool's capabilities, the methods for accessing those capabilities, and how to begin implementing something other than capture-replay automation of their test cases.
What became apparent rather quickly, as the staff began to apply what they had learned at the training courses and to search for better ways to automate, was that automation takes time even when you are expert at the task. When you are learning how to apply classroom lessons practically while also performing your normal testing duties, it is not just one task that suffers; it became apparent that both tasks suffered.
There were exceptions within