The Evolution of Test Automation, from Record and Playback to Object Mapping

In a culture of shorter time to market and release-ready sprints, it is vital for QA to keep pace by adopting test automation practices and tools. This article traces the shift from script-based testing with hard-coded data to automated frameworks, exploring the beginnings of test automation and its evolution to where we are today, and possibly to where we are headed.

Today’s software market is totally consumer-driven, and to stay relevant, your product has to be in a constant release-ready state. How do you ensure this in the face of ever-changing consumer tastes?

Test automation has played a large role by enabling testers to focus more on maintaining the test plans and ensuring comprehensive test coverage.

As I look back on my experience in the software QA industry, it is interesting to mark the milestones in the journey of test automation, from the early days of record and playback to the current UI object-mapping techniques.

Record and Playback

Record and playback has long been a major feature of most test automation tools, and it still is today. The software records and logs your manual testing, including every mouse movement, keystroke, and screenshot, and lets you replay them later. It relieves the tedium of manual testing, especially when you have to perform regression tests, but it can be limited in scope.

In my early days as a tester, while working on testing solutions for a pharmaceutical company in California, I used a record-playback tool that saved time because I could record the UI actions and replay them whenever I needed to retest. Record and playback was a new technique then, and I was suitably impressed.

But, with time, some limitations began to surface. Because the test run was a recorded script, it was not programmed to stop when faced with something like a failed page load. It would keep running against an erroneous UI object, and this is where the test automation plan would start to go awry.

Many record and playback tools did offer the ability to tweak the recorded test scripts, so I could make the necessary changes based on the previous test run. Understandably, however, this was effective only for smaller web applications. As more tests are recorded, the test script grows longer and harder to maintain, even more so for larger applications. And because the test data is hard-coded into the test scripts, record and playback is quite an inflexible method.
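To illustrate that rigidity, here is a minimal sketch in plain Python of what a recorded script with hard-coded data effectively boils down to. The form fields, credentials, and `submit_login_form` stand-in are hypothetical, not from any specific tool:

```python
def submit_login_form(form_fields: dict) -> str:
    """Stand-in for the application under test: accepts one known user."""
    if form_fields.get("username") == "jdoe" and form_fields.get("password") == "s3cret":
        return "Welcome, jdoe"
    return "Login failed"

def recorded_login_test() -> bool:
    # Hard-coded test data, exactly as a record-and-playback tool captured it.
    result = submit_login_form({"username": "jdoe", "password": "s3cret"})
    # Hard-coded expected text: any copy change on the page breaks the test.
    return result == "Welcome, jdoe"

# Testing a second user or a changed page means recording a whole new script.
print(recorded_login_test())
```

Every value lives inside the test itself, so covering another user scenario means duplicating and re-recording the script rather than reusing it.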

When I moved on to projects for enterprise web apps, I realized the need for a tool that could enable the automation test code to be reused so that it could keep pace with the UI changes.

The Data-Driven Approach

Because hard-coded test data was a major contributor to the rigidity of record and playback tools, separating this data from the code was seen as a viable route to greater test coverage. This could be achieved with placeholders in the code: you could add or delete data and your test scripts would not be affected. This is referred to as the data-driven approach.
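A minimal sketch of that idea in plain Python (the field names, data rows, and `submit_login_form` stand-in are hypothetical): the test logic is written once against placeholders, and the data lives in a separate source that can grow or shrink without touching the script.

```python
import csv
import io

# External data source (shown inline for brevity; in practice a CSV or spreadsheet file).
TEST_DATA = """username,password,expected
jdoe,s3cret,Welcome jdoe
admin,wrongpw,Login failed
guest,guest123,Welcome guest
"""

def submit_login_form(username: str, password: str) -> str:
    """Stand-in for the application under test."""
    valid = {"jdoe": "s3cret", "guest": "guest123"}
    if valid.get(username) == password:
        return f"Welcome {username}"
    return "Login failed"

def run_data_driven_tests(data_source) -> list:
    """One generic script; each data row fills the placeholders for one run."""
    results = []
    for row in csv.DictReader(data_source):
        actual = submit_login_form(row["username"], row["password"])
        results.append((row["username"], actual == row["expected"]))
    return results

results = run_data_driven_tests(io.StringIO(TEST_DATA))
print(results)
```

Adding a new user scenario now means adding one row to the data source, not editing or re-recording the test script.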

But, again, it was not so simple. The logic for simulating user actions while picking the relevant data from the source file in the correct sequence still had to be defined in the test scripts. This made the code application-specific and, consequently, rigid.

So much attention went to manually sequencing the actions and navigation steps correctly that it diluted the focus on testing the application's functionality as a whole. The test script would have to be made application-agnostic.
