Today’s software market is totally consumer-driven, and to stay relevant, your product has to be in a constant release-ready state. How do you ensure this in the face of ever-changing consumer tastes?
Test automation has played a large role by enabling testers to focus more on maintaining the test plans and ensuring comprehensive test coverage.
As I look back on my experience in the software QA industry, it is interesting to mark the milestones in the journey of test automation, from the early days of record and playback to the current UI object-mapping techniques.
Record and Playback
The record and playback feature has been a staple of most test automation tools, and it still is today. The software records and logs your manual testing, including every mouse movement, keystroke, and screenshot, and lets you replay the session later. It relieves the tedium of manual testing, especially when you have to perform regression tests, but it can be limited in its scope.
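To make the idea concrete, here is a minimal sketch of how a recorded session might be replayed. Everything here is hypothetical (the event names, the `FakeUI` stand-in); no real tool's format is assumed:

```python
# A hypothetical recorded session: the tool logs each raw UI event.
# (The event names and fields here are illustrative, not from any real tool.)
recorded_events = [
    {"action": "click", "target": "username_box"},
    {"action": "type",  "target": "username_box", "text": "jdoe"},
    {"action": "click", "target": "login_button"},
]

class FakeUI:
    """Stand-in for the application under test."""
    def __init__(self):
        self.events = []
    def perform(self, event):
        self.events.append((event["action"], event["target"]))

def replay(events, ui):
    """Blindly replay each event in the order it was recorded.

    Note there is no check that the page actually loaded or that the
    target still exists; that is the brittleness described below.
    """
    for event in events:
        ui.perform(event)  # no verification of the UI state before acting

ui = FakeUI()
replay(recorded_events, ui)
print(len(ui.events))  # all 3 events fire, whether or not the page is ready
```

The replay loop simply fires events in sequence, which is exactly why a failed page load derails it.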
In my initial days as a tester, while working on testing solutions for a pharmaceutical company in California, I was using a record-playback tool that saved time because I was able to record the UI actions and replay them to test when needed. Record and playback was a new technique, and I was suitably impressed.
But, with time, some limitations began to surface. Because the test run was a recorded script, it was not programmed to stop when faced with, say, a sudden failed page load. It would keep running, acting on an erroneous UI object, and this is where the test automation plan would start to go awry.
Many record and playback tools did offer the ability to tweak the recorded test scripts so that I could make the necessary changes based on the previous test run. Understandably, however, this was effective for smaller web applications only. As tests are recorded, the test script is liable to become longer and more difficult to maintain, and even more so for larger applications. Not to mention that hard-coding the test data into the test scripts makes record and playback quite an inflexible method.
When I moved on to projects for enterprise web apps, I realized the need for a tool that could enable the automation test code to be reused so that it could keep pace with the UI changes.
The Data-Driven Approach
Because hard-coded test data was a major contributing factor to the rigidity of test automation with record and playback tools, separating this data from the code was thought to be a viable solution for greater test coverage. This could be achieved with the use of placeholders in the code. You could add or delete data and your test scripts would not be affected. This is referred to as the data-driven approach.
But, again, this was not so simple. The act of simulating user actions while picking the relevant data from the source file in the correct sequence still needed to be defined in the test scripts. This made the code application-specific and, consequently, rigid.
So much attention went to manually sequencing the actions and navigations correctly that it diluted the focus on testing the application's functionality as a whole. The test script would have to be made application-agnostic.
A test automation framework should be able to abstract away most of the actions and navigations from the test code. For instance, a login page will have user actions such as filling the text boxes for username and password, navigating to the login button, and then the click action. These actions are pretty routine for most applications. The test code for these steps can be maintained in the automation framework—that is, abstracted away from the test script and reused when needed. This makes the test code flexible, light and easily maintainable. It should allow for error-handling, logging, and system recovery: You need your test suite to run to completion, even in cases of tests failing on the way.
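The two framework properties described above, reusable steps and run-to-completion error handling, can be sketched in a few lines. All names here (`FakeUI`, `login`, `run_suite`) are hypothetical stand-ins, not any particular framework's API:

```python
class FakeUI:
    """Stand-in for the browser/driver layer."""
    def __init__(self):
        self.actions = []
    def fill(self, field, value):
        self.actions.append(("fill", field, value))
    def click(self, control):
        self.actions.append(("click", control))

def login(ui, username, password):
    """Reusable framework step: fill the credentials, then click Login."""
    ui.fill("username", username)
    ui.fill("password", password)
    ui.click("login_button")

def run_suite(tests):
    """Run every test to completion, logging failures instead of aborting."""
    log = []
    for name, test in tests:
        try:
            test()
            log.append((name, "PASS"))
        except Exception as exc:   # record the failure and keep going
            log.append((name, f"FAIL: {exc}"))
    return log

ui = FakeUI()

def login_test():
    login(ui, "jdoe", "secret")

def failing_test():
    raise AssertionError("element not found")

results = run_suite([("login", login_test),
                     ("broken", failing_test),
                     ("after_failure", login_test)])
print([status for _, status in results])
```

The third test still runs after the second one fails, which is the run-to-completion behavior the framework must guarantee.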
In the effort to make the test code universally applicable, it was populated with keywords that could “point” to the corresponding scripts during a test run. The test data would be passed to the application under test (AUT), and the actions would be performed as per the script. This allowed testers to decide the sequence of actions via the keywords, so they could now test the functionality of the application as a whole. If you remember, this was one of the major drawbacks of the simple data-driven approach.
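The keyword-driven idea can be sketched as a table of keywords dispatched by a small engine. The keywords, handlers, and test table below are invented for illustration:

```python
# Keyword-driven sketch: the test is a table of keywords plus data, and a
# small engine dispatches each keyword to its script (names illustrative).

def do_open(ui, page):        ui.append(("open", page))
def do_type(ui, field, text): ui.append(("type", field, text))
def do_click(ui, control):    ui.append(("click", control))

KEYWORDS = {"open": do_open, "type": do_type, "click": do_click}

# The tester controls the sequence here, without touching the scripts.
test_table = [
    ("open",  "login_page"),
    ("type",  "username", "jdoe"),
    ("type",  "password", "secret"),
    ("click", "login_button"),
]

ui_log = []
for keyword, *args in test_table:
    KEYWORDS[keyword](ui_log, *args)   # "point" the keyword at its script
print(len(ui_log))
```

Reordering the rows of `test_table` reorders the test, which is exactly the control over sequencing that the plain data-driven approach lacked.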
Still, there was a certain inflexibility to the test scripts. The test code remained too tightly bound to the AUT. This meant that any changes to the AUT would mean changes to the test scripts, making their maintenance a cumbersome task.
What was needed was another layer that would abstract the test code away from the application code. This added layer came from wrapper functions, which were placed in the test code in the desired sequence specific to the AUT.
Wrapper functions differed from the pre-existing functions in the application code they replaced because they were the tester’s tool: They included error-checking code for the functions they wrapped. Now, instead of the underlying function being called repeatedly, its wrapper would be called each time, with different parameters in turn, to check for all possible paths.
For instance, the click action is used differently on separate webpages—it might be for clicking on the Login button or on the Save button. So, using a wrapper function on top of the regular click() function would make it reusable, with different parameters for Login or Save being passed in turn.
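A rough sketch of such a wrapper, with invented names throughout (`raw_click` stands in for the tool's low-level click):

```python
# Hypothetical wrapper around a raw click(): it adds the error checking
# described above and takes the target as a parameter (Login, Save, ...).

class ElementNotFound(Exception):
    pass

def raw_click(ui, control):
    """Stand-in for the low-level click provided by the tool."""
    if control not in ui:
        raise ElementNotFound(control)
    ui[control] += 1

def click(ui, control, log):
    """Wrapper: same action, plus error checking and a result log."""
    try:
        raw_click(ui, control)
        log.append((control, "ok"))
        return True
    except ElementNotFound:
        log.append((control, "missing"))
        return False

ui = {"login_button": 0, "save_button": 0}
log = []
# One wrapper, reused with different parameters in turn:
for control in ["login_button", "save_button", "delete_button"]:
    click(ui, control, log)
print(log)
```

The missing `delete_button` is caught and logged rather than crashing the run, and the same wrapper serves every button.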
And as for maintainability, because wrapper functions were not implementation-specific, they could be reused across applications.
The UI Object-Mapping Framework
Another approach concurrent with the introduction of control wrappers was the use of UI object mapping. The UI elements, such as the Login page, are treated as an object class, and the identifier for this object class is stored in the object map.
The test script shows the logical name for the object. So, when the automation engine of the test framework reads the test data (login and password), it picks up the logical name of the object (e.g., the Login page), zeroes in on the class identifier from the object map, calls the wrapper function for the object class “Login page,” and performs the required action.
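The lookup described above can be sketched as a dictionary from logical names to concrete identifiers. The map entries, selectors, and wrapper names below are all hypothetical:

```python
# UI object-mapping sketch: the test script uses logical names only; the
# object map resolves each logical name to a concrete identifier, and the
# engine calls the matching wrapper (all names here are illustrative).

OBJECT_MAP = {
    "login_page.username": "input#user-name",
    "login_page.password": "input#pass-word",
    "login_page.submit":   "button.login",
}

def set_text(identifier, value, log):
    log.append(("set_text", identifier, value))

def click(identifier, log):
    log.append(("click", identifier))

def run_step(logical_name, action, log, value=None):
    identifier = OBJECT_MAP[logical_name]   # zero in on the identifier
    if action == "type":
        set_text(identifier, value, log)
    elif action == "click":
        click(identifier, log)

log = []
run_step("login_page.username", "type", log, "jdoe")
run_step("login_page.password", "type", log, "secret")
run_step("login_page.submit", "click", log)
print(log[-1])
```

If the UI changes, only the entries in `OBJECT_MAP` change; the test script keeps using the same logical names.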
The only thing that needs to be maintained is the wrapper functions, leaving the test code untouched.
This object-oriented approach allows test specialists to define the sequence of actions and gives them more control over the test plan.
With the advent of web applications, the requirements changed completely. There were two parts to a web application. The first was data integration using web services: so many objects, libraries, data sources, browsers, platforms, and devices, not to mention Internet pathways, needed to be connected. The other aspect was the rendering of the UI.
In my projects, I had tools to test the objects behind the UI, but not the look and feel of the UI. I plugged in a few libraries from Java that would let me take a screenshot of a particular area of the screen so that I could then compare this with the corresponding UI object.
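The comparison idea can be sketched roughly as follows: hash the captured region and compare it against a stored baseline. This is a simplification (real comparisons usually allow tolerances; an exact hash flags any pixel change), and all the pixel data here is fake:

```python
import hashlib

# Sketch of screenshot comparison: hash the captured region's pixel bytes
# and compare against a baseline captured during a known-good run.

def region_hash(pixels):
    """Fingerprint a region of the screen (pixels given as raw bytes)."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

baseline = region_hash([10, 20, 30, 40])   # from a known-good run
current  = region_hash([10, 20, 30, 40])   # captured during this test run
changed  = region_hash([10, 20, 99, 40])   # a rendering difference

print(current == baseline)   # region unchanged
print(changed == baseline)   # mismatch: flag the region for manual review
```

Note that a mismatch only tells you *something* changed; judging whether the new rendering still looks right is the part that stays manual.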
In my opinion, as far as the server side is concerned, 100 percent automation may seem like an attractive proposition—and a viable one, at that. But, as for the UI, nothing can replace the human eye for testing its look and feel. This is something that cannot be emulated. Manual testing, in these instances, is irreplaceable.