The case for automating the software testing process has been made repeatedly and convincingly by numerous testing professionals. Most people involved in the testing of software will agree that automating the testing process is not only desirable but, given the demands of the current market, a necessity. The purpose of this document is to give the reader a clear understanding of what is actually required to implement cost-effective automated testing with an automated test tool such as WinRunner. It also presents the new approach we followed to reduce the testing workload and finish ahead of schedule, an approach that makes it possible to incorporate new changes to the application in less time.
What is "Automated Testing"?
"Automated testing" means automating the manual testing process currently in use. That process should already include:

- Detailed test cases, including predictable "expected results", developed from Business Functional Specifications and Design documentation
- A standalone Test Environment, including a Test Database that is restorable to a known constant, so that the test cases can be repeated each time modifications are made to the application

If your current testing process does not include the above points, you will never be able to make effective use of an automated test tool.
The real use and purpose of automated test tools is to automate regression testing. This means that you must have, or must develop, a database of detailed, repeatable test cases, and this suite of tests must be run every time there is a change to the application to ensure that the change does not introduce unintended side effects.
An "automated test script" is a program. To be effective, automated script development must be subject to the same rules and standards that are applied to software development. Making effective use of any automated test tool requires at least one technical person trained in that tool.
Cost-Effective Automated Testing
Automated testing is expensive, and it does not replace the need for manual testing. Automated testing is an addition to your testing process: it can take 3 to 10 times as long (or longer) to develop, verify, and document an automated test case than to create and execute a manual test case. This is especially true if you elect to use the "record/playback" feature (found in most test tools) as your primary automated testing methodology. Record/playback is the least cost-effective method of automating test cases.
Automated testing can be made cost-effective, however, if it focuses on:
- A test tool suited to the testing requirements
- Tests that need to be run for every build of the application
- Tests that use multiple data values for the same actions
- Tests that require detailed information from application internals
Avoid using "Record/Playback" as a method of automating testing. This method is fraught with problems, and is the most costly (time consuming) of all methods over the long term. The record/playback feature of the test tool is useful for determining how the tool is trying to process or interact with the application under test, and can give you some ideas about how to develop your test scripts, but beyond that, its usefulness ends quickly. The scripts resulting from this method contain hard-coded values, which must change if anything at all changes in the application.
Adopt a data-driven automated testing methodology. This allows you to develop automated test scripts that are more "generic", requiring only that the input and expected results be updated. There are two useful data-driven methodologies; I will discuss both in detail in this paper.
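The core of the data-driven idea can be sketched as follows. Python stands in here for the test tool's scripting language (WinRunner scripts are written in TSL), and the function and data names are hypothetical: the point is that the script logic stays generic while the test data lives in external rows.

```python
# A minimal data-driven sketch (hypothetical application logic):
# changing or extending the tests means editing data rows, not code.

def verify_discount(order_total, expected_discount):
    """Stand-in for driving the application and checking its output."""
    discount = 0.10 if order_total >= 100 else 0.0
    return discount == expected_discount

# Test data kept separate from the script.
# Columns: order_total, expected_discount.
TEST_ROWS = [
    (50.0, 0.0),
    (100.0, 0.10),
    (250.0, 0.10),
]

results = [verify_discount(total, exp) for total, exp in TEST_ROWS]
```

In practice the rows would come from a spreadsheet or database rather than being embedded in the script, which is what makes the scripts reusable across releases.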
Viable Automated Testing Methodologies
Now that we've eliminated record/playback as a reasonable long-term automated testing strategy, let's discuss some methodologies that we have found effective for automating functional or system testing of most business applications. The two major architectures now widely used in the market are Functional Decomposition and the Totally Data-Driven method. I have listed the advantages and drawbacks of each below. We have combined both architectures into a more effective one that retains the advanced features of the Totally Data-Driven method, with the added strength that scripts are driven by the GUI names; we call it Total Data and GUI Driven.
The main concept behind the "Functional Decomposition" script development methodology is to reduce all test cases to their most fundamental tasks and to write User-Defined Functions and Business Function Scripts that perform these tasks independently of one another. Drawbacks of this method include:
- Maintenance: Each Business Function requires a script. There may be hundreds of Business functions.
- Changes in Test Cases require updates to several sets of input/verification files for each Test Case
- Format of input/verification records must be strictly adhered to or the tests will fail
- Testers must maintain the input/verification records as well as the Test Case documentation
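Despite those maintenance drawbacks, the decomposition idea itself is simple. The sketch below (Python standing in for the tool language; all names hypothetical) shows each fundamental task as an independent, reusable function, with a test-case script reduced to a sequence of calls.

```python
# Functional Decomposition sketch: navigation, business function, and
# verification are separate, independently reusable pieces.

def navigate_to(screen, state):
    # Navigation task: move the (simulated) application to a screen.
    state["screen"] = screen
    return state

def enter_customer(state, name):
    # Business Function script: performs one business task on its own.
    if state.get("screen") != "customer_entry":
        raise RuntimeError("wrong screen for customer entry")
    state["customer"] = name
    return state

def verify_customer(state, expected):
    # Data-verification task.
    return state.get("customer") == expected

# A test-case script composed from the functions above.
state = {}
state = navigate_to("customer_entry", state)
state = enter_customer(state, "ACME Corp")
passed = verify_customer(state, "ACME Corp")
```

The maintenance burden noted above follows directly from this shape: with hundreds of business functions, each needs its own script plus its own input and verification files.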
Totally Data-Driven Method
Testing Activity is broken down into its fundamental actions. Each Testing Action is Associated with a Key Word. Each Key Word is Associated with a Utility Script.
A Spreadsheet can be used for Input to this process:
- Key Words are placed in Column-1
- Parameters or Field/Object names are placed in Column-2
- Data values (input or expected) are placed in Column-3
- Column 4 is used for comments
The driver script performs initialization (if required) and then calls the Controller Script, passing it the Test Case file name. It can also be arranged to account for Test Case dependencies (if Test 1 fails, skip to Test 4, etc.).
The Controller Script calls the Utility Scripts associated with the Key Words to perform the Test Case actions and verifications. Utility Scripts:
- Perform specific Testing tasks required: Data Entry, Actions, Data Verification, etc.
- Call User-Defined Functions to perform specific actions
Business Function Utility Scripts
- Perform required application-specific tasks, which may be a combination of Testing tasks
- Are associated with application-specific Key-Words and are parameterized within the Spreadsheet
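The controller-plus-keywords structure described above can be sketched as follows (Python standing in for the tool language; keyword names, fields, and data are hypothetical). Each keyword in column 1 maps to a utility function, with columns 2 and 3 supplying the field name and the data value.

```python
# Keyword-driven controller sketch for the Totally Data-Driven method.

def do_enter(field, value, app):
    # Utility script for the "enter" keyword: data entry.
    app[field] = value
    return "PASS"

def do_verify(field, expected, app):
    # Utility script for the "verify" keyword: data verification.
    return "PASS" if app.get(field) == expected else "FAIL"

KEYWORDS = {"enter": do_enter, "verify": do_verify}

# Rows as they might come from the spreadsheet:
# (keyword, field/parameter, data, comment)
ROWS = [
    ("enter", "customer_name", "ACME Corp", "fill in the name field"),
    ("verify", "customer_name", "ACME Corp", "check the value stuck"),
]

def controller(rows):
    app = {}  # stands in for the application under test
    results = []
    for keyword, field, data, _comment in rows:
        results.append(KEYWORDS[keyword](field, data, app))
    return results
```

Because testers only edit spreadsheet rows, new test cases require no new scripts: only new keyword utilities do, and those are few and reusable.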
Total Data and GUI Driven
The main concept behind this architecture is to identify the major business functions, split them up into separate functions, and load them into compiled modules. A main script loads the necessary scripts into memory and drives the application based on the data in the Excel sheet.
The main script performs initialization (if required) and then calls the common scenarios (for example, logging into the application). It loads or unloads the GUI map and the data-driven Excel sheet depending on the values in the sheet. For example, if the application's UI changes depending on the user's role, the corresponding GUI map is loaded.
All the common functions are built into this script. When one of these functions is invoked, it returns a value based on the call; depending on whether that return value indicates PASS or FAIL, the framework proceeds to the next test case.
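The return-value-driven flow, including the dependency handling mentioned earlier (if Test 1 fails, skip to Test 4), can be sketched as follows. Python stands in for the tool language and the suite structure is hypothetical.

```python
# PASS/FAIL-driven suite execution with dependency skipping.

def run_suite(cases):
    """cases: list of (name, func, depends_on) tuples.

    func returns True/False; a case whose prerequisite did not PASS
    is recorded as SKIPPED rather than executed.
    """
    results = {}
    for name, func, depends_on in cases:
        if depends_on and results.get(depends_on) != "PASS":
            results[name] = "SKIPPED"
            continue
        results[name] = "PASS" if func() else "FAIL"
    return results

cases = [
    ("test1", lambda: False, None),    # fails...
    ("test2", lambda: True, "test1"),  # ...so this is skipped
    ("test3", lambda: True, "test2"),  # ...and this too
    ("test4", lambda: True, None),     # independent: still runs
]
outcome = run_suite(cases)
```

Recording SKIPPED separately from FAIL keeps the results honest: a skipped case tells the tester the prerequisite broke, not the case itself.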
In the example we gave using the "Functional Decomposition" method, it was shown that we could use previously created "Test Case" and "Business Function" scripts to create scripts for additional Test Cases and Business Functions. If we have 100 Business Functions to test, this means that we must create a minimum of 200 scripts (100 Test Case scripts, and 100 Business Function scripts).
Using the "Test Plan Driven" method, I am currently using 20 Utility Scripts that I have been able to reuse on every single automated testing engagement I have been sent on. Let us examine what it takes, on average, for me to walk into a company and implement automated testing:
Normally, I have to create a minimum of 3 application-specific utility scripts (a "Controller" script, a "Start_Up" script, and an "End_Test" script).
I may also have to create application-specific versions of several of the 20 "general" Utility scripts.
It is also usually necessary to develop between 10 and 20 application-specific "functions" (depending on how complex or strange the application under test is). These functions include such things as activating and shutting down the application, logging in, logging out, recovering from unexpected errors ("return to main window"), handling objects that the tool doesn’t recognize, etc.
Depending on the complexity of the application, and how well the test tool works with the application, this process normally takes me no more than 3 days – 5 days is worst-case. With this "proof-of-concept" completed, if we are then contracted to do the job, it usually takes 2 or 3 weeks to develop the majority of the application-specific functions, and application-specific versions of the "utility" scripts. Naturally, if the application is quite complex, this process will take longer. At this point, testers can be trained to create the spreadsheet data (usually takes about a week) and then they are in business. It also takes about a week to train the "test tool technician" to use this methodology, provided that this person is a relatively competent programmer and has already been sufficiently trained by the tool vendor in the use of the tool.
What this demonstrates is that an organization can implement cost-effective automated testing if they go about it the right way.
Preparation is the Key:
The situation described above is pretty much an "ideal scene". It assumes that adequate preparations have been made by the organization before beginning the test-automation process. This is rarely the case, however; if adequate preparations have not been made, the "ramp-up" time required increases dramatically. What, then, does an organization do to prepare itself for this effort?
An adequate Test Environment must exist that accurately replicates the Production Environment. This can be a small-scale replica, but it must consist of the same types of hardware, programs and data.
The Test Environment's database must be able to be restored to a known baseline, otherwise tests performed against this database will not be able to be repeated, as the data will have been altered.
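The restore-to-baseline requirement is usually a one-line step at the start of every automated run. A minimal sketch, assuming a file-based test database and hypothetical paths (a real setup might restore a SQL dump or a snapshot instead):

```python
# Restore the test database to a known baseline before each run, so
# that repeated runs always start from the same constant state.

import os
import shutil
import tempfile

def restore_baseline(baseline_path, working_path):
    """Overwrite the working database with the pristine baseline copy."""
    shutil.copyfile(baseline_path, working_path)

# Demonstration with throwaway files:
tmp = tempfile.mkdtemp()
baseline = os.path.join(tmp, "baseline.db")
working = os.path.join(tmp, "working.db")
with open(baseline, "w") as f:
    f.write("known constant state")
with open(working, "w") as f:
    f.write("state mutated by the last test run")

restore_baseline(baseline, working)
with open(working) as f:
    restored = f.read()
```

Without this step, any test that inserts or updates data changes the outcome of the next run, and the suite stops being repeatable.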
Detailed test cases that can be converted to an automated format must exist. If they do not, they will need to be developed, adding to the time required. The data to be entered and verified must be specified exactly.
The person or persons who are going to develop and maintain the automated scripts must be hired and trained. Test tool vendors normally provide training courses. While it makes sense to hire a consulting firm or a contractor to help you get started, whom are they going to turn things over to when they leave?
One area that organizations desiring to automate testing seem to consistently miss is the staffing issue. Automated test tools use "scripts" which automatically execute test cases. As I mentioned earlier in this paper, these "test scripts" are programs. They are written in whatever scripting language the tool uses. This might be C++ or Visual Basic, or some language unique to the test tool. Since these are programs, they must be managed in the same way that application code is managed.
To accomplish this, a "Test Tool Specialist" or "Automated Testing Engineer" or some such position must be created and staffed with at least one senior-level programmer. It does not really matter what languages the programmer is proficient in. What does matter is that this person must be capable of designing, developing, testing, debugging, and documenting code. More importantly, this person must want to do this job; most programmers want nothing to do with the Testing Department. This is not going to be easy, but it is nonetheless absolutely critical.
In addition to developing automated scripts and functions, the person must be responsible for:
- Developing standards and procedures for automated script/function development
- Developing change-management procedures for automated script/function implementation
- Developing the details and infrastructure for a data-driven testing method
- Testing, implementing, and managing the test scripts (Excel sheets) written by the testers
- Running the automated tests, and providing the testers with the results
It is worth noting that no special "status" should be granted to the automation tester(s). The non-technical testers are just as important to the process, and favoritism toward one or the other is counter-productive and should be avoided. Software testing is a profession, and as such, test engineers should be treated as professionals. It takes just as much creativity, brainpower, and expertise to develop effective, detailed test cases from business and design specifications as it does to write code. I have done both and can speak from experience.