Managing Automated Testing

Summary:

When a suite of automated tests takes days to run from start to finish, there is value in adding versatility. With the right structure in place, an automated test suite can be flexible enough to let a user dynamically change which tests are executed each time the suite is run. Rebecca goes through the design steps of creating an interface-driven tool that provides this versatility in test execution.

Automated software testing is not for the faint of heart. It requires skill, patience, and, above all, organization. A suite of automated tests can be organized in a number of different ways. Most tool providers offer methods for organizing and executing scripts, often by scheduling scripts to run in order, a kind of batch-file execution. These methods are well and good for many applications, but they offer little versatility when it comes to running only part of a script or test sequence. When automated test versatility matters, it may be necessary to design a front end to the test suite: an interface-driven test selection tool.

Repetitive Tests
Repetitive test cases are common in software testing. It is a drag having to go through hundreds of mundane, redundant test cases, but they offer great test coverage. They also tend to be cut back under tight schedules. When dealing with a repetitive test, if ten of one hundred trials pass, risk analysis tells us that the remaining tests will probably pass as well. These tests are great candidates for automation. Automation speeds up the mundane testing while freeing a tester for more thought-provoking, defect-sleuthing work such as exploratory or free-form testing.

All kinds of applications require some type of repetitive test. Usually, any test that has to be run over and over with different parameters is easily automated with just a few functions. Tests that require different communication settings, display settings, environment settings, and so on are examples. Here are some examples of test specs for these test cases (a sketch of the same cases expressed as data follows the list):

Run DisplayTest_1 at 800x600

Run DisplayTest_1 at 1024x768

Run DisplayTest_1 at 1280x1024

Run DisplayTest_1 at 1600x1200

Run DisplayTest_2 at 800x600

Run DisplayTest_2 at 1024x768

And so on.

Run CommunicationTest_1 at 56K

Run CommunicationTest_1 at 9600

Run CommunicationTest_1 at 4800

Run CommunicationTest_2 at 56K

Run CommunicationTest_2 at 9600

And so on.
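
The same specs become easier to manage when they are written as data rather than prose. The following is a minimal sketch in Python rather than any specific tool's syntax; the test names and parameter values come straight from the specs above, and the harness simply expands them into concrete (test, parameter) pairs.

# Hypothetical sketch: the display-test and communication-test specs as data.
DISPLAY_RESOLUTIONS = ["800x600", "1024x768", "1280x1024", "1600x1200"]
BAUD_RATES = ["56K", "9600", "4800"]
DISPLAY_TESTS = ["DisplayTest_1", "DisplayTest_2"]
COM_TESTS = ["CommunicationTest_1", "CommunicationTest_2"]

# Expand the specs into concrete (test, parameter) pairs.
display_cases = [(test, res) for test in DISPLAY_TESTS for res in DISPLAY_RESOLUTIONS]
com_cases = [(test, baud) for test in COM_TESTS for baud in BAUD_RATES]

for test, res in display_cases:
    print(f"Run {test} at {res}")
for test, baud in com_cases:
    print(f"Run {test} at {baud}")

Adding a new resolution or baud rate then means adding one entry to a list rather than writing a new block of test steps.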


Breaking Down the Test Script
There are different methods for automating such tests. One is the record/playback method, which produces a very long script that can take days to run: it systematically executes code to set the environment, then code to execute the test, alternating between the two until all test cases are finished. This method requires a lot of maintenance and lacks error recovery and versatility. Breaking the script down into modular functions makes the code easier to maintain. The test cases above would require a unique function for each test: Test_1, Test_2, and so on. There would also need to be a function to set the display resolution for the first set of tests and a function to set the baud rate for the second set. Each of these functions should return a value indicating whether it succeeded and should leave the system in a known state. Rather than a long succession of executable lines of code, repeating lines are consolidated into single functions, and the automated test becomes a set of function calls. The body of the test would look something like this:

Call Function SetDisplay (800x600);

Call Function RunDisplayTest_1();

Call Function SetDisplay (1024x768);

Call Function RunDisplayTest_1();

And so on….

Call Function SetBaudRate (56K);

Call Function RunComTest_1();

Call Function SetBaudRate (9600);

Call Function RunComTest_1();

And so on….

This makes the script much shorter and more modular. If something must change in the code for running one of the tests, it only has to be changed in one place rather than in multiple places throughout the script.
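
As a concrete illustration of this structure, here is a minimal sketch in Python. The function names (set_display, run_display_test_1, and so on) are hypothetical stand-ins for whatever tool-specific calls the suite actually uses; each returns a pass/fail value and is expected to leave the system in a known state, as described above.

# Hypothetical modular test functions; replace the bodies with real tool calls.
def set_display(resolution):
    """Set the display resolution; return True if the change succeeded."""
    print(f"Setting display to {resolution}")
    return True

def run_display_test_1():
    """Execute DisplayTest_1 against the current display settings."""
    print("Running DisplayTest_1")
    return True

def set_baud_rate(baud):
    """Set the communication baud rate; return True on success."""
    print(f"Setting baud rate to {baud}")
    return True

def run_com_test_1():
    """Execute CommunicationTest_1 at the current baud rate."""
    print("Running CommunicationTest_1")
    return True

# The repeated set-then-run pairs collapse into loops over the parameters.
results = {}
for resolution in ["800x600", "1024x768", "1280x1024", "1600x1200"]:
    if set_display(resolution):
        results[f"DisplayTest_1 @ {resolution}"] = run_display_test_1()
for baud in ["56K", "9600", "4800"]:
    if set_baud_rate(baud):
        results[f"CommunicationTest_1 @ {baud}"] = run_com_test_1()

for name, passed in results.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")

A structure like this also gives a natural hook for the interface-driven selection tool mentioned earlier: the front end only needs to decide which (function, parameter) pairs go into the loop.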


About the author


Rebecca Nuesken is a Software Systems Engineer at National Semiconductor specializing in the design, implementation, and execution of automated test scripts using a variety of tools and operating systems. Her experience spans PC application development as well as PLC and smart device development. She is a graduate of Kettering University with a published thesis on using the use case approach to design test reporting tools, entitled "Reporting Tool Design Spec: A Use Case Approach." Her goal is to streamline and modularize testing for easy reuse and efficient reporting.
