Understanding Both Sides of the Test Tool Fence

Summary:

A natural divide exists between those who develop commercial test tools and those who use them. If you're tasked with finding the right test automation technology and tool for your organization, it helps to understand how high this fence is (or should be). This article explains the nature of this fence, how it varies in different types of tools, and what you can look at to get a view of the other side.

Your company has decided to invest in test automation, and you have been asked to determine which technologies match your organization's needs and which tools to buy.

But how do you get beyond the marketing brochures? How do you tell if the tool is really what you need to purchase? For that matter, how do you tell whether the tool vendor really understands the testing being automated?

Looking at the fence separating the prospective user from the tool's developers is a good place to start.

Why Is There a Fence?
The first thing to understand is why a fence exists between the prospective user and the tool's developers. To begin with, there is a marked difference between test tool development and usage. At a fundamental level, this partition is necessitated by the inherent difference between defining a problem and generating an algorithm to solve it. The differences are more profound than that, of course, but this basic disparity illustrates the inevitability of a fence.

The fence also defines the relationship between the developer of the tool and the user of the tool, providing a balancing point between the needs of each.

For instance, tools developed in-house have very low fences because the customers and developers are essentially one team. Typically, such tools are highly specialized and have limited scopes of applicability. The specific definition of the problem allows developers to produce something explicitly targeted to the need. Because users of in-house tools commonly have identical needs, such tools may not require a high degree of flexibility. In some cases, when the user and developer are the same person, there is no fence at all.

On the other hand, test automation tools that sell in high volumes ("shrinkwrap" tools) have tall fences and a clear dividing line between the tool's developers and users. There are many customers being served, with many diverse problems being addressed, so the tool's developers must be more circumspect in their approach to an algorithmic solution.

The foundation of a shrinkwrap tool's development is reliance on solid test theory and use of flexible algorithms to solve a range of issues within the problem domain. Such tools are generally designed to perform a common task, following accepted testing practices without incorporating many specialized techniques.

With such high-volume tools, developer-customer communication is often unidirectional, flowing from the developer to the customer. The primary communication medium is the formal documentation released with the tool. Customer support, providing clarifications and a modicum of customer-to-developer contact, offers a secondary (and often inadequate) channel.

Low-volume commercial tools, more often customized according to the needs of the user, should have relatively low fences. Communication surrounding such tools is more bi-directional, with the tool's higher price putting the customer in the driver's seat. Because the user is buying a custom solution to their specific problem, developers must be more responsive to the customer's expressed needs.

Complexity's Role
The complexity of the problem being solved also influences the height of the fence. For instance, consider that test execution automation is of relatively low complexity. The issues being addressed with a test execution automation tool are

  1. Generating test results (called "actual results")
  2. Comparing the actual and expected results
  3. Producing a test report indicating the pass/fail status of the test case(s)

The user must generate and script the tests that the tool will execute.

As an example, let's look at the hypothetical AutoExec tool, which provides a graphical interface for test execution and report preparation. AutoExec allows the user to create a test driver program (using C) in which the test cases, the expected results, and the pass/fail comparison criteria are defined.


About the author

Steve Morton

Steve Morton is an automated test tool developer by trade, operating in an arena of low volume and high expectations for the past eight-plus years. He has primarily worked on an automated, low-level structural analysis and test definition tool tailored for safety- and mission-critical software development markets such as the aerospace and medical devices industries. His ongoing efforts to bridge his developer's viewpoint with that of a tool's customers have led him to examine closely the nature of the relationship between developers and users of automated software testing tools. In addition, Steve has used both commercial and homegrown testing tools throughout his career, enabling him to stand on both sides of the automated software test tool fence.
