A natural divide exists between those who develop commercial test tools and those who use them. If you're tasked with finding the right test automation technology and tool for your organization, it helps to understand how high this fence is (or should be). This article explains the nature of this fence, how it varies in different types of tools, and what you can look at to get a view of the other side.
Your company has decided to invest in test automation and you have been asked to decide what tools to buy and to determine which technologies match your organization's needs.
But how do you get beyond the marketing brochures? How do you tell if the tool is really what you need to purchase? For that matter, how do you tell whether the tool vendor really understands the testing being automated?
Looking at the fence separating the prospective user from the tool's developers is a good place to start.
Why Is There a Fence?
The first thing to understand is why a fence exists between the prospective user and the tool's developers. To begin with, there is a marked difference between test tool development and usage. At a fundamental level, this partition is necessitated by the inherent difference between defining a problem and generating an algorithm to solve it. The differences are more profound than that, of course, but this basic disparity illustrates the inevitability of a fence.
The fence also defines the relationship between the developer of the tool and the user of the tool, providing a balancing point between the needs of each.
For instance, tools developed in-house have very low fences because the customers and developers are essentially one team. Typically, such tools are highly specialized and have limited scopes of applicability. The specific definition of the problem allows developers to produce something explicitly targeted to the need. Because users of in-house tools commonly have identical needs, such tools may not require a high degree of flexibility. In some cases, when the user and developer is the same person, there is no fence at all.
On the other hand, test automation tools that sell in high volumes ("shrinkwrap" tools) have tall fences and a clear dividing line between the tool's developers and users. There are many customers being served, with many diverse problems being addressed, so the tool's developers must be more circumspect in their approach to an algorithmic solution.
The foundation of a shrinkwrap tool's development is reliance on solid test theory and use of flexible algorithms to solve a range of issues within the problem domain. Such tools are generally designed to perform a common task, following accepted testing practices without incorporating many specialized techniques.
With such high-volume tools, developer-customer communication is often unidirectional, flowing from the developer to the customer. The primary communication medium is the formal documentation that is released with the tool. Customer support, providing clarifications and a modicum of customer-to-developer contact, offers a secondary (often inadequate) communications channel.
Low-volume commercial tools, more often customized according to the needs of the user, should have relatively low fences. Communication surrounding such tools is more bi-directional, with the tool's higher price putting the customer in the driver's seat. Because the user is buying a custom solution to their specific problem, developers must be more responsive to the customer's expressed needs.
The complexity of the problem being solved also influences the height of the fence. Test execution automation, for instance, is a relatively low-complexity problem. The issues addressed by a test execution automation tool are:
- Generating test results (called "actual results")
- Comparing the actual and expected results
- Producing a test report indicating the pass/fail status of the test case(s)
The user must generate and script the tests that the tool will execute.
As an example, let's look at the hypothetical AutoExec tool, which provides a graphical interface for test execution and report preparation. AutoExec allows the user to create a test driver program (using C) in which the test cases, the expected results,