One of the major challenges I have experienced in software development is ensuring that all the components needed for integration and end-to-end testing are available in our test environment. Some of these components, such as services, datasets, and APIs, may not yet exist, may be undergoing maintenance, or may be in place but lack the test data required to execute the desired test cases.
As a result, test cycles take too long or can’t be completed, and test coverage suffers. In turn, this leads to lower product quality and longer time to market. In voke inc.’s 2015 Market Snapshot Report on Service Virtualization, the more than five hundred survey respondents reported that, before using service virtualization, developers and testers waited an average of thirty-two days for everything needed to move forward with their work. This shows that the problem at hand affects the whole software development cycle, not just the test team.
In this article, I’ll use a business case for a project I have been working on recently to describe how implementing service virtualization can remove environment setup as a blocking condition—and how this enables project teams to release better software, faster.
Service virtualization is the simulation of the behavior of software components that are unavailable or otherwise restricted during the preproduction stage of the software development lifecycle. These component simulators, also known as virtual assets, reflect the real software components’ behavior as closely as the tests require, both functionally (think representative test data sets) and nonfunctionally, for example by simulating the response times of the original software component.
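To make the idea concrete, here is a minimal sketch of a virtual asset: a stand-in HTTP service that returns a canned, representative response (the functional side) and sleeps briefly to mimic the real component’s response time (the nonfunctional side). The endpoint path, payload, and 150-millisecond latency are illustrative assumptions of mine, not details from any particular tool; commercial service virtualization products generate this kind of simulator for you, often by recording real traffic.

```python
# A minimal hand-rolled "virtual asset": an HTTP stub that simulates a
# backend service both functionally (canned test data) and
# nonfunctionally (artificial latency). All names and values here are
# illustrative assumptions.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Functional behavior: a representative test record the real service would return.
CANNED_CUSTOMER = {"id": 42, "name": "Test Customer", "status": "active"}

# Nonfunctional behavior: approximate the real component's response time.
SIMULATED_LATENCY_S = 0.15


class VirtualAssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/customers/42":
            time.sleep(SIMULATED_LATENCY_S)  # mimic the original service's latency
            body = json.dumps(CANNED_CUSTOMER).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the stub quiet during test runs


def make_virtual_asset(port: int = 0) -> HTTPServer:
    """Bind the virtual asset; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), VirtualAssetHandler)


# Usage (blocking): make_virtual_asset(8080).serve_forever()
```

Tests that depend on the real customer service can now point at this stub instead, so a missing or misconfigured backend no longer blocks the test cycle.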
As I write this, almost all major vendors in the application lifecycle management domain offer a service virtualization solution as part of their product portfolios.