With service virtualization (SV), technology stands in for the manual efforts of testers or the simulators companies used to write. SV solutions aren’t simple bits of code that stand in for manual testing processes. They’re surprisingly powerful software tools that are self-learning. Jon Spencer explains how they work.
In 2005 I was working as a performance engineer for Intuit Inc. We were in charge of testing a merchant gateway for credit card authorizations. For each test, the transaction would be routed through a real-world third-party payment processor and on to a major financial institution where we would provide authorization codes, and the transaction would be approved or declined.
We were trying to meet Payment Card Industry (PCI) compliance requirements without affecting the performance of the gateway, and our goal was to process 1.5 million transactions in a twenty-four-hour period while staying PCI compliant. We used more than a dozen servers, running as Windows services, to perform that testing.
At the time, we could do functional testing, but we couldn't do performance testing because the third-party payment processor couldn't accommodate the load, especially with the required authorization codes. Our developers wrote simulators (now called stubs), but the simulators couldn't reproduce the latency or the network hops to the third-party payment processor. These challenges made testing inefficient and sometimes incomplete. Today, we can eliminate every one of those problems with service virtualization.
Service Virtualization: The Testing Workhorse
With service virtualization (SV), technology stands in for the manual efforts of testers and the simulators companies used to write. SV solutions aren't simple bits of code; they're surprisingly powerful, self-learning software tools. Here's how they work.
The SV tool connects to a service whenever it is available and records the end-to-end process: the service validating the application's query and returning the appropriate response. It then iteratively tweaks its own instruction set until it can recognize the parameters of a correct request (from the software) and a correct response (from the service).
Once the tool has “learned” how to interact with the service, it can replay the call and response a million times for performance and functional testing, without intervention. The SV tool doesn’t need to reconnect with the service because it, in essence, becomes the service.
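The record-and-replay idea can be pictured with a minimal sketch. Everything here is illustrative, not any vendor's API: real SV tools learn their matching rules automatically, while this sketch hard-codes a simple request "signature."

```python
# Minimal record-and-replay sketch of service virtualization.
# All names (record, replay, the request/response dicts) are hypothetical.

recordings = {}  # maps a request signature to the recorded response

def signature(request):
    """Reduce a request to the fields that identify it -- here, just
    the endpoint and its sorted parameters (an assumption)."""
    return (request["endpoint"], tuple(sorted(request["params"].items())))

def record(request, real_response):
    """Learning phase: capture a live request/response pair."""
    recordings[signature(request)] = real_response

def replay(request):
    """Playback phase: stand in for the real service."""
    return recordings.get(signature(request), {"status": "unknown request"})

# Learning phase, while the real service is reachable:
record({"endpoint": "/authorize", "params": {"card": "test-card", "amount": "19.99"}},
       {"status": "approved", "auth_code": "A1B2C3"})

# Playback phase: the tool now answers without the real service.
print(replay({"endpoint": "/authorize",
              "params": {"amount": "19.99", "card": "test-card"}}))
```

Once the pair is recorded, playback can run millions of times with no connection to (or fees from) the real service, which is the essence of "becoming the service."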
SV tools can handle complex scenarios, too, in which they do more than learn to send a message or a transaction and wait for a confirmation or an error. Users can establish very specific, involved scenarios, such as validating that an application can access a certain database for ordering, or that it successfully completes a series of steps confirming a customer’s shipping preference. Once the SV tool learns those steps, it can replicate that service confirmation as well.
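A multi-step scenario like the shipping-preference confirmation implies state: later responses are only valid after earlier steps have occurred. A hypothetical sketch of a stateful virtual service (the step names are invented for illustration):

```python
# Hypothetical stateful virtual service: the "confirm preference"
# response is only returned after the earlier steps have been seen,
# mimicking a learned multi-step scenario.

class ShippingScenario:
    STEPS = ["login", "select_address", "confirm_preference"]

    def __init__(self):
        self.position = 0  # index of the step the scenario expects next

    def handle(self, step):
        expected = self.STEPS[self.position]
        if step != expected:
            # Out-of-order request: the virtual service rejects it.
            return {"status": "error", "detail": f"expected {expected!r}"}
        self.position += 1
        if self.position == len(self.STEPS):
            return {"status": "confirmed"}
        return {"status": "ok", "next": self.STEPS[self.position]}

scenario = ShippingScenario()
print(scenario.handle("login"))               # step accepted
print(scenario.handle("select_address"))      # step accepted
print(scenario.handle("confirm_preference"))  # scenario complete
```

Jumping straight to `confirm_preference` on a fresh scenario would return an error, just as the real service would refuse a confirmation without the preceding steps.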
SV tools automatically check in with the service to record changing conditions, and when the requirements of an application or a service change, some service virtualization platforms can adapt to the new conditions automatically.
It’s truly amazing to me how much companies can accomplish with SV tools in today’s fast-paced and increasingly rigorous testing environments. It would certainly have made my job at Intuit easier. At Orasi, where I now work, we have helped customers needing to test performance and functionality for hundreds (and sometimes thousands) of services.
In these scenarios, the company would previously have had either in-house developers or third-party consultants write stubs for each of the services. That’s practical when they are dealing with a dozen services, but not when there are hundreds. Many companies have described to me the pain of dealing with stubs, which take on their own parallel development lifecycle as the application evolves.
So instead, we help companies transition to service virtualization, which saves a considerable amount of time and resources and ensures a predictable process. It also eliminates time constraints that tend to disrupt schedules, such as when testers on the US East Coast are testing services residing on servers located halfway around the world that are only up after testers have left for the day.
In many cases, our customers are also charged fees by the companies whose services they access, and hitting a service millions of times during testing is prohibitively expensive. Once the client transitions to SV, the tool can connect to the real service, record the interaction, and archive it for future testing. The only additional fees the customer incurs come when the SV tool performs a routine process update.
My personal experience with SV is limited to enterprise apps, so I asked Joe Schulz, Orasi’s associate vice president of mobile testing, for his personal insights. “Services are becoming ubiquitous across most applications. Anything that connects to the Internet usually has to handshake with a service, and mobile apps in particular are service-driven,” Joe said.
Latency can be an issue with any web service, he noted, but latency across the transaction lifecycle is an especially important consideration for mobile applications, which might hop across a number of cellular networks to reach a service. Users of SV tools can add time to the response to simulate latency, allowing the tool to reproduce the effect reliably and confirm that the application or service won’t time out or disconnect when latency crops up.
Because users control the parameters the SV tool adheres to, Joe said, they also can establish “what if” response times on various service calls and observe how those times affect overall system response time. “For mobile applications, performance is extremely important, and customers will not wait for an application or a business process that is slow due to performance,” he said. “Especially on consumer-based applications, we see fairly high abandonment rates when the performance is slow. The ability to create ‘what if’ response times and to run performance tests early in development really helps our customers minimize the issues that promote user abandonment.”
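Latency injection and "what if" response times amount to the same mechanism: wrapping a virtual service so every response is delayed by a configurable amount. A sketch, with an invented service and delay value:

```python
# Illustrative latency injection for "what if" testing. The service
# function and the 0.25 s figure are assumptions for the example.
import time

def with_latency(service, delay_seconds):
    """Return a version of `service` that waits before responding,
    simulating network hops en route to the real service."""
    def delayed(request):
        time.sleep(delay_seconds)
        return service(request)
    return delayed

def virtual_service(request):
    # Stands in for a recorded authorization response.
    return {"status": "approved"}

# "What if the processor takes 250 ms to answer?"
slow_service = with_latency(virtual_service, 0.25)

start = time.perf_counter()
response = slow_service({"endpoint": "/authorize"})
elapsed = time.perf_counter() - start
print(response, f"after {elapsed:.2f}s")
```

Running the same performance test with several delay values shows whether the application times out, disconnects, or degrades gracefully as latency grows, before the real service is ever involved.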
Not a Panacea, but a Powerful Partner
Is service virtualization as good as full-blown testing, and can it replace real services? No on both counts, and I would never recommend treating it that way. I always remind customers that SV is a tool that facilitates production and testing, not a replacement for real services. When we help employees at a company adopt and use SV, we remind them to stay alert to its shortcomings by performing spot checks against the real service: when they introduce code into production, at regular intervals, and whenever they begin receiving data validation errors.
In a perfect world, services would be free and available constantly. A lot of the time, they are not. Consequently, SV’s role in reducing the wait for services to be ready, its value in minimizing the cost of accessing services millions of times for quality assurance testing, and its ability to support and test “what if” scenarios make it a powerful tool in any developer’s arsenal.