In the ever more intense battle for customer attention and satisfaction, organizations continuously look for ways to make their software delivery more flexible, so they can react to, or even get ahead of, market demands. One popular approach to achieving this is adopting a continuous delivery (CD) model of software development and delivery.
For those of you not familiar with CD, the one-sentence summary of the philosophy behind it might sound something like this:
CD is a software engineering approach where development teams focus on creating software in short cycles, thereby ensuring that this software can be safely released into a production environment at any given point in time.
A cornerstone of the CD philosophy lies in the word safely. In order to ensure that any given version of the software under development can be released safely into production, the development team has to be able to trust the testing procedures and quality gates that are part of the CD pipeline. Often, a big part of these quality measures consists of automated checks, ranging from unit tests all the way to the end-to-end level.
In order to enable true CD, it is critical that these automated checks can be run on demand (known as continuous testing), unattended (i.e., no manual intervention should be required to run them and interpret their results), and as often as necessary. This requires careful engineering of the automated testing suite, ensuring that tests run fast, that they do not generate false negatives (which would unnecessarily stall the CD pipeline) or false positives (which would create a false sense of security), and that they propagate test results to the pipeline engine in a clear and unambiguous manner.
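To make the last requirement concrete, here is a minimal sketch of how a test stage can report its outcome to a pipeline engine unambiguously: pipeline engines conventionally gate on the process exit code of each stage. The function and check names below are hypothetical, used only for illustration.

```python
# Illustrative sketch: a CD pipeline engine typically decides whether to
# proceed based on the exit code of the test stage.
import sys

def run_checks(results):
    """Summarize automated check results and return a process exit code.

    0 = all checks passed (the pipeline proceeds); 1 = at least one
    failure (the pipeline stalls). The single numeric code gives the
    pipeline engine an unambiguous signal, with details on stdout.
    """
    failures = [name for name, passed in results.items() if not passed]
    for name in failures:
        print(f"FAILED: {name}")
    return 1 if failures else 0

if __name__ == "__main__":
    # Hypothetical results gathered from the automated test suite
    outcome = run_checks({"unit": True, "integration": True, "e2e": True})
    sys.exit(outcome)
```

Because the signal is just an exit code, any build server can consume it without parsing logs or requiring human interpretation.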
An often overlooked aspect of creating the automated checks that enable teams to adopt CD, however, is the strain that the ability to test continuously places on test environments. Especially with modern-day distributed and (micro-)services-based applications, you simply cannot test an application or a component in isolation and release it safely into a production environment. You'll need a test environment, complete with all required dependencies, that is just as on-demand as the test suite that exercises it. This means all the dependencies required to run integration and end-to-end tests (unit tests often use mock objects to abstract away dependencies) should be available and in the right state all the time.
Anybody who has ever been involved in testing distributed applications knows that this is not an easy feat. Dependencies that are critical to the completion of integration and end-to-end tests are often hard to access or simply unavailable, for any of the following reasons:
- The dependency itself is under development, making it unavailable or unfit to use for testing
- It is hard—or even impossible—to repeatedly set up the test data required for the tests to be executed
- The dependency is shared between teams, meaning that it can be used only at certain points in time (mainframes in test environments often suffer from this phenomenon)
- The dependency is a third-party component that requires access fees to be paid before one is allowed to use it
One approach that has proven successful in dealing with these test environment restrictions is service virtualization (SV). With SV, the behavior of critical yet hard-to-access dependencies is simulated in virtual assets. These virtual assets allow development teams to regain control over their test environment and, as a result:
- Test earlier: There is no need to wait for dependencies to become available later in the development process (if they become available at all)
- Test more: With virtual assets under the full control of the development team, it is easy to configure them, for example with specific test data or specific performance characteristics, to simulate edge-case behavior that would be difficult or even impossible to reproduce with the actual dependency
- Test more often: Unrestricted access to the virtual assets and the behavior they exhibit allows development teams to provision and reset them on demand, essentially creating a fresh test environment for every test run
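To illustrate the idea behind a virtual asset, here is a minimal in-process sketch: an HTTP stub that simulates a hard-to-access dependency with canned test data the team fully controls. The endpoint path and payload are hypothetical and not taken from any particular SV product, which would typically provide richer recording, matching, and provisioning features.

```python
# Minimal sketch of a "virtual asset": an HTTP stub returning canned
# responses in place of a hard-to-access dependency.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned behavior the team fully controls: swap this dict to simulate
# edge cases (empty results, unusual payloads) on demand.
TEST_DATA = {"/customers/42": {"id": 42, "name": "Alice", "status": "gold"}}

class VirtualAsset(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = TEST_DATA.get(self.path)
        if payload is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_virtual_asset(port=0):
    """Start the stub on an ephemeral port; return (server, base_url)."""
    server = HTTPServer(("127.0.0.1", port), VirtualAsset)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, f"http://127.0.0.1:{server.server_port}"

if __name__ == "__main__":
    server, base = start_virtual_asset()
    with urlopen(base + "/customers/42") as resp:
        print(json.loads(resp.read()))  # the canned customer record
    server.shutdown()
```

Because the simulated behavior is just data, provisioning a specific test scenario or resetting to a clean state is a matter of reloading `TEST_DATA` and restarting the stub.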
SV has seen an increase in adoption over the last couple of years as an approach that allows organizations to improve their testing efforts through smart, effective simulation of dependency behavior. The next step in making development, testing, and deployment ever more flexible and on demand, driven by the ambition of organizations to implement CD, is to treat these simulated test environments as artifacts in the CD process, just as is already regularly done with automated test suites. This means that development of simulated dependencies is treated as a development task in its own right, and virtual assets are propagated through the CD pipeline together with the production code and the accompanying automated tests.
To achieve this, several SV solutions now enable you to ship simulated test environments as containers, just like you would do with the application under test itself, possibly accompanied by related automated test suites. A potential cycle in the CD process would then look as follows:
1. A developer commits a change to the central code repository (e.g., a shared Git repository)
2. The build server triggers a new build and runs unit tests
3. After the unit tests pass, the build is deployed into a test environment using containers
4. In parallel with step 3, the virtualized dependencies are also deployed and provisioned, also using containers, with the desired endpoints, test data sets, and performance characteristics
5. Integration and end-to-end tests are run, exercising the application under test and the simulated dependencies as desired
6. After these tests pass, the application is safely deployed into production
7. The simulated test environment is torn down and made ready for the next cycle
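As an illustration, the parallel deployment of the application under test and its virtualized dependency could be described in a compose file along these lines. The image names, the `BUILD_TAG` variable, and the `PAYMENT_SERVICE_URL` setting are hypothetical, not taken from any specific SV product:

```yaml
# Illustrative sketch: application under test and a containerized
# virtual asset deployed side by side for a test run.
version: "3"
services:
  app-under-test:
    image: myorg/app:${BUILD_TAG}          # hypothetical application image
    depends_on:
      - virtual-payment
    environment:
      # Point the application at the simulated dependency, not the real one
      PAYMENT_SERVICE_URL: http://virtual-payment:8080
  virtual-payment:
    image: myorg/virtual-payment-asset:${BUILD_TAG}  # hypothetical virtual asset image
```

A `docker-compose up -d` before the integration tests and a `docker-compose down` afterward then map directly onto provisioning and tearing down the simulated test environment.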
Note that the above sequence of actions concerns a change in the application under test. A similar cycle could be completed for an updated version of any of the virtual assets used. Because these are essential parts of the development process (the go/no-go decision for deploying your software into production depends on them), they should be tested, just as you would test your production code.
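Testing a virtual asset can be as simple as asserting that its configured behavior matches the specification it is meant to simulate. The sketch below uses a hypothetical in-process stand-in for a simulated payment service; real SV tools would expose the same idea through their own APIs.

```python
# Hedged sketch: virtual assets gate the go/no-go decision, so their
# simulated behavior deserves checks of its own. The asset below is a
# hypothetical stand-in for a simulated payment service.
def virtual_payment_asset(request):
    """Return the canned response the virtual asset is configured with."""
    if request.get("amount", 0) <= 0:
        return {"status": "rejected", "reason": "invalid amount"}
    return {"status": "approved", "transaction_id": "T-0001"}

def test_approves_valid_payment():
    assert virtual_payment_asset({"amount": 10})["status"] == "approved"

def test_rejects_invalid_amount():
    assert virtual_payment_asset({"amount": 0})["status"] == "rejected"

if __name__ == "__main__":
    test_approves_valid_payment()
    test_rejects_invalid_amount()
    print("virtual asset behaves as specified")
```

These checks would run in the CD pipeline whenever a virtual asset is updated, exactly as unit tests run for the production code.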
Here are some of the biggest benefits of using containerized SV as described above:
- Creating the exact same test environment as in previous test runs is a breeze (i.e., a matter of spinning up a new container)
- Virtual assets can easily be treated as an artifact in the software development process, meaning they can be brought under version control, distributed, and reused, just as can be done with production code and automated tests
- Managing test environments and resetting them for the next test run is a matter of simply bringing down the previously spawned container
A range of SV solution providers are already adopting containerization techniques to make their solutions even more flexible. For example, the open source SV platform Hoverfly (created by SpectoLabs) is available as a Docker container on Docker Hub. The Virtualize SV engine from Parasoft is available as a Docker container, too, and it is also possible to create a virtual machine on Azure that comes equipped with the Virtualize engine and the other tools needed to start deploying virtual test environments directly. Other solution providers have followed suit or are currently in the process of doing so.
Adopting SV can provide organizations with a means of achieving more effective software development and testing by removing traditional test environment bottlenecks. Integrating SV within the CD pipeline by means of containerization helps development teams to further approach the level of flexibility required by markets that have ever increasing demands—and by competition that is growing ever more fierce.