A decade ago, continuous integration became a key practice to support the agile process. Now, the hot topic is continuous delivery, and Pini Reznik has noticed many similarities between the adoption of CD today and the implementation of CI. You can learn a lot from past experiences.
During the last decade, agile became one of the leading software development methodologies, and continuous integration (CI) became a key practice to support the agile process.
In the last few years, we have seen two challenges to extending agile practices:
- Shortening the development cycle from weeks to hours or even minutes (continuous delivery)
- Expanding the agile process outside development to operations (DevOps) and other departments
These challenges seem new, but in reality we saw very similar ones ten or fifteen years ago, at the beginning of the agile revolution.
Agile and Continuous Integration
While working in software configuration management in many different organizations, I have observed a consistent path for the introduction of an effective CI process.
Once the team decides to implement CI, the first step is always an automated build. It seems obvious today, but ten years ago, fully automated builds that produced the entire software system, ready for installation and testing, were a luxury available only in the best software companies. It was especially challenging to achieve Martin Fowler’s ten-minute mark for an effective CI build. Once the build was running on a central CI server, the team would start creating test automation to ensure that the software not only compiled, but also could be installed and shown to be functioning reasonably well.
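As a minimal illustration (not from the original article), a fully automated build can be thought of as a fail-fast chain of scripted stages. The stage names and commands below are placeholders for a real project’s build, install, and test commands:

```python
import subprocess

# Hypothetical fail-fast CI pipeline: each stage is a shell command,
# and the run stops at the first stage that fails.
STEPS = [
    ("build",   ["true"]),  # placeholder for e.g. `make all`
    ("install", ["true"]),  # placeholder for e.g. `make install`
    ("test",    ["true"]),  # placeholder for e.g. `make test`
]

def run_pipeline(steps):
    """Run stages in order; return the name of the first failing
    stage, or None if the whole pipeline passed."""
    for name, cmd in steps:
        if subprocess.run(cmd).returncode != 0:
            return name
    return None
```

In a real setup each placeholder would invoke the project’s own build scripts, with the whole chain kept well under Fowler’s ten-minute mark.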
Once the CI build with some test coverage was running smoothly, teams would start feeling the pain of a long fix cycle. This was due to the fact that automated builds, installers, and tests were developed and maintained by separate specialized teams.
Finding out which team was responsible for fixing a bug could take longer than actually fixing it. This situation led to the introduction of cross-functional teams able to resolve 99 percent of their daily issues without involving any external party. Such teams took full responsibility for the entire development cycle, including build, installation, and automatic testing of the software, which led to the production of more stable and reliable code and faster resolution times for critical, build-breaking issues.
During the transition, tools would change, too. For example, test automation was not invented by agile teams, but the way it was done changed tremendously when it was moved to cross-functional development teams. Recording-based tools gave way to fully programmable alternatives such as xUnit. Later still, even user interface testing became an integral part of the normal development process. With the introduction of test-driven development (TDD), some of the teams recognized that adding tests in later stages is challenging for many development projects, so they moved the testing activities upstream to make sure that nothing is done without proper test coverage.
With TDD, tests are defined before even the first line of code is written.
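To make the cycle concrete (the `add` function here is a hypothetical example, not from the article), a TDD step in xUnit style looks like this in Python’s unittest: the test is written first and fails, then the smallest implementation makes it pass.

```python
import unittest

# Step 1 (red): this test is written before any implementation exists,
# so the first run fails with a NameError.
class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2 (green): the simplest implementation that makes the test pass.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()
```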
Continuous Delivery and the Need for DevOps
In recent years we have seen a strong shift toward continuous delivery (CD), which is nothing more than CI taken all the way to the client. With CD, the goal is to deploy fully functional code to clients within hours or even minutes.
On the way to this goal are exactly the same challenges observed while introducing CI:
- Automation of all parts of the delivery pipeline
- Additional test coverage
- Consolidation of the teams
- A unified toolbox
The first step toward achieving CD must be the introduction of programmable infrastructure, which allows teams to define complex runtime environments and deploy software automatically without human intervention. This step is as essential for CD as the automated build was for CI. In the beginning it is natural to create such automation from within specialized ops teams that would use the tools available to them, such as Puppet or Chef.
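A hedged sketch of the idea, in plain Python rather than any real Puppet or Chef syntax: the runtime environment is declared as data, and a provisioning step derives the deployment actions from it, so no human intervention is needed. The environment description and the `provision` helper are illustrative assumptions, not a real tool’s API.

```python
# Illustrative only: a declarative environment description.
ENVIRONMENT = {
    "web":      {"image": "nginx",    "replicas": 2},
    "database": {"image": "postgres", "replicas": 1},
}

def provision(env):
    """Turn a declarative environment description into an ordered
    list of deployment actions."""
    actions = []
    for name, spec in sorted(env.items()):
        for i in range(spec["replicas"]):
            actions.append(f"start {name}[{i}] from image {spec['image']}")
    return actions
```

The point of the sketch is that the environment lives in version control next to the code, so rebuilding it is repeatable rather than a manual ops task.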
Later, when some system tests are available and the teams are able to build environments and deploy functional software, they will hit the same pain point of interteam coordination. Just as with CI ten years ago when the interteam problem caused many companies to merge development and testing, the same problem now needs to be addressed between development and operations. The answer is also the same: a cross-functional team, this time called DevOps.
But today we are only at the beginning of this change. Organizations have started creating DevOps teams by moving development and operations engineers into the same team, so the next step on the way to achieving a real CD workflow is clear.
Unified Tooling or a Common Language for Dev and Ops
Puppet, Chef, and similar tools currently used to implement programmable infrastructure are conceptually similar to the test automation tools based on action recording. They replace humans by imitating their actions.
Such tools will soon be replaced by a new type of tool that is conceptually similar to xUnit. Using these tools, developers will be able to define software deployment and runtime environments as part of regular development. This will allow unification of the tooling from the beginning of development through the entire lifecycle of the product.
The first big shift we’ve seen in this direction is software containers and their ecosystem of easily programmable hardware, network, and storage emulation. The recent explosive popularity of Docker clearly shows that containers are now addressing the most pressing obstacle to achieving CD.
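As a small illustration of containers as a common language for dev and ops (this image and command are hypothetical, not from the article), the same container definition a developer runs locally can be shipped unchanged to production:

```dockerfile
# Hypothetical container definition: the same artifact runs on a
# developer's laptop and on production infrastructure.
FROM python:3
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Because the container carries its own runtime environment, the dev and ops sides of a DevOps team share one deployable artifact instead of separate build and deployment tooling.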
In the near future we will see more and more similarities between the transition to CD and our earlier experiences in transitioning to CI. In our company we have already started practicing our own “production first” concept, which is very similar to TDD: we set up production environments for ourselves and our clients, including live URLs, before we write even a single line of code.
The Right Tool for the Right Job
While running a CD transition, it is also important to learn from the mistakes we made while introducing CI. One such mistake was to move too much responsibility to the DevOps teams.
Today, dev teams are in charge of writing the product’s code as well as building and testing that code. Ops teams are in charge of the physical infrastructure and clouds as well as deployment and configuration of the products on the production infrastructure.
DevOps teams only need to own the pieces of the infrastructure that are part of the product logic: provisioning functional blocks of the system such as web servers, databases, load balancers, etc. Maintenance of the hardware and the clouds can remain the responsibility of a separate subteam, provided it can offer reliable and consistent APIs for consuming its services. A public cloud like Amazon EC2, with its well-defined APIs and high quality of service, can in some cases effectively replace such subteams.
I believe that the most important factor for the successful implementation of CI or CD in an organization is the ability to resolve 99 percent of the daily issues without the help of any external party. There is no feasible way to achieve this when different parts of the delivery pipeline required for normal work are owned by different teams. Cross-functional DevOps teams and programmable build, test, and infrastructure are essential for the constant flow of changes through the pipeline to the customers.