Technology-driven companies, regardless of size and scale, face an increasing need to ship better code faster while meeting business requirements. This requires collaboration and interaction between development teams and the operations teams that follow traditional Information Technology Infrastructure Library (ITIL) and IT Service Management (ITSM) practices for a truly agile organization to emerge.
DevOps, short for development and operations, has been around for quite some time. Large organizations such as Amazon, Twitter, and Bank of America Merrill Lynch, as well as numerous subject matter experts (SMEs), have realized the potential gains DevOps can offer. DevOps practices streamline the software delivery process, improve cycle time, and allow developers to meet user requirements quickly while delivering high-quality software. According to the “2013 State of DevOps Report” published by Puppet Labs, organizations that have implemented DevOps best practices have:
- Improved quality of software deployments
- Increased frequency of releases
- Improved visibility into process and requirements
- More agile development
- More agile change management process
- Improved quality of code
The Need for DevOps
In essence, DevOps picks up where the agile methodology and IT standards leave off. Some common best practices are build automation using Maven or Gradle, configuration management using Puppet or Chef, and version controlling infrastructure configuration in addition to application code.
Managing projects that involve numerous sub-modules, tests, and external dependencies can be a nightmare. Tools like Apache Ant solved this problem, but required developers to imperatively instruct the tool how to compile and organize a project. Although this provided flexibility, it left the build process subject to human error. Modern build automation tools rely on a more declarative style to manage dependencies. Two tools that have become quite popular in the industry are Apache Maven and Gradle. The examples in this discussion use Maven. Since Maven is a forerunner of Gradle, Gradle implements these same features and adds some desirable ones of its own.
Maven embraces convention over configuration, a simple concept: systems, libraries, and frameworks should assume reasonable defaults. Many frameworks, such as Ruby on Rails, adhere to the same principle to combat the growing complexity of project structures. These defaults can still be overridden by the user when needed. Convention over configuration allows developers to perform project management in a declarative style. What’s more, Maven is IDE-independent and can be run directly from the console.
Delegating the task of downloading external libraries is as simple as specifying dependencies and their versions. For example, if my project requires the latest scalatest.jar and scalastyle.jar, all I need to do is specify the name and version in the pom.xml file (Maven defines a “project object model,” or POM; the configuration for a particular project or sub-module is specified in its pom.xml). One might wonder how Maven knows the source of these libraries. The repositories from which Maven picks up these libraries can be specified in a settings.xml file, which makes repository management a breeze.
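A minimal sketch of such a pom.xml follows. The group IDs, artifact IDs, and version numbers here are illustrative; check your repository for the coordinates and versions your project actually needs.

```xml
<!-- pom.xml: declares the project and its test/style dependencies.
     Coordinates and versions below are examples, not recommendations. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.scalatest</groupId>
      <artifactId>scalatest_2.12</artifactId>
      <version>3.0.8</version>
      <scope>test</scope>  <!-- only needed on the test classpath -->
    </dependency>
    <dependency>
      <groupId>org.scalastyle</groupId>
      <artifactId>scalastyle_2.12</artifactId>
      <version>1.0.0</version>
    </dependency>
  </dependencies>
</project>
```

With this in place, Maven resolves and downloads the JARs on the next build; pointing resolution at an internal mirror is then a matter of a `<mirrors>` entry in settings.xml.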
Maven also allows developers to run different lifecycle phases for a project. For example, the install phase installs the package into the local repository for reuse by other projects, whereas the test phase compiles the source code and runs tests using a suitable testing framework such as JUnit.
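Invoking a phase runs every phase before it in the lifecycle, so the commands stay short. A few typical invocations, assuming a standard Maven setup:

```shell
# Compile the sources and run unit tests (everything up to the "test" phase)
mvn test

# Run all phases through "install": compile, test, package, and place the
# artifact in the local repository (by default under ~/.m2/repository)
mvn install

# Rebuild from a clean slate by running the "clean" lifecycle first
mvn clean install
```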
Ever had to manage multiple development and production boxes? What about boxes running different operating systems? These questions demonstrate a clear need for configuration management.
The ability to replicate development or production environments on different boxes automatically accelerates setup time tremendously. Differences such as operating system can easily be factored in using configuration management tools such as Puppet or Chef. These tools were built to make maintaining and configuring hundreds or thousands of servers exceedingly simple. Both allow developers to manage their infrastructure using respective DSLs (domain-specific languages built on Ruby). Automation and orchestration make infrastructure management simpler regardless of the organization’s size. Puppet is the most popular choice for configuration management, so I will briefly discuss some practices in the context of Puppet.
Puppet allows users to replicate production and development server configurations seamlessly. Puppet is great at managing package installations on different operating systems. You can even specify whether you want a specific version or the latest, and Puppet will ensure it is installed. This works well whether your environments run UNIX, Linux, or even Windows. Although there are minor differences in administering processes on Windows and POSIX-based operating systems, the thorough documentation provided by Puppet Labs is easy to refer to.
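A short manifest sketch of this idea follows. The package names and the pinned version string assume a Debian-style box and are purely illustrative; substitute your platform's names.

```puppet
# Install whatever version is available, and keep it installed.
package { 'openjdk-8-jdk':
  ensure => installed,
}

# Pin an exact version instead of 'installed' or 'latest'.
# The version string here is a placeholder for your platform's packaging.
package { 'ntp':
  ensure => '4.2.8',
}
```

Running the Puppet agent converges each box to this declared state, regardless of what was installed before.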
However, tools like Puppet and Chef can do far more than manage package installations. You can specify which services to run and declare dependencies between services, and the many available plugins let you run Maven commands as part of your configuration. Suppose you want a test environment set up with the necessary JARs; Puppet or Chef can ensure those JARs are present in your local repository.
These tools also let resources subscribe to settings and configuration files. If your httpd.conf changes, your configuration management tool, run periodically, can detect the change and restart your Apache service. It is also possible to configure a master and several agent nodes, with the agents subscribing to the setup of the master server. This means you can configure one production box and quickly set up similar boxes around the world.
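The subscription pattern can be sketched in a few lines of Puppet. The file path, module source, and the `apache2` service name assume a Debian-style layout and are placeholders for your own setup.

```puppet
# Manage the Apache config file from a module on the master.
file { '/etc/apache2/apache2.conf':
  ensure => file,
  source => 'puppet:///modules/web/apache2.conf',  # illustrative module path
}

# Keep Apache running, and restart it whenever the managed file changes.
service { 'apache2':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/apache2/apache2.conf'],
}
```

Each periodic agent run compares the file to the master's copy; a detected change triggers the service refresh automatically.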
One of the simpler DevOps practices is putting as much as possible under version control: infrastructure configuration, development bundles, and databases. This helps provide a single source of truth, one holistic unit. The practice goes hand in hand with configuration management using Puppet or Chef and with build automation using Maven. Maven fetches dependencies such as JARs from remote repositories, and these remote repositories should be carefully maintained with the different versions in use (different applications within the organization may depend on different versions). This brings us closer to the vision of a single source of truth. Version controlling Puppet and Chef setup scripts is another way functionality can be added incrementally and universally throughout the organization.
Additionally, a repository manager like Sonatype Nexus is a great way to manage internally developed libraries and services shared between development teams. Not only does it provide version control for different release candidates, but it also integrates wonderfully with Maven.
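One common way to wire the two together is a `distributionManagement` section in the pom.xml, so that `mvn deploy` publishes builds to the internal repository manager. The host name and repository paths below are placeholders for your own Nexus instance.

```xml
<!-- pom.xml fragment: publish release and snapshot builds to an internal
     repository manager. URLs and ids below are placeholders. -->
<distributionManagement>
  <repository>
    <id>internal-releases</id>
    <url>https://nexus.example.com/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>internal-snapshots</id>
    <url>https://nexus.example.com/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

Credentials for the matching `id`s go in each developer's settings.xml, keeping secrets out of the version-controlled pom.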
Remember to start with something small, like a Puppet script that ensures all required packages are installed. Create a development tools bundle and distribute it within your organization. The more closely environments match, and the faster they can be replicated, the greater the productivity gain. Once you are familiar with the practice, you can automate production configuration as well. The simple practices mentioned throughout this article can help you start streamlining the build process and centrally managing configuration across different production and development environments.