Software companies with disciplined agile and robust release management practices have a significant competitive advantage. To realize this advantage, an organization must first optimize its release management process and identify the most appropriate platform and release management tools. In this article, Surinderpal Kumar explains three major software release management trends every software development organization can benefit from.
Due to increasing market pressure, companies are striving to release their software ahead of their competitors. Frequent releases allow teams to validate features earlier and detect bugs sooner. Additionally, smaller iteration cycles provide flexibility, making adjustments for unforeseen scope changes less of a burden. As a result, software companies with disciplined agile and robust release management practices have a significant competitive advantage.
Implementing such agile practices, however, is easier said than done, as building software grows ever more complex due to factors such as accumulating legacy code, staffing changes, globally distributed development teams, and a growing number of supported platforms. In this article, I will explain three major release management trends every software development organization can benefit from.
The first trend is to adopt agile practices at the core, including automation. Simply holding daily standup meetings will not be enough. Yes, sprint and iteration planning, as well as reviews and retrospectives, are all essential elements of successful releases. However, to deliver substantial and meaningful stories or features within the time boundaries of your sprints, companies need to invest in automation.
The lack of automation places an unnecessary burden on your development team and invites human error into your environment, adversely affecting team velocity and productivity. Proper automation establishes a process that is consistent and repeatable, thereby reducing errors and release times. Agile practices, be they Scrum, XP, or lean, all strongly recommend build automation, test automation, deployment automation, and feedback (or review) automation. Essentially, automation relieves software teams of the overhead of time-consuming manual configuration management tasks and brings the focus back to the code and core product development.
For example, as a developer commits changes to version control, those changes are automatically integrated with the rest of the modules. Integrated assemblies are then automatically deployed to a test environment. If there are changes to the platform or the environment, the environment is automatically built and deployed to the test bed. Next, build acceptance tests are automatically kicked off, which can include capacity, performance, and reliability tests. Developers and/or leads are notified only when something fails, so the focus remains on core development rather than on overhead activities. Of course, there will be some manual checkpoints that the release management team must pass to trigger the next phase, but each activity within this deployment pipeline can be more or less automated.
As your software passes all quality checkpoints, product version releases are automatically pushed to the release repository, from which new versions can be pulled automatically by systems or downloaded by customers. A range of technologies is available in each of these areas, including build automation (e.g., Ant, Maven, and Make), continuous integration (e.g., Jenkins, CruiseControl, and Bamboo), test automation (e.g., Silk Test, eggPlant, TestComplete, and Coded UI), and continuous deployment (e.g., Bamboo, Prism, and Jenkins). Implementing these best practices will require strategic planning and an investment of time in the early phases of your project, but it will reduce the organizational and change management effort needed later on.
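The deployment pipeline just described can be sketched in a few lines of code. Everything below is illustrative: the stage names, the `notify()` hook, and the stub stage bodies are placeholders for what a real CI tool (Jenkins, Bamboo, etc.) would execute, and the point is simply the control flow: run each stage in order, and interrupt people only when something fails.

```python
# Illustrative sketch of an automated deployment pipeline (not tied to any
# real CI tool). Each stage runs in order; the team is notified only on
# failure, so developers stay focused on core development.

def notify(stage, error):
    # Placeholder: a real pipeline would email or message the team here.
    print(f"FAILED at stage '{stage}': {error}")

def run_pipeline(stages):
    """Run (name, step) pairs in order; stop and notify on first failure."""
    for name, step in stages:
        try:
            step()
        except Exception as exc:
            notify(name, exc)
            return False  # pipeline halts; humans are alerted
    return True  # all checkpoints passed; release can be pushed

# Stub stages standing in for real build/deploy/test tool invocations.
stages = [
    ("integrate", lambda: None),        # merge committed changes
    ("deploy-to-test", lambda: None),   # push build to the test environment
    ("acceptance-tests", lambda: None), # capacity/performance/reliability
]

print(run_pipeline(stages))  # → True
```

In practice each lambda would shell out to a build tool or test runner, but the notify-on-failure structure is the part that keeps overhead off the development team.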
The second trend is the use of virtualization and cloud platforms as development and test environments. Today, most software products are built to support multiple platforms, be it operating systems, application servers, databases, or Internet browsers. Software development teams need to test their products in all of these environments in-house prior to releasing them to the market.
However, this presents the challenge of creating all of these environments as well as maintaining them. These challenges increase in complexity as development and test teams become more geographically distributed. In these circumstances, the use of virtualization and cloud platforms can be handy, and these platforms have recently been widely adopted in the industry.
In some companies, release management teams have started to bring deployment automation to virtualized and cloud platforms. In this case, just as we maintain code and configuration version history for our products, we also maintain version history for all of their supported platforms.
For example, we previously described the process of a developer checking in code and the product build automatically being run and deployed to the necessary test environments. Similarly, if a build and release engineer changes configurations for the target platform, such as the operating system, database, or application server settings, the whole platform can be built, a snapshot of it created, and that snapshot deployed to the relevant target platforms. In other words, if you are using in-house VMware virtualization, a virtual machine (VM) is automatically provisioned from the snapshot of the base operating system VM, the appropriate configurations are applied, and the rest of the platform and application components are deployed automatically.
The same goes for cloud-based environments. If you are using Rackspace as your Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) provider, then as new configurations are introduced, a new Rackspace instance is procured, instantiated, and configured as a development and test environment. This is crucial for productivity; without automation, adapting to configuration changes could take weeks. With it, the process becomes quick and repeatable, and it removes the communication hassles across different teams within the software development center.
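The core idea here, treating platform configuration like code under version control, can be made concrete with a small sketch. The `PlatformHistory` class and `provision()` function below are hypothetical; a real implementation would call a VMware or cloud provider API rather than return a string, but the shape is the same: commit each configuration change as a new revision, and always provision environments from the latest snapshot.

```python
# Hypothetical sketch of versioning platform configuration the way code is
# versioned. Nothing here calls a real VMware or Rackspace API; provision()
# merely records what a real provisioner would do.

class PlatformHistory:
    """Keeps every revision of a platform's configuration, like a VCS."""

    def __init__(self):
        self.revisions = []  # list of (version, config) tuples

    def commit(self, config):
        version = len(self.revisions) + 1
        self.revisions.append((version, dict(config)))
        return version

    def latest(self):
        return self.revisions[-1]

def provision(history):
    """Stand up a test environment from the newest platform snapshot."""
    version, config = history.latest()
    # A real implementation would clone a base-OS VM snapshot here and
    # apply `config` (OS, database, app-server settings) on top of it.
    return f"test-env built from platform v{version} ({config['os']})"

history = PlatformHistory()
history.commit({"os": "Ubuntu 20.04", "db": "PostgreSQL 13"})
history.commit({"os": "Ubuntu 22.04", "db": "PostgreSQL 15"})
print(provision(history))  # → test-env built from platform v2 (Ubuntu 22.04)
```

Because every revision is retained, an older environment can be rebuilt on demand, which is exactly what makes configuration changes repeatable instead of a weeks-long manual effort.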
The third trend is the use of distributed version control systems. Distributed version control systems (DVCS), such as Git, Mercurial, and Bazaar, are becoming more popular due to the flexibility they provide for teams to collaborate at the code level. The fundamental design principle behind DVCS is that each user keeps a self-contained repository, with complete version history, on his or her local computer. There is no need for a privileged master repository, although most teams designate one as a best practice. DVCS allow developers to work offline and commit changes locally.
As developers complete their changes for an assigned story or feature set, they push those changes to the central repository as a release candidate. DVCS offer a fundamentally new way to collaborate: developers can commit their changes frequently without disrupting the main codebase or trunk. This becomes useful when teams are exploring new ideas or experimenting.
DVCS also come in handy for software teams that use a feature-based branching strategy. This encourages development teams to keep working on their features in branches and, once their changes have been fully tested locally, load them into the next release train. In this scenario, developers work on their feature branches and merge them into a local copy of the repository.
Only after standard reviews and quality checks are the changes merged into the main repository. If your software product development involves multiple project teams, distributed teams, or feature-based development, DVCS should certainly be taken into consideration.
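To make the feature-branch workflow concrete, here is a toy model of it (not a real VCS, and deliberately much simpler than Git): every clone carries the full history and works offline, a feature branch is committed to and merged locally, and only the merged result is pushed to the designated central repository.

```python
# Toy model of the DVCS workflow described above -- not real Git, just the
# idea that each clone holds complete history and merges happen locally
# before anything is pushed upstream.
import copy

class Repo:
    def __init__(self):
        self.branches = {"main": []}  # branch name -> list of commits

    def clone(self):
        # Every clone is self-contained, with the complete history.
        return copy.deepcopy(self)

    def commit(self, branch, message):
        # A new branch starts from the current state of main.
        self.branches.setdefault(branch, list(self.branches["main"]))
        self.branches[branch].append(message)

    def merge(self, source, target="main"):
        # Simplified merge: append commits the target does not have yet.
        for c in self.branches[source]:
            if c not in self.branches[target]:
                self.branches[target].append(c)

def push(local, central, branch="main"):
    # Only the reviewed, locally merged branch reaches the central repo.
    central.branches[branch] = list(local.branches[branch])

central = Repo()
central.branches["main"] = ["initial release"]

local = central.clone()                      # full history, works offline
local.commit("feature/login", "add login form")
local.commit("feature/login", "add login tests")
local.merge("feature/login")                 # merge locally after testing
push(local, central)                         # release candidate goes upstream

print(central.branches["main"])
# → ['initial release', 'add login form', 'add login tests']
```

The central repository never sees the half-finished feature branch; it receives only the locally tested, merged result, which is precisely why this model suits experimentation and release trains.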
Adopting these three major software release management trends makes for a great start for software teams struggling with release management issues!