The Potential—and Challenges—of Container-Based Deployment

Summary:

Containers are taking the virtualization model to greater heights by enabling a flexible way to programmatically provision the resources you need. New technology also means we need new processes and failsafes, though. Containers hold a great deal of promise, but are they really ready to be used in production environments?

Configuration management has always placed a healthy focus on automating application build, package, and deployment. New tools and procedures, starting with automated build and continuous integration servers, have resulted in new techniques that help enable rapid iterative development.

Continuous delivery and deployment are evolving practices that are leading the charge to new and better ways to approach application deployments. Configuration management tools such as Puppet and Chef are also driving the industry to new capabilities that delight customers by provisioning servers, delivering new features, and fixing defects with remarkable speed and agility. This is all great stuff, but the real revolution starts with container-based deployments.

The cloud and virtualization allowed developers to work in robust virtual machines that often resemble the target production environment, while simultaneously benefiting from the savings due to elastic resources and pay-as-you-go service providers. Containers are taking the virtualization model to far greater heights by enabling a flexible way to programmatically provision the resources you need. This is as easy as creating resource descriptor files, which specify the configuration of the container itself.
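As a concrete illustration, a Docker-style descriptor file captures the container's configuration in a single versionable file. The base image, packages, and port below are hypothetical placeholders, not a recommendation for any particular stack:

```dockerfile
# Hypothetical descriptor: declares the container's base image,
# dependencies, and runtime configuration in one file.
FROM ubuntu:22.04

# Install only what the application needs.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the application and declare how it runs.
COPY app/ /opt/app/
WORKDIR /opt/app
EXPOSE 8080
CMD ["python3", "server.py"]
```

Because the file itself is the provisioning instruction, it can be checked into version control alongside the application code.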

Package management is another strong capability within the container ecosystem. Never before have we enjoyed such flexibility: we can grab a previously created template container that provides a working virtual environment, customize it, and persist it as a new template for creating additional containers with new features. Containers hold a great deal of promise, but are they really ready to be used in production environments?
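Persisting a customized environment as a new template is typically a matter of layering on top of an existing image. In Docker terms it might look like the following sketch, where the image names and tags are hypothetical:

```dockerfile
# Start from a previously created template image (hypothetical name)
# and customize it with a new feature's dependencies.
FROM myteam/base-env:1.0

RUN pip install --no-cache-dir flask

# The result can itself be tagged and reused as a new template:
#   docker build -t myteam/base-env-flask:1.0 .
```

A `docker commit` on a running container achieves the same persistence interactively, but a descriptor file keeps the customization steps repeatable, which matters for the traceability concerns discussed below.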

Containers deliver not only the infrastructure but also the application, bringing us closer to the long-promised headless deployment. Developers can check their changes into source code management tools, run exhaustive automated tests, and then seamlessly deploy to production, realizing the true benefits of continuous delivery.
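That check-in-to-production flow can be sketched as a pipeline configuration. The following is a minimal GitHub Actions-style example; the job name, registry, image tags, and deploy command are all hypothetical:

```yaml
# Hypothetical pipeline: every checked-in change is tested,
# packaged as a container image, and deployed.
on: push

jobs:
  deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run automated tests
        run: make test
      - name: Build container image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Deploy to production
        run: ./deploy.sh registry.example.com/app:${{ github.sha }}
```

The key point is that the same image that passed the tests is the artifact that reaches production, with no manual repackaging in between.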

But containers also have their challenges, which many smart technology professionals are working to address. For one thing, containers may not be completely contained. The vision of container-based systems promises completely isolated packaging and runtime environments, ideal for scalable microservices—another hot trend, focused on providing small, completely isolated, highly scalable components. However, containers as runtime components will need to interface with other containers. Ideally this would happen through a well-defined application interface, which should limit the risk of components adversely impacting each other, but the chance is still there. Microservices themselves have not always delivered on the promise of highly scalable, self-contained components.
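To make the point concrete, a Docker Compose file can at least make each container's interface explicit. In this hypothetical composition (service names, images, and port are illustrative), the web container reaches the API container only through its one declared endpoint:

```yaml
# Hypothetical composition: web talks to api only through
# the single interface api chooses to expose.
services:
  api:
    image: myteam/orders-api:2.3
    expose:
      - "8080"        # the only contract other containers see
  web:
    image: myteam/storefront:5.1
    depends_on:
      - api
    environment:
      ORDERS_URL: http://api:8080   # reached by service name
```

Anything the api service does not expose stays private to it, but that isolation is only as strong as the discipline behind the interface.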

There is another, even more serious issue with containers, and it is largely psychological. Developers imagine themselves working in isolated sandboxes, configuring and tinkering until things work. While this might be the goal of containers, it is not always the reality. We risk losing essential repeatability and traceability in how containers are constructed. We also need to consider how they really interface with other containers. From a pragmatic viewpoint, we need to embrace the value of containers while still maintaining just enough IT controls to avoid problems in production implementation.

IT operations needs to get involved with container-based deployments from the beginning of the process, working directly with developers to adopt secure and well-formed template containers that are maintained in a secure repository.

There is a lot of complexity around containers, and we need some in-house expertise to ensure that containers are secure and free from malware. A recent study of container vulnerabilities concluded that 30 percent of publicly available containers had significant security flaws, including malware. That underscores the danger of thinking that you can put just anything into a container because it is somehow isolated.

Container-based deployments are going to happen, and this approach has a tremendous amount of potential. There is real value in being able to download a container with resources already provisioned, from both an infrastructure and an applications perspective. But we need to implement some process around how containers are created, managed, and certified to ensure that security vulnerabilities do not occur. We also need to get some structure around how containers are used.

Developers and operations need to be trained in sound software methodology around containers and collaborate on related configuration management best practices. Most of all, we need to put in the hard work to make sure containers deliver the value and capabilities they are capable of.

StickyMinds is a TechWell community.