Docker promises to help you Build, Ship, and Run applications. In this post I will use the lens of Cynefin to describe why Docker was such a breakthrough for those building, shipping, and running applications. The previous post introduced using Cynefin to understand systems and make good decisions.

Situational Awareness: Building, Shipping, and Running Applications

What kind of situation or system do most organizations build, ship, and run applications in? In many organizations, separate people or teams perform each of these activities. Software engineers develop an application and use a continuous integration server to create application artifacts such as a jar file or npm package. A build or release engineering team may store these application artifacts in an artifact repository and make them available for deployment. Then an operations team retrieves the packages and installs them. When an operations engineer tries to install the application on Linux (or Windows), they will have many questions such as:

  1. What packages does this application require? Java? NodeJS? Which version?
  2. What application and system configurations do we need to make for each deployment environment?
  3. What memory and CPU resources does the application need?

Through the lens of Cynefin, packaging a new application for distribution by an organization staffed by specialists is a Complex situation. The typical application delivery process creates a complex set of interdependencies between people with specialized application, release, and operational skills and the organization’s existing processes. Each new application dependency may result in downstream organizational dependencies whose resolution is full of “unknown unknowns.” The deceptively simple task of getting an application deployed can be driven into Chaos as the Complex situation reveals an unpredictable mix of incompatible schedules, missing skills, and mismatched tools.

There are various tools and patterns organizations can use to address these questions and problems. Installation requirements can be described in a README or a configuration management system like Puppet or Chef. A “full-stack” team with representation from each of these roles might be able to ask and answer each of these questions in a few hours or days, improving on a weeks-long ping pong of Q&A between separate Development, Release, and Operations teams or departments.
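For a concrete sense of what that tailoring looks like, here is a sketch of the manual installation steps an operations engineer might follow for a hypothetical Java service (the application name, paths, and versions are invented for illustration):

    # Hypothetical manual install, repeated with variations for every new application
    sudo apt-get install -y openjdk-11-jre-headless
    sudo mkdir -p /opt/billing /etc/billing
    sudo cp billing-service-1.4.2.jar /opt/billing/
    sudo cp config/prod/application.properties /etc/billing/
    # Which JVM memory settings? Which port? Time to ask the development team...
    java -Xmx512m -jar /opt/billing/billing-service-1.4.2.jar

Every step encodes an answer to one of the questions above, and every answer has to be rediscovered, documented, and kept up to date for each application and each environment.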

But isn’t it kind of silly that we still have to create bespoke processes, or at least tailor existing ones, for building, distributing, and running each new application?

How Docker Reduces Complexity

Docker addresses these problems by providing standard, robust functions for packaging, distributing, and operating applications that organizations can adopt to deliver software. These standard building blocks allow you to decouple applications from infrastructure and, more importantly, from specialist roles within your organization.

With Docker, software authors can package their applications into a Docker image. A Docker image includes all of the application’s dependencies, such as libraries, and describes system dependencies like ports. Docker uses the data and metadata included in an image to unpack and run the application baked into it. The Docker image is a standard format that was designed to be easy to handle. Using Docker, it is straightforward to create application image build pipelines that fit your organization and make those artifacts available for deployment (note: I describe Image Build Pipelines in depth in Chapter 10 of Docker in Action, 2ed).
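As a minimal sketch, an image definition for the same hypothetical Java service might look like the Dockerfile below (the base image tag, file names, and port are assumptions for illustration):

    # Dockerfile for a hypothetical Java service
    FROM eclipse-temurin:11-jre
    # The runtime and the application artifact are baked into the image
    COPY target/billing-service-1.4.2.jar /app/app.jar
    # Document the port the service listens on
    EXPOSE 8080
    CMD ["java", "-Xmx512m", "-jar", "/app/app.jar"]

A build pipeline can then produce and publish the image with ordinary docker build and docker push commands, and anyone with access to the registry can deploy it without asking which runtime the application needs.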

Docker uses application ‘containers’ to make running arbitrary applications on Docker-enabled infrastructure easy. With Docker, you can deploy an application onto a Docker host using docker run. Docker will retrieve the image and run that application process in an isolated execution context with its own filesystem, ports, and a limited amount of memory and CPU resources (along with many other options). Containers are created using security and resource management features built into the Linux or Windows kernels that are (very) complicated to use directly. Containers make running different kinds of applications on the same host practical, even easy, because containerized applications are isolated from each other and their resource consumption is constrained. Again, Docker transforms what is usually a Complex operational problem into one that is Obvious (or maybe Complicated).
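For example, assuming the image sketched above has been pushed to a registry your hosts can reach (the registry name is hypothetical), a single command starts the application in an isolated, resource-constrained container:

    # Pull the image if needed and run it with capped memory and CPU
    docker run --detach \
      --name billing-service \
      --publish 8080:8080 \
      --memory 512m \
      --cpus 1.5 \
      registry.example.com/billing-service:1.4.2

The --memory and --cpus flags constrain the container’s resource consumption, and --publish maps the container’s port onto the host, so several unrelated applications can share the same machine without interfering with each other.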

This approach was pioneered at tech companies like Google, Yahoo, and Heroku that deploy thousands of applications. Docker emerged from the core technology of a company called dotCloud, which was a Platform-as-a-Service like Heroku or CloudFoundry. Docker’s approach to packaging and running applications is generally simpler, more robust, and even more flexible than the custom implementations built by those organizations.

So Docker is a set of technologies you can use to simplify your application delivery and operational processes. The value you capture from the technology comes from how you adopt Docker to transform Complex situations into Obvious or Complicated ones. Keep this at the forefront of your mind when adopting containerized application delivery processes, and feel free to reach out with any questions.