Today, we’ll explore what Continuous Integration means for a containerized application and how to do it using Docker.

Avg Reading Time: 5 minutes

Docker images make good release artifacts because the image format was specifically designed as a portable application package. The Docker image format stores:

  • data including application files and enablers necessary to run an application
  • metadata such as the application command to execute, environment variables, and ports an application uses
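Those two categories map directly onto Dockerfile instructions. Here is a sketch for a hypothetical Node.js service (the base image, file names, and port are illustrative, not from the post):

```dockerfile
# Data: application files and the enablers needed to run them
FROM node:10-alpine              # base image supplies the Node.js runtime
WORKDIR /app
COPY package.json server.js ./   # application files

# Metadata: how to run the application
ENV NODE_ENV=production          # environment variable baked into the image
EXPOSE 8080                      # port the application uses
CMD ["node", "server.js"]        # command to execute
```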

Docker images can be built and distributed efficiently. Image consumers can download images from publishers with confidence that they were delivered with integrity. Further, Docker images are easy to run on any host with Docker installed. I will share more on that at a later time. For now, let’s assume you want to use Docker images as a release artifact.

How can a Continuous Integration process be updated to produce them?

Again, our goal is to provide rapid feedback on changes to the development team.

The general CI process for traditional application code is:

  1. merge changes to the main source branch from a short-lived task branch
  2. compile or transpile code to releasable form
  3. run unit tests
  4. package into a releasable artifact
  5. publish artifact to an artifact repository

Steps 1 through 3 do not change. Changes start with step 4, Package Release Artifacts. Let’s explore packaging and testing a release as a Docker image.

Package and Test Release Artifacts as a Docker Image

The Docker image building process is driven by automation that builds an artifact which is easy to inspect. This means we can build and test the Docker image prior to publishing it.

Instructions for packaging an application in a Docker image are usually specified in a Dockerfile, which is itself a software artifact. Dockerfiles provide a simple, shell-script-like DSL for automating the build of a Docker image. The Dockerfile will contain instructions that assemble the release artifact. Usually this consists of RUN instructions to install application enablers and COPY instructions to put the application artifacts built in step 2 into the image.

The terraform-infra-dev image used in the previous post on CI for infrastructure code is built from a Dockerfile that:

  1. starts from an image that already contains ruby and serverspec
  2. installs general utilities such as git, python, and awscli
  3. copies over a helper utility for testing terraform (this could be your application code)
  4. retrieves and installs specific versions of terraform, tflint, and terraform-docs (these could be your application enablers)
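A sketch of what a Dockerfile following those four steps might look like (the image name, package list, helper path, and version are illustrative, not the actual terraform-infra-dev Dockerfile):

```dockerfile
# 1. Start from an image that already contains ruby and serverspec
FROM example/ruby-serverspec:latest

# 2. Install general utilities
RUN apt-get update && \
    apt-get install -y git python python-pip && \
    pip install awscli

# 3. Copy over a helper utility for testing terraform
COPY test-terraform.sh /usr/local/bin/test-terraform

# 4. Retrieve and install a specific version of terraform
#    (tflint and terraform-docs would follow the same pattern)
ARG TERRAFORM_VERSION=0.11.13
RUN curl -sSLo /tmp/terraform.zip \
      https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    unzip /tmp/terraform.zip -d /usr/local/bin && \
    rm /tmp/terraform.zip
```

Pinning the version with a build argument keeps the image reproducible and makes upgrades an explicit, reviewable change.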

The CI process for this image is automated with CircleCI (config.yml). Here is the key command from the ‘Build Docker image’ step that uses metadata provided by the version control and CI system (CircleCI) to build the Docker image:

docker image build \
  --cache-from=qualimente/terraform-infra-dev:latest \
  -t qualimente/terraform-infra-dev:latest \
  -t qualimente/terraform-infra-dev:${BUILD_ID} \
  .

The BUILD_ID image tag uniquely identifies this image and is defined by the build’s time in UTC plus the short form of the git commit hash that is being built, e.g. 20190305-1644-6e30f9b. This BUILD_ID will be attached to the image as metadata and can be used in test and deployment steps.
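One way to compute such a BUILD_ID in the build environment (a sketch; the actual CircleCI config may derive it differently) is:

```shell
# Sketch: derive BUILD_ID from the UTC build time plus the short git commit
# hash, e.g. 20190305-1644-6e30f9b. Falls back to "unknown" outside a git
# checkout; CI systems like CircleCI also expose the commit via environment
# variables such as CIRCLE_SHA1.
COMMIT="$(git rev-parse --short HEAD 2>/dev/null || echo unknown)"
BUILD_ID="$(date -u +%Y%m%d-%H%M)-${COMMIT}"
echo "${BUILD_ID}"
```

Exporting BUILD_ID once and reusing it in the build, test, and push steps keeps every stage of the pipeline pointing at the same artifact.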

Even though the details of building this image were glossed over a bit, it’s easy to see there’s a fair amount going into this image. To build confidence that this image will work properly, the CI process can test it.

Let’s start by verifying that application enablers and files have been installed properly. One tool you can use for this is Container Structure Test, which can “check the output of commands in an image, as well as verify metadata and contents of the filesystem.”

The ‘Test Docker image’ step uses the container structure test tool to verify that key Ruby libraries, terraform, and bundle are installed properly (structure-tests.yaml). Since the tool actually executes commands inside a container created from the image and inspects their output, we can be confident that:

  • the tools exist and are executable with the image’s PATH settings
  • they have particular versions
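A structure-tests.yaml covering checks like these might look as follows (the test names, version strings, path, and permissions are illustrative, not the actual file):

```yaml
schemaVersion: 2.0.0

commandTests:
  - name: "terraform is installed at the expected version"
    command: "terraform"
    args: ["version"]
    expectedOutput: ["Terraform v0\\.11\\..*"]
  - name: "bundle is executable"
    command: "bundle"
    args: ["--version"]
    expectedOutput: ["Bundler version .*"]

fileExistenceTests:
  - name: "terraform binary location and permissions"
    path: "/usr/local/bin/terraform"
    shouldExist: true
    permissions: "-rwxr-xr-x"
```

Because commandTests run inside a container created from the image, they exercise the image’s real PATH and permissions rather than just inspecting the filesystem from outside.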

This detects several common file-installation problems that don’t show up until runtime, such as files being in the wrong location or having the wrong permissions or ownership.

Now that the release artifact has been built and had some (basic) testing, it’s ready for publishing.

Publishing a Release Artifact as a Docker Image

Publishing images is built into Docker and is done using the docker image push <image name> command. For example, a recent build of terraform-infra-dev was published to Docker Hub with:

docker image push qualimente/terraform-infra-dev:20190305-1644-6e30f9b

terraform-infra-dev is hosted on Docker Hub, which is the default artifact repository for Docker images. Artifact repositories are called registries in Docker terminology. There are many third-party options for publishing Docker images, ranging from hosted solutions to commercial artifact repositories such as Artifactory or Nexus that you run yourself.

If you wanted to publish this image to an internal artifact repository instead, you would incorporate your internal registry’s hostname into the image tag in the build step with something like:

docker image build -t registry.example.com/qualimente/terraform-infra-dev:${BUILD_ID} .

Then the publish step would result in a push command like:

docker image push registry.example.com/qualimente/terraform-infra-dev:${BUILD_ID}

(Here registry.example.com stands in for your registry’s hostname.)

Now the updated software is available for further testing or use. The image was tagged with the BUILD_ID and this is a convenient identifier to propagate to delivery pipelines.

Because of Docker’s efficient implementation of image layering and distribution, building and publishing an image usually takes a few minutes. This is heavily dependent on how much you put into the image, but at least that is under your control. Quick build and publish times improve Docker images’ suitability for inclusion in a CI pipeline and adoption as a release artifact.


In this post, we’ve explored how to create a continuous integration process that packages custom software into a Docker image, tests it, and publishes it as a releasable artifact. This image can be fed into delivery pipelines that manage the deployment, testing, and promotion of this software through environments.

I’d love to learn where and how you are building Docker images in your application delivery processes. What’s working well for you? What’s painful? What do you wish you could do?



p.s. If you’d like to go deeper into this topic, I wrote a whole ‘Image Pipelines’ chapter for Docker in Action, 2ed