
An application container is an isolated, resource-limited application execution context, provided by the host OS, that has been unpacked and configured from an image.

That’s a highly technical definition that I’ll break down shortly, but I’d like to make a point first.

If you want people to adopt a practice, commoditize it.

— me, The Future of Information Security is Simpler (I Hope)

That’s what Docker did for “application containers.”

Docker made application containers easy (or easier) to use by providing a vastly simpler, opinionated wrapper around existing Linux distribution, process control, security, and resource management technologies.

Docker’s container technology delivered that simplicity by innovating in two key areas:

  1. application packaging
  2. process isolation and management

Docker helps you package and run applications inside neat little boxes, keeping the hosting computer nice and tidy.

Keep Your Computers Tidy with Containers

Docker solves problems like:

  • missing or incorrect application dependencies such as libraries, interpreters, code/binaries, and users; Example: running a Python or Java application with the right interpreter/VM, or a ‘legacy’ third-party application that relies on an old glibc
  • conflicts between programs running on the same computer such as library dependencies or ports; Example: multiple ruby programs fighting over gems or trying to use port 80
  • limiting the amount of resources such as cpu and memory an application can use; Example: containing a runaway program that consumes all the memory on the machine every once in a while (this is not fine)
  • missing, complicated, or immature scripts to install, start, stop, and uninstall an application; Example: I mean… when was the last time you were handed ‘good’ service management scripts? SystemD?

Let’s see how Docker delivers on its promise to “Build, Ship, and Run” applications easily.

Packaging applications into images

First, Docker helps you package applications and their dependencies into portable application images that are straightforward to distribute to artifact repositories and then onto container hosts that will run them.

Docker images are usually built by giving the docker build command instructions in the form of a Dockerfile. Here’s the Dockerfile for the rando-doggos application:

# our base image
FROM alpine:3.8
# Install python and pip
RUN apk add --update py2-pip
COPY requirements.txt /usr/src/app/ 
RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt
# copy files required for the app to run
COPY app.py /usr/src/app/
COPY templates/index.html /usr/src/app/templates/
# document the port the application listens on
EXPOSE 5000
# run the application
CMD ["python", "/usr/src/app/app.py"]

Dockerfiles let you specify the ‘data’ and ‘metadata’ necessary to run the application.

The data is a filesystem purpose-built for the application. This filesystem is constructed incrementally from files that have been copied into the image or created by running commands via Dockerfile instructions during the image build process. Each Dockerfile instruction adds a layer to that filesystem, and Docker caches these layers so unchanged build steps can be reused.
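
If you want to see those layers, docker image history lists each layer of an image along with the instruction that created it:

# inspect the layers of the image, newest first
docker image history qualimente/rando-doggos:2018-03-20-1030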

The neat part about this is you can put only the files needed by the application into its image: libraries, executables, config.

So if you’re running a Python application, you can include just the interpreter you’ve qualified the application for, e.g. Python 2.7.16. You do not need to worry about other Python apps that might run on that host, let alone ones written in Java or Ruby.

You’ll get just (and only) the filesystem you build up using your Dockerfile.

Docker images also allow you to define metadata that helps operators run the application, including:

  • a default command Docker should run to start the application, e.g. /start.sh
  • ports, environment variables, and the working directory the application expects to use
  • directories (volumes) the application expects files to be mounted into
  • labels that describe what is inside the image using key-value pairs
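
As a sketch, here’s how that metadata maps onto Dockerfile instructions; the values below are illustrative, not taken from the rando-doggos Dockerfile:

# default command Docker should run to start the application
CMD ["/start.sh"]
# port, environment variable, and working directory the application expects
EXPOSE 5000
ENV LOG_LEVEL=info
WORKDIR /usr/src/app
# directory (volume) the application expects files to be mounted into
VOLUME /usr/src/app/data
# key-value labels describing what is inside the image
LABEL maintainer="you@example.com"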

Each image is uniquely-identifiable, taggable, and distributable.

Running applications in isolation

Second, Docker’s container engine and command line tool make it simple to retrieve application images and start isolated instances of each application process.

Let’s step through how Docker turns an image into a container:

Running a Container from an Image

When you execute docker container run, the Docker container engine:

  1. pulls the application image from the artifact repository
  2. unpacks the image’s filesystem into a local ‘golden’ copy
  3. uses the OS kernel to create an isolated, resource-limited execution context
  4. runs the specified program inside that context
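
For example, running a single instance of the rando-doggos application is one command; Docker pulls the image automatically if it isn’t already available locally:

# pull (if necessary), unpack, and run the application image
docker container run --rm qualimente/rando-doggos:2018-03-20-1030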

Step 3 is where the real magic happens. The isolated, limited execution context includes a dedicated filesystem, process namespace, and network namespace. What follows is default Docker behavior, which can be modified in many ways but provides a nice set of safe defaults.

The filesystem is copied from the one baked into the image, and only processes inside the container can see it. If the application writes to the filesystem, those changes are present only on that container’s filesystem, not on the host or in any other container.
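
You can see this isolation for yourself with a stock image; here I’m using alpine:3.8 as a stand-in:

# write a file inside one container...
docker container run --rm alpine:3.8 touch /tmp/hello
# ...then list /tmp in a brand new container: the file is gone, because
# each container starts from its own copy of the image filesystem
docker container run --rm alpine:3.8 ls /tmp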

The container has its own process namespace where only processes that were started inside the container are visible to each other. An application (or an attacker who has gained access to your application) cannot ‘see’ processes running outside the container. A listing of all processes will only show what’s running inside the container. For example, running ps -ef inside a qualimente/rando-doggos:2018-03-20-1030 container would reveal:

PID   USER     TIME  COMMAND
    1 root      0:00 python /usr/src/app/app.py
    7 root      0:00 ps -ef

i.e. not much

The container also gets its own network adapter, so the application can listen on whichever ports it wants, and the operator running the application can choose which ports, if any, are routed from the host’s network adapter to the container. So if I wanted to run multiple instances of the rando-doggos web application, which listens on port 5000, on the same host, I could map network traffic from the host to each container like so:

  • rando-doggos-1: route traffic from host port 5001 to container port 5000
  • rando-doggos-2: route traffic from host port 5002 to container port 5000
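
With docker container run, each of those mappings is a single -p host-port:container-port flag:

# two instances of the same application on one host, no port conflicts
docker container run -d --name rando-doggos-1 -p 5001:5000 qualimente/rando-doggos:2018-03-20-1030
docker container run -d --name rando-doggos-2 -p 5002:5000 qualimente/rando-doggos:2018-03-20-1030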

When you run a container, you can also set limits on which compute resources (memory, CPU, IO, devices) that container may use. Further, by default Docker drops a number of privileges most containerized applications don’t need, and it makes it easy to run a program as a non-privileged user. You get all this and (much) more without being a Linux Systems Security Wizard. The capability has been commoditized and is easily accessible to you now.
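Here’s a sketch of what that looks like on the command line; the specific limits and the nobody user are illustrative choices, not requirements of the rando-doggos image:

# cap memory and cpu, run as a non-root user, and drop Linux capabilities
docker container run -d \
    --memory 256m \
    --cpus 1.5 \
    --user nobody \
    --cap-drop ALL \
    -p 5001:5000 \
    qualimente/rando-doggos:2018-03-20-1030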

That’s a ‘quick’ intro into how application containers can help you solve many common packaging, distribution, and operational problems.

I’ll demonstrate and explain some Docker features you can use to improve the security of containerized applications in a future post.

For now, feel free to hit me up if you have a question about containers. Perhaps you’re wondering if a certain use case is appropriate? If “it’s possible” to containerize a particular app? Try me 😉

Stephen

#NoDrama

p.s. if you want to dig deeper into this topic, check out Docker in Action, 2nd edition, which I co-authored with Jeff Nickoloff