This deep-dive on Continuous Integration started by saying that a project’s CI process should build the canonical release artifacts for a project or product. These release artifacts should archive the application that was built, along with application-specific tools such as the scripts required to run it.
These artifacts will then be deployed by a continuous delivery pipeline. But before we move on to delivery, let’s check that we’ve got a complete shipment.
Based on release artifacts’ intended usage, we can extract a few requirements to evaluate before calling them ‘good’:
- Build: contains all files needed to deploy the new version of the application as a unit
- Distribute: straightforward to identify and move a release artifact throughout the delivery process
- Run: can deploy the artifact directly into the execution environment, often supplemented by a deployment descriptor that describes what resources the application needs from the platform. Deployment descriptors are essential elements of ‘the build’ and should be versioned and published along with the rest of the release.
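To make the ‘build’ and ‘distribute’ requirements concrete, here’s a minimal shell sketch of assembling a release artifact. The file names, version string, and descriptor name are all hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: bundle the application, its run script, and its deployment
# descriptor into one versioned, movable release artifact.
set -eu

VERSION="1.4.0"                  # hypothetical version from the build
STAGING="release-${VERSION}"
mkdir -p "${STAGING}"

# In a real pipeline these files come from the compile/build step;
# created here as empty placeholders so the sketch runs end to end.
touch app.jar run.sh deployment.yaml

cp app.jar run.sh "${STAGING}/"  # the application and its run script
cp deployment.yaml "${STAGING}/" # the descriptor ships with the build

# One unit that can be identified and moved through delivery
tar -czf "release-${VERSION}.tar.gz" "${STAGING}"
```

The point is that the descriptor is versioned and published inside the same unit as the application, not maintained off to the side.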
Do your release artifacts meet these criteria?
If you’re not sure, ask yourself whether a tangled mess of custom deployment and configuration scripts sits nearby. That mess may indicate that your release artifact lacks a well-defined interface that can be dropped onto a deployment platform for execution, or that the deployment descriptor is missing. If a clean abstraction and handoff doesn’t appear possible, the problem may lie in the deployment platform itself.
Release artifacts in the wild
Here’s a quick rundown of how some common compute platforms expect to consume deployment artifacts.
AWS’ Elastic Compute Cloud (EC2) is a general-purpose compute platform that uses:
- release artifact: Amazon Machine Image (AMI) with the base filesystem contents and kernel of a machine
- deployment descriptor: launch configuration
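As a sketch, a launch template (the successor to launch configurations) is the descriptor that binds the AMI to its runtime settings; the names and IDs below are hypothetical:

```json
{
  "LaunchTemplateName": "my-app",
  "LaunchTemplateData": {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t3.micro",
    "SecurityGroupIds": ["sg-0123456789abcdef0"]
  }
}
```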
All of the following platforms run processes in containers.
Kubernetes uses:
- release artifact: Docker image(s)
- deployment descriptor: Kubernetes pod spec
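For illustration, a minimal pod spec might look like the following; the image name, tag, and resource numbers are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.4.0   # the release artifact
      ports:
        - containerPort: 8080
      resources:                  # what the app needs from the platform
        requests:
          cpu: "250m"
          memory: "256Mi"
```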
Buildpack-based platforms such as Heroku use:
- release artifact: an OCI (Docker-compatible) image built from application sources using a buildpack
- deployment descriptor: Procfile
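A Procfile simply maps process types to the commands that start them; the commands below are a hypothetical sketch for a Ruby app:

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```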
AWS ECS (with the Fargate or EC2 launch types) runs processes in containers and uses:
- release artifact: Docker image
- deployment descriptor: ECS task definition
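As a sketch, a Fargate task definition pins the image and declares the resources the task needs; every name and value below is hypothetical:

```json
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "registry.example.com/my-app:1.4.0",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```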
The AWS Lambda service executes functions in response to events and uses:
- release artifact: zip file
- deployment descriptor: Lambda function configuration
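One way to express that function configuration is an AWS SAM template; the resource name, runtime, and handler below are hypothetical:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./function.zip       # the release artifact
      Handler: lambda_function.handler
      Runtime: python3.12
      MemorySize: 256
      Timeout: 10
```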
Key advice: Don’t forget the deployment descriptor when you’re automating the build of your release artifacts. Also be sure to model deployment across all your target environments.
What about the other stuff?
You may be wondering where resources that support the application, such as load balancers, queues, and data sources, should be defined. That’s an important question to answer. The short answer: define application-specific ‘stateless’ resources along with the rest of the application code to enable continuous delivery, but manage stateful resources such as databases on a separate, highly-scrutinized path. We’ll dig into that later.
I’ll wrap up with a couple of questions:
- Knowing what you now know about release artifacts and deployment descriptors, would you expect clean deployments of your application?
- If you’re thinking of improving your release artifacts, are you also considering a new deployment model, e.g. switching from host-based to container-based deployment, or from containers to functions?