To escape The DevOps Organization Build Trap, I suggested that you minimize the distinct sets of components your organization uses to deliver and operate software. Those components represent a dependency tree, and your team integrates, enhances, and works around them to meet your organization’s requirements, on your dime.

While all delivery pipelines and deployment platforms may look the same on the surface, they differ in obvious and subtle ways that matter a lot in practice as you vary technology components.

This means that when an engineer switches from one delivery pipeline and deployment platform implementation to another, those differences may make them feel like a saltwater fish dropped into Lake Michigan (or maybe just swimming upstream).

By component, I mean the circles on this Wardley Map showing what’s required to deliver software and operate it in production:

Figure: Logical Delivery Pipeline and Operational Landscape

A common response to this problem is to hire SREs or DevOps engineers to specialize in that knowledge and be responsible for configuring or building pipelines, deployment descriptors, and more — at least until the path to production is sufficiently productized and boring (in a good way). The point is…

You should recognize this work exists and plan for the people, effort, and time to:

  • learn how those components work and build what the organization needs in order to adopt them
  • maintain the custom integrations that are now delivering value to your customers

There’s a good chance you’ll need at least 2 engineers to build and maintain each unique combination of delivery pipeline and operational platform.

Example: Serverless

Here’s a near and dear example of what I mean. I’m building a SaaS that uses:

  • Application language: Python 3
  • Application framework: Serverless Framework
  • Build tooling: Make, Python Ecosystem, Serverless Framework
  • CI/CD tooling: built and deployed via CircleCI
  • Compute API and Platform: AWS Lambda
  • Infrastructure as Code tool: CloudFormation (determined by the Serverless Framework)

Figure: A Serverless Delivery Pipeline and Operational Landscape
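
To make that stack concrete, here’s a sketch of roughly what the two configuration files at the heart of it look like. These are minimal, hypothetical examples rather than my actual project files; the service name, handler, and Make target are invented for illustration.

```yaml
# serverless.yml -- hypothetical minimal Serverless Framework config
service: my-saas-api           # hypothetical service name

provider:
  name: aws                    # the framework deploys via CloudFormation
  runtime: python3.9           # Python 3 application runtime
  region: us-east-1            # assumed region

functions:
  api:
    handler: app.handler       # hypothetical module.function entry point
    events:
      - httpApi: '*'           # catch-all HTTP route via API Gateway
```

And the CircleCI side, which just drives the same tooling:

```yaml
# .circleci/config.yml -- hypothetical pipeline that tests and deploys
version: 2.1

jobs:
  deploy:
    docker:
      - image: cimg/python:3.9-node    # Python plus Node for the serverless CLI
    steps:
      - checkout
      - run: make test                  # assumed Make target for the test suite
      - run: npm install -g serverless  # the framework itself is a Node CLI
      - run: serverless deploy          # synthesizes and applies CloudFormation

workflows:
  main:
    jobs:
      - deploy
```

The notable thing is how much of the stack hides behind that one serverless deploy step: packaging, the generated CloudFormation template, and the Lambda configuration all live inside the framework’s conventions.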

Overall, I think this setup is great. I feel very productive when developing, testing, and delivering application code.

What does this setup have in common with my expertise in Java, Maven, Docker, Terraform, Jenkins, and AWS Elastic Container Service? Very little:

  1. Concepts: the shared concepts and general mechanics of build tools, delivery pipelines, infrastructure as code, and operations
  2. Implementation: Basic Docker bits for build tooling

Since I expected a lot of learning and budgeted for implementing this new-to-me delivery process, the effort mostly classified as ‘fun.’ However, I’ve worked with a number of organizations that might overlook the effort required to support a new delivery target; in my case, that work spanned 15-20 user stories over a duration of maybe six weeks.

Having a dependency graph for application delivery and operations is useful for many what-if scenarios.

For example, let’s consider switching from the Serverless Framework to the AWS CDK or AWS SAM while continuing to run on the AWS Lambda Compute API. That change would require changes to the following components (sketched after the list):

  • Updates: Delivery Pipeline, Integration & Functional Tests, Compute Platform, Observability Tools
  • Replacement: Packaging & Deployment, Infrastructure Service Catalog
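
To give a feel for why Packaging & Deployment lands in the replacement column rather than updates, here’s a hedged sketch of the same hypothetical function expressed as an AWS SAM template. None of this is from my project; it only illustrates that the deployment descriptor gets rewritten wholesale:

```yaml
# template.yaml -- hypothetical AWS SAM equivalent of the earlier function
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31    # enables the SAM resource types

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler               # same hypothetical entry point
      Runtime: python3.9
      CodeUri: src/                      # assumed source directory
      Events:
        Api:
          Type: HttpApi                  # catch-all HTTP API route
```

The concepts map one-for-one (a function, a runtime, an HTTP trigger), but the file format, the CLI (sam build and sam deploy instead of serverless deploy), and the packaging behavior all change, so every pipeline step that invokes them changes too.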

A switch to Elastic Container Service or Elastic Kubernetes Service would invalidate nearly the entire dependency graph, because that is a change to the Compute API component.

Which brings us back to where we started: each component in your delivery value chain matters, and nearly every variation results in a unique set of delivery pipeline technologies that you need to learn, develop, and maintain.

So keep this in mind when exercising freedom to choose tools — you’ll be responsible for that choice in the future.

Stephen

#NoDrama