When you adopt Continuous Delivery, you’ll probably be deploying and testing your software about 10x more often than in your old delivery model. Today, we’ll discuss how to verify a fast-moving stream of changes and confront the realities of bottlenecks.

Verifying the completeness and correctness of a build can entail many things.

Let’s start by answering a critical question: Can we release this build?

Determine if the build is acceptable

Each delivery process needs an acceptance test suite that is capable of determining whether a given release candidate may be deployed to customers (without a bunch of meetings). If you’ve already delivered software to your customers, then you already have an acceptance test suite, though it may be ill-defined and messy. You will need to clarify and automate as much of this as possible. Codifying acceptance brings precision to your delivery’s acceptance criteria in addition to speed.
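To make "codifying acceptance" concrete, here is a minimal sketch of an acceptance gate as code. The check functions are hypothetical placeholders for whatever your suite actually verifies; the point is that the release decision reduces to one boolean answer instead of a bunch of meetings.

```python
# A sketch of codified acceptance. Each check function is a
# hypothetical stand-in for a real test suite; each returns True
# only if its portion of the acceptance criteria passes.

def functional_checks(build):   # hypothetical placeholder
    return True

def security_checks(build):     # hypothetical placeholder
    return True

def performance_checks(build):  # hypothetical placeholder
    return True

def can_release(build):
    """A build is releasable only if every acceptance check passes."""
    checks = [functional_checks, security_checks, performance_checks]
    return all(check(build) for check in checks)

print(can_release("1.4.2-rc1"))
```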

The names for different kinds of tests in the tech world are imprecise, so I’ll define what I mean by acceptance test suite.

The acceptance test suite is a collection of tests (or test suites) that verifies the system behaves well enough that customers and the business will accept and use it safely.

Hopefully they’ll love it, but the bar usually isn’t set that high at the start.

The acceptance test suite focuses on externally visible properties such as:

  • functionality
  • usability
  • performance
  • security

Sometimes I think of the acceptance test suite as the “voice of the customer.”

The diagram organizes verification processes into two major groups: automated and manual. There is a secondary level of organization, too. Three of the most common types of testing are represented in the diagram as horizontal layers: Functional, User Experience, and Security.

Your organization may currently use none, some, or all of the test processes that are depicted. These test processes may be spread and/or repeated across multiple stages of delivery with names like dev, test, uat, perf, stage, prod.

Assess your current verification process

A lot of organizations have a serious bottleneck in the verification phase of their delivery processes. It’s common for teams to not see the bottleneck, or to feel like they can’t do anything about it. However, to succeed with continuous delivery, you’ll want to make all of this work visible and measure roughly how long it takes to complete each step. That visibility should help get changes rolling again.
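A simple way to start making the work visible is to write down each verification step and a rough duration, then sort. The step names and hours below are illustrative assumptions, not measurements from a real pipeline:

```python
# A sketch of making verification work visible: record roughly how
# long each step takes, then sort to surface the bottleneck.
# All names and durations here are illustrative.

step_hours = {
    "unit tests (automated)": 0.25,
    "deploy to dev (automated)": 0.5,
    "manual functional test pass": 72.0,   # three working days
    "load & performance (automated)": 1.0,
    "security scan (automated)": 1.0,
    "manual UAT sign-off": 16.0,
}

# Print steps slowest-first; the top entries are your bottlenecks.
for step, hours in sorted(step_hours.items(), key=lambda kv: -kv[1]):
    print(f"{hours:6.2f}h  {step}")
```

Even this crude table usually makes the conversation about where to invest in automation much easier to have.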

Consider a delivery process that transits four environments, dev, perf, stage, and prod:

I don’t think this is exactly like any delivery process I’ve ever seen, but it sure looks like a lot of them at the start of an improvement effort.

For this exercise, assume the full manual functional test suite takes three days to execute, which is on the low end of what I have observed for real systems. Assume all the automated steps take one hour or less. Let’s ask and answer some questions about this process:

How many of the current steps add little or no value (no matter how nice the people doing it are)?

I think all the steps in this process add some value, though of course your mileage will vary.

One potential problem here is that the ‘UAT’ steps occur late in the process, far from when functional tests have passed in dev. I would double-check that whatever assurance is being performed in ‘UAT’ is included in the regular functional test suite. This will help avoid rejecting the build at a late stage. It also highlights an important set of tests that should be automated so they can run frequently.

How many manual test processes gating release are executed before all the automated ones have passed?

Two. The automated Load and Performance and Security tests execute after a lot of manual functional testing has been done. This is bad because:

  1. since the performance and security tests may discard a release candidate, days of manual effort from an already-bottlenecked team will be thrown away from time to time
  2. the manual verification takes three days, which takes longer and costs more than running the automated performance or security tests, so a higher-cost test is being run ahead of a lower-cost test
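The cost of the ordering problem is easy to estimate with back-of-the-envelope math. Suppose the automated gates reject some fraction of release candidates; running them after the manual pass throws that manual work away at the same rate. The reject rate and hours below are assumptions for illustration:

```python
# Illustrative math for the test-ordering problem. Assumes (made up
# for this sketch) that automated gates reject 10% of candidates.

manual_hours = 24.0      # three working days of manual testing
automated_hours = 1.0    # the automated performance/security gates
reject_rate = 0.10       # fraction of candidates the automated gates fail

# Manual first: the full manual effort is wasted whenever the
# automated gates later reject the candidate.
wasted_manual_first = reject_rate * manual_hours

# Automated first: only the cheap automated hour is ever at risk.
wasted_automated_first = reject_rate * automated_hours

print(f"expected waste, manual first:    {wasted_manual_first:.1f}h per candidate")
print(f"expected waste, automated first: {wasted_automated_first:.1f}h per candidate")
```

Whatever your actual numbers are, the cheap tests should gate the expensive ones, not the other way around.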

Solution: Run automated tests before manual tests.

The ideal process depicted at the beginning implies running automated tests in parallel and then proceeding to manual tests. Let the delay in feedback to the delivery team be your guide when deciding whether to parallelize a given test step. Serial is fine if steps happen quickly and are not performed by a bottleneck.
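Running the automated gates in parallel before any manual step begins can be sketched with the standard library. The gate functions here are hypothetical placeholders for your real suites:

```python
# A sketch of parallel automated gates ahead of manual verification.
# Gate functions are hypothetical placeholders for real test suites.

from concurrent.futures import ThreadPoolExecutor

def functional_suite(build):     # hypothetical placeholder
    return True

def load_and_performance(build): # hypothetical placeholder
    return True

def security_scan(build):        # hypothetical placeholder
    return True

def automated_gates_pass(build):
    """Run all automated gates concurrently; proceed only if all pass."""
    gates = [functional_suite, load_and_performance, security_scan]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda gate: gate(build), gates))
    return all(results)

if automated_gates_pass("1.4.2-rc1"):
    print("proceed to manual verification")
```

In a real pipeline, your CI/CD tool's fan-out/fan-in stages play the role of the thread pool here.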

What is the fewest number of environments you could deploy into and still run all of your tests?

The example process tests security and performance in separate environments from dev. Is there a good reason for this (any more)?

Continuous Delivery’s promotion of a single set of release artifacts through environments eliminates the classic reason of ‘the code is different’ in each environment. So the security and performance characteristics of the code won’t be changing by environment.

Maybe we could simplify to dev, stage, and prod environments with the security and performance testing moving into dev.

It’s true that you may have environment specific configuration artifacts that accompany a generic release artifact. Would it be possible to use the ‘performant’ and ‘secure’ configuration in the development environment, at least as a default?
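One way to keep a single generic artifact while defaulting dev to the ‘performant’ and ‘secure’ settings is to layer small per-environment overrides on top of a hardened base configuration. The keys below are hypothetical, just to show the shape:

```python
# A sketch of one generic release artifact plus per-environment
# configuration overrides. All keys/values are hypothetical.

BASE_CONFIG = {
    "tls": "required",     # secure by default, in every environment
    "cache": "enabled",    # performant by default, in every environment
    "log_level": "info",
}

ENV_OVERRIDES = {
    "dev":   {"log_level": "debug"},  # tweak, don't weaken, the defaults
    "stage": {},
    "prod":  {},
}

def config_for(env):
    """Same artifact everywhere; only small overrides vary by environment."""
    return {**BASE_CONFIG, **ENV_OVERRIDES[env]}
```

Because dev inherits the secure and performant defaults, testing there tells you more about how the build will behave in prod.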


In order to provide high throughput of changes to customers and quick feedback to the delivery team, you will need to automate the bulk of whatever constitutes your acceptance test suite.

Use automation to verify that core functional and security-related functionality is working with both positive and negative use cases. That is, don’t just test that a user can sign in; also verify that they can’t sign in with an incorrect password.
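Here is a minimal sketch of what positive and negative cases look like for a hypothetical sign-in function (the function and credential store are stand-ins, not a real implementation):

```python
# Positive and negative test cases for a hypothetical sign-in
# function. The toy credential store stands in for a real system.

USERS = {"pat": "correct-horse"}

def sign_in(user, password):
    """Returns True only for a known user with the right password."""
    return USERS.get(user) == password

# Positive case: valid credentials succeed.
assert sign_in("pat", "correct-horse")

# Negative cases: wrong password and unknown user are both rejected.
assert not sign_in("pat", "wrong-password")
assert not sign_in("mallory", "correct-horse")

print("sign-in checks passed")
```

The negative cases are the ones teams most often skip, and they are exactly where security regressions hide.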

If you don’t have any functional test automation right now, you can start with a functional test suite that builds upon the Sanity Test suite used in the deployment process. Verify the ten next-most important things your customers care about and increase coverage over time.

The really big changes are:

  • test things in the right place and avoid wasting people’s time
  • make a long-term commitment to shift the team’s investment in functional testing from manual to automated verification


Every organization is likely to have different answers to these questions and your approach to delivery will evolve over time.

If you are thinking of adopting continuous delivery, now is the time to evaluate the effectiveness of your verification process and identify opportunities for improvement in safety, efficiency, and throughput.

It is very important to develop an acceptance test suite that helps ensure your customers will be pleased by speedier releases. Simplify and automate verification where doing so improves safety and accelerates delivery. Do not eliminate checks that provide valuable additional safety merely because they are manual.

Over time, this should have a few effects:

  • quicker verification and more predictable verification times
  • lower cost to verify a release
  • people will be freed to focus on what only people can do