
The Heartbleed vulnerability in OpenSSL was publicized five years ago this Sunday (April 7, 2014). Heartbleed allowed anyone on the Internet to read the memory of systems running any of two years’ worth of vulnerable versions of the OpenSSL library. This impacted web servers such as Apache and NGINX as well as many dedicated load balancer products, including HAProxy and commercial offerings. Because OpenSSL handles secrets such as private keys and session data, attackers were able to retrieve those secrets fairly easily.
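Mechanically, the bug was a missing bounds check in OpenSSL’s TLS Heartbeat handler: the code trusted the attacker-supplied payload length and echoed back that many bytes, leaking whatever happened to sit in adjacent memory. Here is a simplified C sketch of the flaw and its fix; this is not the actual OpenSSL code, and the function names are illustrative:

```c
#include <string.h>
#include <stddef.h>

/* Vulnerable shape: copy however many bytes the peer *claims* to have
 * sent, with no check against what was actually received. If
 * claimed_len exceeds the real payload, the reply includes adjacent
 * heap memory. */
size_t heartbeat_vulnerable(const char *payload, size_t claimed_len,
                            char *reply) {
    memcpy(reply, payload, claimed_len);   /* no bounds check */
    return claimed_len;
}

/* Patched shape: refuse any request whose claimed length exceeds the
 * bytes actually present in the record (the real fix also silently
 * discarded such records). */
size_t heartbeat_fixed(const char *payload, size_t claimed_len,
                       size_t actual_len, char *reply) {
    if (claimed_len > actual_len)
        return 0;                          /* drop the bogus request */
    memcpy(reply, payload, claimed_len);
    return claimed_len;
}
```

The attack was simply a heartbeat request with a tiny real payload and a large claimed length; the fix is the one `if` statement.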

At the time I was an application and platform architect working to improve the speed and security with which my organization delivered application services to customers. My first thought was:

Brace for impact.

I knew:

  1. How serious this vulnerability was: a huge portion of the TLS/SSL certificates responsible for keeping communications confidential between our customers and services were at risk of being stolen
  2. How long it would take to update the networking and application infrastructure I knew about, which covered 50% of the estate at most. Many of the components were managed only via GUIs (I knew because I had been trying to integrate with them via APIs), which would make delivery process changes moot.
  3. That there were many components I didn’t know about.

Honestly, I felt a bit helpless. I supported the security, release, and operations teams in surveying the damage, but I couldn’t do much more than that. The delivery processes for our services were what they were, and that wasn’t going to change during this incident.

Heartbleed hammered home for me that every team needs to be able to go fast, safely, with the capability to release at will, possibly Right Now. This accounts for a good chunk of my motivation to help teams adopt continuous delivery and standardized application delivery systems like containers.

The story played out differently at AWS…

How AWS handled Heartbleed

Colm MacCárthaigh, the Principal Engineer leading the AWS Elastic Load Balancer team at the time, recounted on Twitter over the weekend how AWS responded to Heartbleed. The most amazing part to me:

Within about an hour, deployments with the hot patch were in progress, and it went out quicker than I’ve seen anything. Within a matter of hours, AWS was 100% patched. Even 5 years ago, this was millions of deployments. Amazingly, there were no reports of customer impact either.

The superpowers AWS leveraged that day were the abilities to:

  • Recognize the severity of the problem and marshal the organization to respond proportionately
  • Update millions of customer and internal network connection endpoints in hours — without customer impact.

It turns out that because AWS paused all other deployments in order to roll out the Heartbleed fixes, their deployment system actually serviced fewer deployments than on a normal day, according to Mark Mansour, who was running the deployment systems during the incident.

The ‘Normal’ path should be Quick

The super-critical security fixes were delivered via the normal delivery path.

This is a great example of why investing in robust delivery processes to lower the minimum time for an idea to go from concept to customer is not at odds with security, ‘stability’, and customer happiness. In fact, a robust delivery pipeline is a mechanism by which you can achieve all of these.

I’ll part with a couple of questions:

  • How long would it take for you to deliver a critical security update or even ‘simply’ rotate a leaked secret?
  • Is it faster than 5 years ago?