Avg Reading Time: 4 minutes
Say your applications running in the Cloud each have an identity managed by the Cloud provider, avoiding the First Secret Problem. Those applications can access that Cloud’s managed services securely.
Those Cloud applications may (ahem, will) still need to access other services in another security domain, e.g.:
- applications running in a datacenter managed by another provider, e.g. your ‘classic’ datacenter
- a SaaS application that implements an application function such as machine learning or an ‘infrastructure’ concern like logging or monitoring
This cross-security domain access situation is the default during a migration to the Cloud and even after. Unless you migrate to an ecosystem that is authenticated entirely with a single Cloud provider’s IAM, you’re going to have ‘third-party’ secrets in the long run.
And that means you need to manage those secrets and deliver the right secret to the application at the right time. This is quite similar to non-sensitive environment-specific configuration, so it would be nice if we could integrate this into the application’s automated delivery process.
Let’s explore what it might look like to do that in a safe and highly available way for an application delivered via automation.
Logically, we can start by adding a ‘Distribute Secrets’ step to the application’s delivery process (circled in orange in the figure). First, the delivery process distributes (publishes) generic and environment-specific artifacts like virtual machine or Docker images to the Cloud provider. Then it distributes secrets by copying them from the system of record into the Cloud’s application runtime environment. Finally, it updates the application’s deployment descriptor to use the new artifacts and materializes the changes in the application.
Distributing secrets from within an automated delivery pipeline
Let’s explore the key requirements and features of the ‘Distribute Secrets’ step in a common scenario where secrets are stored in secure vault and those secrets need to be available to the application at runtime.
The source vault might be implemented by a:
- hardware security module (HSM) such as a SafeNet Luna or nCipher (formerly Thales) device
- software secret store such as Hashicorp Vault or an existing CMDB
- file encrypted using `openssl` and stored in the application’s source repository
You might even have multiple source vaults that form the systems of record for an application’s secrets.
An automated delivery process needs to know which secrets an application needs and the system of record for each one. For example:
- The Datadog and SumoLogic API keys are stored in an encrypted YAML file inside the application source repository
- The TLS client certificate for `App A` to authenticate to `App B` is stored in Thycotic Secret Server under a named key
The secret delivery process also needs to know which target vaults the secrets should be distributed to and the names those secrets will have there.
Some target vaults might be:
- another instance of your source vault that doesn’t support replication, populated via a ‘copy-on-deploy’ delivery model for organizing secrets
- AWS SSM Parameter Store
- an Object Store: AWS S3, GCP Cloud Storage, Azure Blob Storage
- Kubernetes Secrets
Suppose you have settled on using AWS SSM Parameter Store to store secrets as `SecureString`s. You need to define:
- names of the keys that an application will read from Parameter Store when it starts up
- the AWS region those secrets will be deployed to
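To make this concrete, here is a minimal sketch of how an application might read its `SecureString` secrets at startup. The parameter names, path prefix, and `FakeSsmClient` are illustrative assumptions; a real application would pass in a boto3 SSM client (`boto3.client("ssm", region_name="us-west-2")`), whose `get_parameter(Name=..., WithDecryption=True)` call the stub mimics.

```python
from typing import Dict

# Keys this application expects, scoped under a per-app path (assumed convention)
SECRET_KEYS = ["datadog-api-key", "client-cert-AppB"]
PARAM_PREFIX = "/prod/AppA/"   # stage + application namespace (hypothetical)

def load_secrets(ssm_client) -> Dict[str, str]:
    """Read each expected SecureString from Parameter Store at startup."""
    secrets = {}
    for key in SECRET_KEYS:
        resp = ssm_client.get_parameter(
            Name=PARAM_PREFIX + key, WithDecryption=True)
        secrets[key] = resp["Parameter"]["Value"]
    return secrets

class FakeSsmClient:
    """Stand-in for boto3's SSM client, for local experimentation."""
    def __init__(self, store):
        self.store = store

    def get_parameter(self, Name, WithDecryption=False):
        # Mirrors the shape of boto3's get_parameter response
        return {"Parameter": {"Name": Name, "Value": self.store[Name]}}

if __name__ == "__main__":
    fake = FakeSsmClient({
        "/prod/AppA/datadog-api-key": "dd-secret",
        "/prod/AppA/client-cert-AppB": "---BEGIN CERT---",
    })
    print(load_secrets(fake))
```

Injecting the client this way also keeps the application testable without AWS credentials.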
The Secret Manifest in the delivery process diagram contains this mapping of secrets from source to target vault. AppA’s manifest could contain entries that look like:
```yaml
# the Secrets Manifest contains a mapping of secrets
# from source to runtime target in each delivery stage
stage:
  prod:
    - name: "client-cert-AppB"
      source:
        vault: "luna-safenet-prod"
        keyring: "AppA/client-cert-AppB"
      target:
        vault: "ssm-us-west-2"
```
Now, we can build a `secret-mgr` command-line tool that:
- reads the mapping of an application’s secrets from source to target systems
- retrieves secrets from source vaults using credentials provided by the software delivery system, e.g. Jenkins/GitLab/Bamboo
- records which secrets it is about to distribute in an audit log
- synchronizes secrets to the target vault(s) using credentials provided by the software delivery system
With this general flow, you can orchestrate the delivery of secrets within the application’s (automated) delivery lifecycle.
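The flow above can be sketched in a few lines of Python. The vault names come from the example manifest; the `DictVault` class and `distribute_secrets` function are my assumptions about how such a tool might be structured, not `secret-mgr`’s actual implementation — real vault adapters would wrap an HSM, Hashicorp Vault, SSM, and so on.

```python
import time

# Parsed Secrets Manifest for the 'prod' stage (a dict, as if loaded from YAML)
MANIFEST = [
    {"name": "client-cert-AppB",
     "source": {"vault": "luna-safenet-prod",
                "keyring": "AppA/client-cert-AppB"},
     "target": {"vault": "ssm-us-west-2"}},
]

class DictVault:
    """Toy vault backed by a dict; stands in for a real source/target vault."""
    def __init__(self, data=None):
        self.data = dict(data or {})

    def read(self, key):
        return self.data[key]

    def write(self, key, value):
        self.data[key] = value

def distribute_secrets(manifest, sources, targets, audit_log):
    """Copy each manifest entry from its source vault to its target vault."""
    for entry in manifest:
        value = sources[entry["source"]["vault"]].read(entry["source"]["keyring"])
        # Record intent before writing; log names only, never secret values
        audit_log.append({"secret": entry["name"],
                          "target": entry["target"]["vault"],
                          "at": time.time()})
        targets[entry["target"]["vault"]].write(entry["name"], value)

if __name__ == "__main__":
    sources = {"luna-safenet-prod": DictVault({"AppA/client-cert-AppB": "---CERT---"})}
    targets = {"ssm-us-west-2": DictVault()}
    audit = []
    distribute_secrets(MANIFEST, sources, targets, audit)
    print([rec["secret"] for rec in audit])
```

Note the audit record is appended before the write, so a failed write still leaves a trace of what the tool attempted.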
Of course, there are still complications such as:
- the CI/CD system needs to (safely) provide credentials to read secrets from source vaults and write them to target vaults
- secrets must be organized carefully in the Target vault(s) to avoid collisions between applications and even different versions of the same application
- automating access control policies for applications to read their secrets from the target vault
- how to detect when secrets are changed outside of this delivery process
- how to clean up unused secrets
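On the collision concern above, one option is a hierarchical naming convention in the target vault that scopes every secret by stage, application, and version. This is a sketch of one possible convention (my assumption, not a standard), using SSM-style `/`-separated parameter paths:

```python
def parameter_name(stage: str, app: str, version: str, secret: str) -> str:
    """Build a hierarchical, collision-resistant parameter name.

    Scoping by stage, app, and version keeps different applications --
    and different versions of the same application -- from overwriting
    each other's secrets in a shared target vault.
    """
    for part in (stage, app, version, secret):
        if not part or "/" in part:
            raise ValueError(f"invalid path segment: {part!r}")
    return f"/{stage}/{app}/{version}/{secret}"

print(parameter_name("prod", "AppA", "1.4.2", "client-cert-AppB"))
# -> /prod/AppA/1.4.2/client-cert-AppB
```

A path scheme like this also makes cleanup of unused secrets easier, since everything under a retired version’s prefix can be deleted together.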
But, overall, I think this approach is tractable. A lot of teams have implemented some portion of this with custom tooling or adopted a tool like Segment’s Chamber to implement a portion of it.
What do you think?
Do you have problems delivering secrets from a source vault to applications in their runtime environment via an automated pipeline?
Do you want a tool to help with this problem?
I’d love to discuss this problem with you!
I’m in the early stages of building a tool to distribute secrets from within continuous delivery pipelines. Support for multiple source vaults, particularly HSMs, is planned. The `secret-mgr` tool at the center of the delivery process will be released as open source once ready.
In particular, I’d like to learn:
- What have I missed that’s painful in your environment?
- Which secret vaults are most important for you, both source and target?
- Your audit and governance concerns, like knowing which secrets are in use and who/what set the value of a secret in the target vault
Hit reply and let’s chat.
KEEP IN TOUCH
Receive #NoDrama articles in your inbox whenever they are published. Reply to Stephen and the QualiMente team when you want to dig deeper into a topic.