Avg Reading Time: 4 minutes
Secrets, secrets, yum, yum, yum…
Secrets, secrets, give me some!
Once your application’s secrets are available in a highly-available vault near your application, accessible via a Cloud-managed identity, you can turn to the task of reading and using those secrets safely.
There are a few options for getting secrets into an application from a vault. Each option exists on a continuum of safety and convenience. The three most popular options for an application to obtain its secrets are:
- Environment Variables
- Files
- Vault API
Let’s explore how to implement each of these, along with their advantages and disadvantages.
Read from environment variables
One popular method for providing secrets to an application is via environment variables. In this approach, the application’s startup script retrieves the application’s secrets from the vault and exports them as environment variables, then starts the application.
The main benefits of this approach are that:
- the lifetime of the secret is ephemeral: the life of the process
- the process may already consume configuration via environment variables in ’12-factor style’
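Inside the application, consuming a secret this way can look like the following minimal Python sketch. `DB_PASSWORD` is a hypothetical variable name; the startup script is assumed to have fetched it from the vault and exported it before launching the app.

```python
import os

def require_secret(name: str) -> str:
    """Fetch a required secret from the process environment, failing fast
    if the startup script did not export it."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"required secret {name!r} is not set; "
            "did the startup script export it before starting the app?"
        )
    return value
```

At startup, the application would call something like `db_password = require_secret("DB_PASSWORD")`. Failing fast here is deliberate: a missing secret surfaces as a clear startup error instead of a confusing failure deep in a request handler.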
This blog post demonstrates using chamber to populate secrets for a containerized application running on AWS’ Elastic Container Service.
That said, providing secrets to apps via environment variables makes them susceptible to various forms of leakage. Diogo Monica, designer of security systems at Docker/Square/Anchorage, wrote a great explanation of ‘Why you shouldn’t use ENV variables for secret data‘. The most important and common problems with using environment variables as secret transfer mechanisms are:
1. You can’t assign access control mechanisms to an env var.
This means any process executed by the application will likely have access to those env vars. To illustrate this, think about what it might mean for an application that does image resizing via ImageMagick to execute resizing operations on untrusted input in an environment containing the parent application’s secrets. Some languages and libraries will help you prepare a safe process execution environment, but your mileage may vary.
2. Many applications will print all of their environment variables to standard out when issued a debugging command or when they crash.
This means you may expose secrets in your logs on a regular basis.
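Point 1 above can be mitigated by launching helper processes with an explicitly constructed environment rather than the inherited one. Here is a sketch of that idea in Python; the `convert` usage mentioned afterwards and the `DB_PASSWORD` variable are illustrative assumptions.

```python
import os
import subprocess
import sys

def run_with_clean_env(cmd: list[str]) -> subprocess.CompletedProcess:
    """Run a child process with a minimal environment so the parent's
    exported secrets are not inherited by the child."""
    # Pass through only what the child actually needs to run.
    clean_env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin")}
    return subprocess.run(cmd, env=clean_env, capture_output=True, text=True)
```

An image-resizing call such as `run_with_clean_env(["convert", "in.png", "-resize", "50%", "out.png"])` would then execute ImageMagick without ever seeing the parent application’s secrets.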
A safer variation of this approach is to echo the secret into the application via standard input as in this example of configuring the password of a MySQL user from an SSM parameter. Reading secrets via standard input will probably be difficult when you have more than one secret.
(Perhaps) the safest version of this approach is to use a tool such as chamber or vault to read secrets from the appropriate place and exec the application with the full set of the application’s secrets as environment variables. The main benefit of using a specialized tool to retrieve secrets and start the application is that these tools have been designed to handle secrets safely. These tools are usually distributed as static binaries to simplify integration with your deployments.
Read from a file
Another option is for the application to read its secrets from a file. This file should:
- have a narrowly-specified set of permissions that ensures only the user that the application runs as can read that file
- be written to an in-memory filesystem provisioned specifically for that process, i.e. a tmpfs mounted only into that application’s container
The main benefits of this approach are that:
- secrets are (still) only written to ephemeral storage
- filesystem access controls can limit access to the secret
- the secret files can be deleted after startup
This pattern is supported directly by Docker Swarm and Kubernetes, whose orchestrators place secrets into files under /run/secrets/ and take care of setting up the tmpfs in the application container in addition to delivering the secret. Tools like chamber can also generate files in a number of formats convenient for application consumption, but you will need to bring your own ephemeral filesystem.
Here is a (simple) example of code that reads a secret from a file on startup.
Read from API
The application could also read its secrets directly from the vault using its API. This is generally the most secure option. For example, Spring Cloud Config supports reading properties from several places, including AWS Parameter Store.
The main advantages of this approach are:
- a number of potential places to leak secrets are avoided entirely: env variables, files on disk, and accidental exposure from running a startup script with debug output enabled
- the startup script does not need to be aware of the application’s secrets, which may be an advantage, especially when the script is controlled by someone else
The greatest challenge of adopting this approach is that your application (or framework) must take responsibility for retrieving secrets from the target vault. This may be an advantage or a disadvantage depending on your point of view and on whether the application already has a dynamic configuration system. Using your vault’s API via an SDK is often straightforward, especially if you’re using a vault managed by your Cloud provider. Don’t be afraid to use it: querying the API directly can be the safest and simplest route.
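As a concrete sketch, here is how an application might fetch a secret from AWS Parameter Store using boto3’s SSM client. The client is passed in as a parameter so the function can be exercised without AWS credentials; the parameter name `/myapp/prod/db_password` is a hypothetical example.

```python
def fetch_parameter(ssm_client, name: str) -> str:
    """Fetch a decrypted SecureString parameter from AWS Parameter Store.

    `ssm_client` is expected to behave like boto3's SSM client, e.g. the
    object returned by boto3.client("ssm")."""
    response = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]
```

In production this would be called as `fetch_parameter(boto3.client("ssm"), "/myapp/prod/db_password")`, with the Cloud-managed identity mentioned earlier supplying the credentials, so no bootstrap secret is needed.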
I hope this post has helped you understand some of the options and considerations for consuming secrets in the last mile of application delivery.
Ping me if you have questions or comments!
Receive #NoDrama articles in your inbox whenever they are published. Reply to Stephen and the QualiMente team when you want to dig deeper into a topic.