
Today, I’d like to describe a useful pattern for many cloud-hosted applications: the Secure Inbox.
Problem: Organization A needs to publish a work product stored in potentially large files to a consumer in organization B. Organization A wants to maintain a high degree of control and ensure confidentiality of its data.
Operational Context: Operations in Amazon Web Services
Solution: Organization A generates the work product and stores it in a `Repository` S3 bucket in its AWS account, encrypted with a KMS encryption key owned by organization B. Organization A then copies the object from its `Repository` to a `Secure Inbox` S3 bucket in organization B’s account.
Examples:
- generate and deliver a nightly report
- transform and deliver a batch of images
- ad-hoc file deliveries between organizations
Description: The Secure Inbox pattern uses Amazon Simple Storage Service (S3) and AWS Key Management Service (KMS) to store files generated by organization A and publish those files to organization B. Organization B maintains a high degree of control over this data by providing organization A with permission to use an encryption key that it controls. Azure and GCP offer similar object storage and encryption capabilities, so the pattern applies there after adjusting for API details.
Details
First, org A creates a `Repository` bucket and IAM roles and policies that permit:
- the `generator` to write to the `Repository` bucket and encrypt objects using org B’s encryption key; notice that the `generator` role does not need to be able to decrypt data
- the `publisher` to copy objects in the `Repository` bucket to other buckets using `s3:CopyObject`
Org B creates a `Secure Inbox` bucket and KMS encryption key, configures bucket and KMS policies to permit org A to use those resources, and shares the ARNs of the bucket and key with org A.
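For concreteness, here is a minimal sketch of org B’s side using boto3. The account IDs, role names, bucket name, and key ARN are placeholders, and org B’s own key-administration statements are elided; treat it as an illustration of the grants, not a complete policy. The key policy gives the `generator` encrypt-only access and the `publisher` the decrypt/encrypt access it will need during the cross-account copy, while the bucket policy lets the `publisher` deliver objects into the `Secure Inbox`.

```python
import json
import boto3

# Placeholder identifiers -- substitute real account IDs, role names, bucket, and key.
ORG_A_GENERATOR_ROLE = "arn:aws:iam::111111111111:role/generator"
ORG_A_PUBLISHER_ROLE = "arn:aws:iam::111111111111:role/publisher"
INBOX_BUCKET = "org-b-secure-inbox"
INBOX_KEY_ARN = "arn:aws:kms:us-east-1:222222222222:key/REPLACE-ME"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        # ...org B's own key administration statements go here (elided in this sketch)...
        {
            "Sid": "OrgAGeneratorEncryptOnly",
            "Effect": "Allow",
            "Principal": {"AWS": ORG_A_GENERATOR_ROLE},
            "Action": ["kms:Encrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
        {
            "Sid": "OrgAPublisherCopy",
            "Effect": "Allow",
            "Principal": {"AWS": ORG_A_PUBLISHER_ROLE},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}

inbox_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OrgAPublisherDelivery",
            "Effect": "Allow",
            "Principal": {"AWS": ORG_A_PUBLISHER_ROLE},
            # PutObjectAcl is included because the copy sets a canned ACL (see below).
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": f"arn:aws:s3:::{INBOX_BUCKET}/*",
        }
    ],
}

kms_client = boto3.client("kms")
s3_client = boto3.client("s3")
kms_client.put_key_policy(KeyId=INBOX_KEY_ARN, PolicyName="default",
                          Policy=json.dumps(key_policy))
s3_client.put_bucket_policy(Bucket=INBOX_BUCKET, Policy=json.dumps(inbox_bucket_policy))
```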
Now org A generates data for org B.
When org A stores that data as an object in its `Repository` S3 bucket using the `s3:PutObject` API, it configures that API call to encrypt the object using org B’s key. If you’re using Python and the boto3 library, this looks like:
```python
import boto3

s3_client = boto3.client('s3')

# kms_encryption_key_id is the full ARN of org B's key, shared ahead of time.
s3_client.put_object(ACL='private',
                     ServerSideEncryption='aws:kms',
                     SSEKMSKeyId=kms_encryption_key_id,
                     Bucket=bucket_name,
                     Key=key,
                     Body=body_bytes)
```
The `generator` will need an IAM policy that permits it to write to the `Repository` bucket using `s3:PutObject`. The IAM policy must also permit the `kms:Encrypt` and `kms:GenerateDataKey` APIs so that it can use org B’s encryption key to encrypt the object. Notice that the `generator` role does not need to be able to decrypt data. This may be a very useful security property if the `generator` runs in a dangerous execution context or you are keen to compartmentalize responsibilities.
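As a rough sketch of that policy (the bucket name, key ARN, and role name are placeholders I’ve made up for illustration):

```python
import json
import boto3

REPOSITORY_BUCKET = "org-a-repository"                               # placeholder
ORG_B_KEY_ARN = "arn:aws:kms:us-east-1:222222222222:key/REPLACE-ME"  # placeholder

generator_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "WriteToRepository",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{REPOSITORY_BUCKET}/*",
        },
        {
            "Sid": "EncryptWithOrgBKey",
            "Effect": "Allow",
            # Deliberately no kms:Decrypt -- the generator can write but never read back.
            "Action": ["kms:Encrypt", "kms:GenerateDataKey"],
            "Resource": ORG_B_KEY_ARN,
        },
    ],
}

iam_client = boto3.client("iam")
iam_client.put_role_policy(
    RoleName="generator",
    PolicyName="secure-inbox-generator",
    PolicyDocument=json.dumps(generator_policy),
)
```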
Next, you’ll need to trigger the publishing process with the location of the object that was stored. This could be done via an SNS notification, an S3 event notification, or some other mechanism.
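For example, if you subscribe a Lambda function to the `Repository` bucket’s `s3:ObjectCreated:*` event notifications (that wiring is assumed here), a minimal handler sketch just pulls the bucket and key out of each record and hands them to the publisher; this version only logs the location:

```python
import json

def handler(event, context):
    """Sketch of a Lambda handler subscribed to S3 ObjectCreated event notifications."""
    for record in event.get("Records", []):
        source = {
            "Bucket": record["s3"]["bucket"]["name"],
            "Key": record["s3"]["object"]["key"],
        }
        # In a real deployment, pass `source` to the publisher step described below.
        print(json.dumps(source))
```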
When the `publisher` receives the notification that a new object was stored, it extracts the location of the newly stored object from the event and copies the object to organization B’s Secure Inbox. The `publisher` copies the object to the `Secure Inbox` bucket using the `s3:CopyObject` API action. The Python and boto3 code looks like:
```python
# `source` is a dict of {'Bucket': ..., 'Key': ...} pointing at the Repository object;
# `destination` carries the Secure Inbox bucket, key, and the full ARN of org B's KMS key.
s3_client.copy_object(CopySource=source,
                      Bucket=destination['Bucket'],
                      Key=destination['Key'],
                      MetadataDirective='REPLACE',
                      ACL='bucket-owner-full-control',
                      ServerSideEncryption='aws:kms',
                      SSEKMSKeyId=destination['SSEKMSKeyId'])
```
The `copy_object` method call is much more interesting than the put. This request copies objects across accounts, and there are a few things that may hang you up:
- `SSEKMSKeyId` must be set to the full ARN of organization B’s encryption key so that it resolves properly
- The `publisher`’s IAM role must have permission to call `kms:Decrypt` so that S3 can read the currently encrypted data, generate a new data key with `kms:GenerateDataKey*`, and finally `kms:Encrypt` the data on the `publisher`’s behalf (see the policy sketch after this list)
- Notably, the key policy for org B’s encryption key must also permit the `publisher` role to `kms:Decrypt`, `kms:GenerateDataKey*`, and `kms:Encrypt` with the key. The `publisher` role never actually sees the data in this case because the decryption and encryption process is handled by S3 as part of `s3:CopyObject`
- The S3 object’s ACL must be configured so that org B can read the data; the bucket owner gets full control of the object in this case. Perhaps counterintuitively, the `publisher`’s AWS account would own the object by default
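Putting the `publisher`’s permissions together, a minimal IAM policy sketch might look like the following; the bucket names and key ARN are placeholders, and `s3:PutObjectAcl` is included because the copy sets a canned ACL:

```python
import json
import boto3

REPOSITORY_BUCKET = "org-a-repository"                               # placeholder
INBOX_BUCKET = "org-b-secure-inbox"                                  # placeholder
ORG_B_KEY_ARN = "arn:aws:kms:us-east-1:222222222222:key/REPLACE-ME"  # placeholder

publisher_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadFromRepository",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{REPOSITORY_BUCKET}/*",
        },
        {
            "Sid": "WriteToSecureInbox",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": f"arn:aws:s3:::{INBOX_BUCKET}/*",
        },
        {
            "Sid": "UseOrgBKeyDuringCopy",
            "Effect": "Allow",
            # S3 decrypts and re-encrypts on the publisher's behalf during the copy.
            "Action": ["kms:Decrypt", "kms:GenerateDataKey*", "kms:Encrypt"],
            "Resource": ORG_B_KEY_ARN,
        },
    ],
}

iam_client = boto3.client("iam")
iam_client.put_role_policy(
    RoleName="publisher",
    PolicyName="secure-inbox-publisher",
    PolicyDocument=json.dumps(publisher_policy),
)
```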
Results
With the Secure Inbox pattern:
- Organization A can store and deliver objects of arbitrary size to partners while narrowly scoping what partner data its own teams and applications have access to
- Organization B can receive work generated by partners reliably and securely while maintaining tight control over who has access to that data via KMS encryption key policy and bucket policy
Stephen
#NoDrama