I’m researching how engineers assess the security posture of their Cloud deployments and evaluate risk to those deployments so they can improve that posture.

Reading Time: 10 minutes

The research starts with these questions:

  1. What’s the hardest part about assessing and improving the security of your Cloud deployments?
  2. When was the last time you tried or wanted to assess your Cloud deployment security or reduce its risk position?
  3. How did that go? What problems did you encounter?

I promised to share some answers to these questions, along with ideas about the fundamental challenges of assessing cloud security posture with current tools.

This is intended as a constructive analysis of the problems users of these tools experience and what they say they need. These assessment tools are very useful, and the user community and I are thankful that they exist and for the effort that goes into them. Here we go…

NoDrama DevOps

No responses to those questions from the NoDrama mailing list yet. That’s interesting data that could be interpreted in at least a few ways, but I’ll refrain from too much speculation for now. I’d (still) love to learn your answers to the questions above.

What users say in public

My team surveyed GitHub issues and Google search results for problems the user community is experiencing with Scout Suite and CloudMapper and for the requests they are making to improve those tools.

The problems and feature requests that jumped out to us in that analysis were:

Pace – Cloud Providers Grow Quickly

Merely scanning the open issue lists reveals an endless stream of requests to support new provider features, new regions, and new threats. Building and maintaining a general purpose security assessment tool for one Cloud takes significant resources. Supporting multiple Clouds clearly takes much more than that. If you’re considering supporting multiple cloud providers, I suggest reviewing the issue lists for tools that support cross-cutting concerns like Security to get a sense of what it takes. There’s good data there.

Context – Integration with AWS Console

Some people want reports to link to or integrate with the AWS Console (Scout Suite issues SS16 and SS25). While Scout Suite now includes a copy of most of the relevant information about each resource, there are situations where you’re missing context, or you’re wondering whether the resource is still configured that way and want to go look.

Socializing Findings – Export & Publish Findings

There is a request to export Scout Suite findings to additional file formats (SS438).

To me, this issue may address a problem around socializing security findings safely; unfortunately, it lacks context, so I’ll speculate a bit. You may need to share a ‘final’ report with your team, manager, or auditor. That use case is difficult to support when the information is served from a local, interactive website. Of course, implementing ‘export’ features is generally difficult for complex systems.

There are issues filed to publish findings to AWS Security Hub for both Scout Suite (SS92) and CloudMapper (CM238). I find it interesting that the CloudMapper lead (Scott Piper) entered the issue to publish findings to Security Hub, but then closed it back in Feb 2019 because he wasn’t sure about the value, since you couldn’t configure alerts on those findings at the time.

To me, exporting to Security Hub addresses the need to easily present findings in an existing, access-controlled user interface. I can see this being especially useful for “low-touch” automated analysis processes.
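
For a sense of what that would involve mechanically, here is a heavily simplified, hypothetical example of pushing a single finding into Security Hub with the AWS CLI. None of the values below come from Scout Suite or CloudMapper; a real integration would map each finding into the full AWS Security Finding Format.

# Hypothetical example only: import one hand-written finding into Security Hub.
# The account id, region, and finding fields are placeholders.
aws securityhub batch-import-findings --findings '[{
  "SchemaVersion": "2018-10-08",
  "Id": "example-finding-001",
  "ProductArn": "arn:aws:securityhub:us-east-1:111122223333:product/111122223333/default",
  "GeneratorId": "example-s3-bucket-public-check",
  "AwsAccountId": "111122223333",
  "Types": ["Software and Configuration Checks"],
  "CreatedAt": "2020-01-20T00:00:00Z",
  "UpdatedAt": "2020-01-20T00:00:00Z",
  "Severity": {"Normalized": 70},
  "Title": "S3 bucket allows public read access",
  "Description": "Example finding exported from a local assessment report.",
  "Resources": [{"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::example-bucket"}]
}]'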

Scalability

Users have requested help scaling up their assessment processes. Examples include assessing all AWS accounts in an Organization (SS249) and a release dedicated to ‘very large’ clouds (SS 6.x.0).
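
To make the scale problem concrete, here is a minimal sketch of what ‘all AWS accounts in an Organization’ implies when done by hand. run-assessment.sh is a placeholder for however you invoke your tool against a single account.

# Sketch: enumerate the Organization's member accounts (requires Organizations
# read access from the management account) and assess each one in turn.
# run-assessment.sh is hypothetical; substitute your own single-account invocation.
for account_id in $(aws organizations list-accounts \
    --query 'Accounts[].Id' --output text); do
  echo "assessing account ${account_id}"
  ./run-assessment.sh "${account_id}"
done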

This should give you some useful context on the kinds of problems active users of these tools have. However, this analysis suffers from a bit of survivorship bias.

What does getting started with one of these tools look like? Are we losing non-experts or those without a lot of time to get going?

Personal Experience with Scout Suite

Yesterday, I assessed the security of a couple of AWS accounts using the latest version of my favorite tool in this space, Scout Suite. I thought an hour would be sufficient to run the tool and generate the reports for subsequent analysis since I’d already done this several times before.

Here’s a summary of the hardest parts of that process divided into setup, execution, and analysis.

Setup

Installing the latest version of Scout Suite held some unexpected challenges. The wiki suggests installation should be trivial.

Not for me. [1]

The installation process failed in a clean virtual environment using Python 3.6 on OS X (brew):

ERROR: google-auth 1.10.1 has requirement setuptools>=40.3.0, but you'll have setuptools 28.8.0 which is incompatible.
ERROR: google-api-core 1.16.0 has requirement setuptools>=34.0.0, but you'll have setuptools 28.8.0 which is incompatible.

It blew up when installing the Google APIs (which I wouldn’t be using). This sort of error is a classic Python ecosystem dependency management problem, even with virtual environments. Solving it could take an hour, a day, or more, while risking changes to my main Python development toolchain. So I looked for a way to run the tool using Docker, with a virtual machine as my next choice (see [2] for an update).

I could not find up-to-date official Docker images, but discovered a brief description of how to build and run a Docker image on the wiki. The Docker path led to success.

I built a Docker image for Scout Suite with the provided Dockerfile and ran it via make. The container run command provides credentials to the container via options in the AWS_OPTS variable:

# run Scout Suite using the image built from git commit e784fc27
.PHONY: scout-assessment
scout-assessment:
	docker container run --rm -it \
	$(AWS_OPTS) \
	-v "$(PWD)/results:/opt/scoutsuite-report" \
	scoutsuite:e784fc27 \
	aws
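
The AWS_OPTS variable isn’t defined in that snippet, so for completeness here is roughly how I build the image and invoke the target. This assumes temporary credentials are already exported in my shell (for example by aws-vault) and simply passes the standard AWS environment variables through to the container.

# Build the image from the repo's Dockerfile, tagged with the git commit.
docker image build -t scoutsuite:e784fc27 .

# Run the assessment, passing host credentials through to the container.
make scout-assessment \
  AWS_OPTS="-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_DEFAULT_REGION"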

Getting to this point took about 45 minutes of active debugging with ‘should be trivial’ knocking around my head.

Execution

Setting up an IAM user or role to perform an audit is a prerequisite, and it was completed previously in my case.

However, getting the credentials to perform the audit can be tricky, and arguably should be: the auditor role has wide permissions to read at least metadata from the account.
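
For reference, setting up such a role might look roughly like the sketch below. The role name, trusted account id, and the choice of the AWS-managed SecurityAudit policy are illustrative assumptions, not the tools’ documented requirements.

# Hypothetical setup: create an auditor role that a security/tooling account
# (111122223333 here) may assume with MFA, then grant broad read access to
# configuration metadata via the AWS-managed SecurityAudit policy.
aws iam create-role \
  --role-name SecurityAuditor \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": "sts:AssumeRole",
      "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
    }]
  }'

aws iam attach-role-policy \
  --role-name SecurityAuditor \
  --policy-arn arn:aws:iam::aws:policy/SecurityAudit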

How-to: Use aws-vault to assume a role in an AWS account will help with the mechanics of executing commands with the right credentials.

However, the tricky bit is understanding the auditor role’s Trust policy and your possible chains of access so that you can specify the right parameters to the assume-role command.

One of the accounts involved here is very locked down, so figuring out the proper incantation took 30 minutes, even with legitimate access already granted.
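
To make that ‘incantation’ concrete, here is an illustrative sequence under assumed names: a SecurityAuditor role in the target account and a target-audit profile in ~/.aws/config that chains to it from my base profile. Your trust policy, MFA, and external-id requirements will change the details.

# Inspect the auditor role's trust policy to see who may assume it and
# under what conditions (MFA, external id, etc.).
aws iam get-role --role-name SecurityAuditor \
  --query 'Role.AssumeRolePolicyDocument'

# Run the assessment with credentials for that role; aws-vault assumes the
# role defined by the (assumed) target-audit profile and exports the
# standard AWS environment variables to the command it wraps.
aws-vault exec target-audit -- make scout-assessment \
  AWS_OPTS="-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN"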

Finally, executing the assessment process took a few minutes per account and succeeded without incident. Woohoo!

Analysis

Now I’m at the magic moment where I actually get to review and analyze the results. Most of the time I allocated yesterday for this activity was consumed by the previous steps. Still, I started by reviewing the findings labelled Danger for the IAM, Audit, Data, and EC2 services.

The findings are ok, but I’d say they’re missing Context. Context about:

  • Use Case: the type of activities this account is used for, e.g. sandbox, application delivery, shared services, security
  • Environment: the stage(s) of delivery supported by this account, e.g. dev, test, prod
  • Data Classification: the intended classification of the data’s confidentiality, integrity, and availability – should the S3 bucket be public? Maybe it’s a public website; maybe it’s your credit applications.

The tool can’t provide this context on its own. But if users could provide it, the analyzer could use that context to characterize findings and help you focus.
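
One way to start closing that gap today, without waiting for tool support, is to record the context on the resources themselves. The sketch below uses resource tags with made-up keys; neither tool consumes these tags, so treat it purely as an illustration.

# Illustration only: record intended use as tags so anyone reviewing a
# finding can see the context without hunting for it. The tag keys
# (UseCase, Environment, DataClassification) are my own invention.
aws s3api put-bucket-tagging \
  --bucket example-credit-applications \
  --tagging '{"TagSet": [
    {"Key": "UseCase", "Value": "application-delivery"},
    {"Key": "Environment", "Value": "prod"},
    {"Key": "DataClassification", "Value": "confidential"}
  ]}'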

Some fresh impressions:

First, some of the explanations are very technical. For example, if you’re examining the S3 bucket analysis, you had better know what Bucket ACLs, Bucket Policies, and IAM policies are.

Second, access is often reported in a kind of binary fashion that triggers many deep investigations. For example, if an IAM role has any access to an S3 bucket, then you need to go look at the policies to determine whether the principal can read data, write data, delete data, administer the bucket, etc. This consumes a lot of time and energy, and it may only amount to a first pass if you also need to account for Service Control Policies and KMS key policies.
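
When I do need to answer the ‘what kind of access, exactly?’ question, the digging looks something like the commands below; the bucket and role names are placeholders, and per the caveat above this is still only a first pass if SCPs or KMS key policies also apply.

# Placeholders throughout. Pull the bucket's resource-based policy and ACL...
aws s3api get-bucket-policy --bucket example-bucket --query Policy --output text
aws s3api get-bucket-acl --bucket example-bucket

# ...then ask IAM what a specific principal's identity policies allow
# against the bucket and its objects.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::111122223333:role/app-worker \
  --action-names s3:GetObject s3:PutObject s3:DeleteObject s3:PutBucketPolicy \
  --resource-arns arn:aws:s3:::example-bucket arn:aws:s3:::example-bucket/*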

Integration

Once we get past setup, execution, and interpretation of the first assessment, I’d like security and risk assessments to be done frequently, possibly after every deployment. The idea is to detect changes in the deployment’s risk close to when they occur.

CloudMapper supports this via its Continuous Auditing feature, which ‘runs CloudMapper’s collection and audit capabilities nightly, across multiple accounts, sending any audit findings to a Slack channel and keeping a copy of the collected metadata in an S3 bucket.’
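
If you’re rolling your own instead of using CloudMapper, a rough equivalent can be wired together by hand. This is a sketch, not CloudMapper’s actual implementation; the schedule, bucket, and profile names are placeholders.

# nightly-scout.sh (sketch): run from cron or a CI scheduler, e.g.
#   0 2 * * *  /usr/local/bin/nightly-scout.sh
set -euo pipefail

# Collect and assess with the Docker-based setup described above.
aws-vault exec target-audit -- make scout-assessment \
  AWS_OPTS="-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN"

# Keep a dated copy of the report for comparison between runs.
aws s3 cp --recursive \
  ./results "s3://example-assessment-reports/scoutsuite/$(date +%F)/"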

Hopefully that gives you a better idea of what it might be like to assess the security of your deployment for the first time.

Takeaways

I’m sharing this research to help you use these tools more effectively and to manage expectations for you and your team. My takeaways from this research are:

  • if you’re looking to adopt one of these tools, I suggest allocating a few days to select a tool whose reports look understandable and actionable, then set it up, execute it, and analyze the results
  • some of the findings categorized at the highest level of risk will be understandable and maybe actionable by everyone with basic cloud skills, but much of the information requires deeper security expertise; Building Security Skills for an AWS Cloud Migration should help AWS users
  • the risk assessment process leans heavily on human expertise right now, which I attribute to a gap in context; this is a fundamental challenge with many deployments and it limits the effectiveness of our tools
  • scalability and frequency of the risk assessment process may be constrained in practice by the reliance on human expertise
    • I think CloudMapper provides at least a partial solution by sending alerts to Slack for review and enabling you to mute those via configuration: a feedback loop

I would love to discuss any or all of this with you. I’m sure there are problems I’ve missed, solutions I’ve overlooked, and parts of this analysis that are incorrect. Reply!

Stephen

#NoDrama

[1] Even for someone who develops security tools for AWS in Python 3, is familiar with common Python dependency management problems, and knows when and how to solve those dependency problems in Docker instead.

[2] I resolved this issue today with pip3 install --upgrade setuptools in the venv and installation succeeded. I’ll try to work out a PR upstream.
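
For anyone who hits the same wall, a minimal sketch of that workaround, assuming Python 3 with the standard venv module and the scoutsuite package on PyPI:

# Workaround from [2]: upgrade setuptools inside the fresh virtual
# environment before installing Scout Suite.
python3 -m venv scoutsuite-venv
source scoutsuite-venv/bin/activate
pip3 install --upgrade setuptools
pip3 install scoutsuite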