
How to secure AWS S3 leaky buckets

(Image credit: Gil C / Shutterstock)

Welcome to 2017, the year in which almost every American has been affected by dozens of high-profile data breaches and leakage events. While the Equifax breach commanded the bulk of people's attention, several other hacks and data leakage incidents occurred out of the spotlight that could have been easily prevented. Specifically, incidents in which data was downloaded by unauthorised individuals by way of misconfigured AWS S3 buckets. Surprisingly, large corporations were affected by the AWS S3 data leakage, many of which have otherwise mature information security programmes.

For those not familiar with AWS S3, this is Amazon Web Services' Simple Storage Service (S3). S3 allows anyone with an AWS account to store data, share files or host simple websites. S3 became popular for a number of reasons, ranging from its low cost (as low as $0.023 per GB on standard storage) to its high durability (99.999999999%) to its user-configurable lifecycle management features. However, with these benefits come several drawbacks, one of which is that companies adopting it may not have taken the time to properly assess and integrate security best practices in the cloud.

How does it work?

Setting up an AWS S3 bucket is extremely simple. Here are the steps:

●        Log in to the AWS Console, navigate to S3, and click "Create Bucket" (Screenshot 1)
●        Give the bucket a name and select a region (Screenshot 2)
●        Set the bucket properties, such as logging settings (Screenshot 3)
●        Set permissions (Screenshot 4) and you're finished!

The only caveat: if you grant "read" on the bucket in Step 4, everyone will be able to list the directory contents or, worse, access files that are later uploaded to the bucket. Note that setting "read access" during bucket creation means everyone can list directory contents as well (Screenshot 5).
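The console steps above map directly onto the S3 API. As a minimal sketch (the bucket name and region are illustrative, and `create_bucket_request` is a hypothetical helper, not part of boto3), these are the parameters one would pass to a boto3 S3 client's `create_bucket` call to keep the new bucket private:

```python
def create_bucket_request(name: str, region: str) -> dict:
    """Build kwargs for s3_client.create_bucket(**params).

    ACL "private" is also the S3 default; stating it explicitly documents
    the intent. us-east-1 is special-cased because S3 rejects it as an
    explicit LocationConstraint.
    """
    params = {"Bucket": name, "ACL": "private"}
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params


if __name__ == "__main__":
    print(create_bucket_request("example-payroll-archive", "eu-west-1"))
```

With boto3 installed and credentials configured, the resulting dict would be unpacked into `boto3.client("s3").create_bucket(**params)`.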

Screenshot 1:

Screenshot 2:

Screenshot 3:

Screenshot 4:

Screenshot 5:

Access control options

Once a bucket is created, it is secure by default. Only the bucket and object owners have access to the S3 resources they create (Screenshot 6). Additionally, if the users of the bucket are internal, then creating ACL-based access controls for buckets is fairly trivial. There are four methods for controlling access to S3 buckets and objects: bucket policies, ACLs, IAM policies, and query string authentication. Of the four, bucket policies and ACLs are the simplest to understand, while setting granular policies using IAM requires a deeper understanding of AWS authentication and authorisation processes.
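To make the first of those four concrete: a bucket policy is a JSON document attached to the bucket. The following is an illustrative sketch (the bucket, account ID and user name are made up) that builds a policy granting read access to a single IAM user and nobody else:

```python
import json


def read_only_bucket_policy(bucket: str, account_id: str, user: str) -> str:
    """Return a bucket policy JSON string granting one IAM user read access."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowSingleUserRead",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:user/{user}"},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                # ListBucket applies to the bucket ARN, GetObject to objects.
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

The resulting JSON is what one would paste into the bucket's Permissions tab or pass to the `put_bucket_policy` API call.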

Security problems

If S3 buckets are secure by default, then why are data leaks still occurring? While one can only speculate about the root causes, here's a simple best practice that makes it harder to inadvertently expose directory listings. Translating this to AWS-speak: when creating a bucket, ensure "Everyone" does not have permission to "List Objects." This setting is applied at the bucket level, and if someone happened to select "Grant public read access to this bucket" when creating the bucket, or later added this permission for Everyone, then any subsequent files created in this bucket will be visible (see screenshot below).
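In API terms, "Everyone" corresponds to the AllUsers grantee group in the bucket's ACL. A small audit helper — a sketch whose input dict mirrors the shape returned by boto3's `get_bucket_acl` — can flag any permission granted to the world:

```python
# Grantee group URIs used by S3 for "Everyone" and "Any AWS user".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTHENTICATED_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"


def public_grants(acl: dict) -> list:
    """List the permissions an ACL grants to everyone (or any AWS account)."""
    flagged = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in (
            ALL_USERS,
            AUTHENTICATED_USERS,
        ):
            flagged.append(grant["Permission"])
    return flagged
```

A non-empty result for any bucket in the account is worth an alert: a "READ" grant to AllUsers is exactly the public directory listing discussed above.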

The Swiss-cheese model holds that the risk of a threat materialising is mitigated by several types of defences "layered" behind each other; a leak occurs when the holes in every layer line up. A public directory listing is the first hole: all it takes is for someone to also make a file readable by Everyone, and an opportunistic person can append the filename to the URI and easily download the publicly accessible file. Even a script kiddie could figure this one out, folks!


The question that usually follows is how to prevent data leakage from an S3 bucket. AWS offers a solution, in the form of AWS Macie, for identifying and alerting when sensitive data is accessed in a manner deemed anomalous. Macie leverages machine learning and a bit of magic to identify sensitive data and baseline normal activity. However, this approach may be less effective for data that Macie is unable to properly baseline as normal activity or identify as sensitive information.

A second approach, which can be used in addition to or in lieu of Macie, comes in the form of security monitoring. When we created the bucket and uploaded objects, S3 offered the ability to turn on Server access logging and Object-level logging. If one or both of these options are enabled, the security practitioner will have access to logs. Log data contains a ton of useful information that can be used to monitor if, when, and where access policy violations occur. As an example, server access logs contain pieces of information like bucket owner, bucket, time, remote IP, request ID, operation, key, request URI, and HTTP status, among others. Object-level logging goes several steps further and provides more granular information, such as when objects are accessed, created or uploaded, or deleted.
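The fields above can be pulled out of a server access log line with a simple parser. This is a simplified sketch that only handles the first ten fields of the space-delimited format (real lines carry further fields, and the sample line in the usage note is fabricated):

```python
import re

# Bucket owner, bucket, [time], remote IP, requester, request ID,
# operation, key, "request URI", HTTP status. Fields beyond the status
# (error code, bytes sent, user agent, ...) are not captured here.
LOG_PATTERN = re.compile(
    r'(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] '
    r'(?P<remote_ip>\S+) (?P<requester>\S+) (?P<request_id>\S+) '
    r'(?P<operation>\S+) (?P<key>\S+) "(?P<request_uri>[^"]*)" '
    r'(?P<status>\d{3})'
)


def parse_access_log_line(line: str):
    """Return the named fields as a dict, or None if the line doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None
```

Feeding it a line such as `0f1a example-bucket [06/Feb/2017:00:00:38 +0000] 192.0.2.3 anonymous 3E57427F3 REST.GET.OBJECT payroll.csv "GET /example-bucket/payroll.csv HTTP/1.1" 200` yields a dict whose `remote_ip` and `key` fields are exactly what a monitoring rule would match on.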

By monitoring these logs, security teams can be alerted when a publicly accessible bucket or object is created and intervene before data is lost. Further, security teams become more proactive and can hunt for misconfigured systems, as well as gain valuable insights, such as who is accessing various objects within the organisation's S3 buckets, or, more broadly, gain greater situational awareness of activities across AWS's various product lines.
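As a sketch of what such an alert might look for, the rule below surfaces events that change who can read a bucket or its objects. The event dicts mimic the shape of CloudTrail records and the event names are real S3 API calls, but the rule itself is illustrative:

```python
# S3 API calls that change who can read a bucket or its objects.
ACL_CHANGING_CALLS = {"PutBucketAcl", "PutObjectAcl", "PutBucketPolicy"}


def access_control_alerts(events):
    """Return CloudTrail-shaped S3 events that modified access controls."""
    return [
        event
        for event in events
        if event.get("eventSource") == "s3.amazonaws.com"
        and event.get("eventName") in ACL_CHANGING_CALLS
    ]
```

In practice, a production rule would also inspect each event's request parameters for the AllUsers grantee before paging anyone, to cut down on noise from routine, non-public ACL changes.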

How to upload files to an S3 bucket without public sharing enabled

After creating an S3 bucket securely, follow these instructions to upload files so they are not shared with the general public.

●        Go to the newly created bucket and click "Upload." This brings up the upload file wizard, which walks through adding files and setting permissions (Screenshot: Overview of newly created bucket).
●        Select the files to be uploaded (Screenshot: How to upload files 1 [select file]).
●        Once the file(s) are uploaded, the second step is to set permissions for the file (Screenshot: How to upload files 2 [set permissions]). You'll be able to set permissions for identified users as well as set public permissions. Take note of the "Manage Public Permissions" setting (important).
●        Set properties on the file, such as storage class and encryption options (Screenshot: How to upload files 3 [set properties]).
●        Review the settings and make sure the permissions are set correctly before completing the process (Screenshot: How to upload files 4 [review permissions]) and that's it!
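The same upload can be done programmatically. As a minimal sketch (the names are illustrative and `upload_request` is a hypothetical helper, not part of boto3), these are the parameters one would pass to a boto3 client's `put_object` call to keep the object private and encrypted at rest:

```python
def upload_request(bucket: str, key: str, body: bytes) -> dict:
    """Build kwargs for s3_client.put_object(**params)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # "private" mirrors leaving "Manage Public Permissions" untouched.
        "ACL": "private",
        # Server-side encryption, as chosen in the "set properties" step.
        "ServerSideEncryption": "AES256",
    }
```

As with bucket creation, the dict would be unpacked into `boto3.client("s3").put_object(**params)` once credentials are configured.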

Screenshot: Overview of newly created bucket

Screenshot: How to upload files 1 (select file)

Screenshot: How to upload files 2 (set permissions)

Screenshot: How to upload files 3 (set properties)

Screenshot: How to upload files 4 (review permissions)

Tohru Watanabe, director of sales engineering, LogicHub

Tohru Watanabe, CISSP-ISSAP, CISM, GREM, GXPN, is the director of sales engineering at security automation platform LogicHub. Tohru has extensive experience in the systems engineering, information systems and cybersecurity space, gained at FireEye and Cybereason, which he left to join LogicHub.