AWS S3 Security – Immutable S3 Buckets

In this article we explain how you can enhance your AWS S3 security by creating an immutable S3 bucket using cross-account access. The result, if implemented correctly, is that the account using the bucket cannot modify its permissions and cannot permanently delete any data, even when the root-level user is being used. Contrast this with a standard S3 bucket, where the root user will always have full access, and you have the potential to create a much stronger security posture.

From the point of view of security threat management, creating an immutable S3 bucket helps protect against a compromise of the root or an administration-level user account which, under normal circumstances, could result in the deletion of all S3 objects and their previous versions, along with any Glacier archives.

To highlight why we may want an immutable S3 bucket consider the following use cases:

  • We want to use the bucket as a centralised log repository and want to make the logs impossible to alter, even by administrators (e.g. to meet compliance with PCI-DSS 10.5.3)
  • We want to capture billing information from one or more accounts
  • We want a secure bucket to deposit CloudTrail logs from multiple accounts
  • We want to create an “off site” disaster recovery solution in the event that our account is compromised and all data and AWS components are deleted

As you’ll no doubt agree, each of the above presents a strong case for creating an immutable S3 bucket.

High Level Steps

OK, so now we’ve made the case, let’s detail the high-level steps required to make this happen:

  1. We need to create a new AWS account (which we shall refer to as Account A going forward) that will be used only for the sharing of the bucket
  2. In Account A we will secure the root user and create a single administration user that will be used for creating the S3 bucket and setting the appropriate policy
  3. In Account A we will create an S3 bucket, enable versioning and set a simple bucket policy that will allow another account (which we shall refer to as Account B going forward) to write to the bucket and list objects
  4. From Account B (which is the AWS account we use for the running of our applications) we will write to the bucket in Account A we’ve created
  5. Once we’ve proven we can write to the bucket we will simulate some compromise events that prove that our bucket and the data can never be lost

Creating the S3 Bucket

Firstly we need to create a new account (Account A) that we will use to create and store the S3 bucket. I’m going to skip over the details of how to create a new AWS account as it’s covered in the AWS documentation. However, you should take the following steps to provide a high level of security. (These steps are not strictly required for this to work, but they greatly reduce any possible threat to Account A, which is extremely important.)

  • Secure the root level user (the user you used to create the AWS account) by:
    • Setting a 16 character password consisting of alphanumerics and special characters
    • Setting a hardware multi-factor authentication (MFA) device on the account
    • Locking the MFA device in one safe and the password in another, and ensuring nobody has access to both safes
  • Create a single IAM user with the Administrator policy
  • Ensure the IAM user is configured with an MFA device
  • Ensure both root and the IAM user do not have access keys assigned
  • Enable CloudTrail
  • Configure CloudWatch to alert each time either of the above users logs in

Once you have created the above account and secured it appropriately, log in as the IAM user, create a new S3 bucket and give it a suitable name.

In the example below I have called mine “Hydras-Immutable-Bucket”

Ensure versioning is enabled, and enable logging if you want to capture all access to this bucket.
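The same setup can be sketched from the CLI. Note two assumptions here: S3 bucket names must be lowercase, so these examples use "hydras-immutable-bucket", and "my-log-bucket" is a placeholder for a logging bucket of your own.

```shell
# Run with Account A's administrator credentials.
# Bucket names must be lowercase and globally unique - pick your own.
aws s3api create-bucket --bucket hydras-immutable-bucket --region us-east-1

# Enable versioning so overwritten objects keep their previous versions
aws s3api put-bucket-versioning \
    --bucket hydras-immutable-bucket \
    --versioning-configuration Status=Enabled

# Optional: capture all access to this bucket in a separate logging bucket
aws s3api put-bucket-logging \
    --bucket hydras-immutable-bucket \
    --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"immutable/"}}'
```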

Now we need to assign a bucket policy that allows Account B access to this bucket. Account B, in our example, is the account used for the day-to-day running of AWS, and we will need its account number to create the bucket policy.

You can be as flexible as you like regarding the permissions you assign in the bucket policy, but bear in mind that if you are too open with your permissions you could still leave yourself exposed to data loss, so make sure you have a good understanding of S3 permissions before configuring your own policies. For instance, don’t grant full access, and don’t allow the bucket policy or object versions to be modified.

In the example below I have:

  • Explicitly denied Account B the ability to delete the bucket (S3:DeleteBucket) or delete an object version (S3:DeleteObjectVersion)
  • Allowed Account B to write objects  (S3:PutObject) and list the contents of the bucket (S3:ListBucket) including all previous versions (S3:ListBucketVersions)
  • Assigned the access to “AWS”: “arn:aws:iam::ACCOUNTB-ACCNUMBER:root”, which effectively grants the access to the root user (in fact any administrative user) within Account B.  An administrator of Account B can, if desired, further delegate this access to its IAM users through the use of an IAM policy.  See here for an example
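Putting the three bullets together, the policy looks roughly like this (a sketch assuming the lowercase bucket name "hydras-immutable-bucket"; replace ACCOUNTB-ACCNUMBER with Account B's twelve-digit account ID):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDestructiveActions",
      "Effect": "Deny",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNTB-ACCNUMBER:root" },
      "Action": [ "s3:DeleteBucket", "s3:DeleteObjectVersion" ],
      "Resource": [
        "arn:aws:s3:::hydras-immutable-bucket",
        "arn:aws:s3:::hydras-immutable-bucket/*"
      ]
    },
    {
      "Sid": "AllowWriteAndList",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNTB-ACCNUMBER:root" },
      "Action": [ "s3:PutObject", "s3:ListBucket", "s3:ListBucketVersions" ],
      "Resource": [
        "arn:aws:s3:::hydras-immutable-bucket",
        "arn:aws:s3:::hydras-immutable-bucket/*"
      ]
    }
  ]
}
```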

This policy can be assigned via the console by selecting the bucket and clicking Properties > Permissions > Add bucket policy.

NB: be sure to change ACCOUNTB-ACCNUMBER in your policy to Account B’s twelve-digit account number.
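Alternatively, the policy can be applied from the CLI. This sketch assumes the policy JSON has been saved locally as bucket-policy.json:

```shell
# Attach the policy to the bucket (run as Account A's administrator)
aws s3api put-bucket-policy \
    --bucket hydras-immutable-bucket \
    --policy file://bucket-policy.json
```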

We have now created an immutable S3 bucket from the point of view of Account B. Let’s test it.

Testing Access

The following examples are performed using the root account in Account B (CLI profile “hydrastest”) on the command line.

NB: in a real-life situation DO NOT assign access keys to a root account and use the CLI; use an IAM user instead. The following is performed for illustrative purposes only.
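If you want to follow along, the named profile can be set up like this (with the illustrative-only keys mentioned above):

```shell
# Prompts for access key id, secret key, default region and output format
aws configure --profile hydrastest
```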

Let’s create a file called “helloworld” with the content “Hello World”

And upload it to the bucket created in Account A
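A sketch of those two steps, assuming the lowercase bucket name from earlier:

```shell
# Create the test file
echo "Hello World" > helloworld

# Upload it to the cross-account bucket owned by Account A
aws s3 cp helloworld s3://hydras-immutable-bucket/helloworld --profile hydrastest
```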

Great, that’s worked.  Let’s list the contents of the bucket to prove the object is there
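```shell
# List the bucket contents from Account B
aws s3 ls s3://hydras-immutable-bucket/ --profile hydrastest
```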

Again this has worked, which is good as the permissions are working.  Now we want to pretend that the root account on Account B has been compromised, so let’s run some simulations.

Potential Compromise

The following examples could simulate the compromise of the root or an administrator account, a rogue employee or an administrator who has mistakenly run a command.

Let’s try to alter the bucket policy.  I’ve created a local version of the bucket policy that changes the explicit “Deny” to “Allow”. What happens if I try to upload it?
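A sketch of the attempt, where hacked-policy.json is the local copy with “Deny” flipped to “Allow”:

```shell
aws s3api put-bucket-policy \
    --bucket hydras-immutable-bucket \
    --policy file://hacked-policy.json \
    --profile hydrastest
# Expect an AccessDenied error: Account B was never granted s3:PutBucketPolicy
```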

Oops, that’s failed as I don’t have access.  How about we try to delete the “helloworld” object?
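```shell
aws s3api delete-object \
    --bucket hydras-immutable-bucket \
    --key helloworld \
    --profile hydrastest
# Expect an AccessDenied error: s3:DeleteObject was never granted to Account B
```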

That’s failed too.  OK, so I can’t modify the policy and I can’t delete the object, but I can overwrite the object.  So let’s update the helloworld file to say “Hello Dolly” and upload that.
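```shell
echo "Hello Dolly" > helloworld
aws s3 cp helloworld s3://hydras-immutable-bucket/helloworld --profile hydrastest
# Succeeds: s3:PutObject is allowed, so a new version is written over the old
```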

That’s worked!  So I’ve effectively overwritten the original with corrupt data.  But the original could be recoverable, so let’s see if we have versioning enabled.
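```shell
# List every version of the object held by the bucket
aws s3api list-object-versions \
    --bucket hydras-immutable-bucket \
    --prefix helloworld \
    --profile hydrastest
```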

Versioning is enabled.  I can see this because we have two versions of the file: one being my corrupt current version and the other being the original.  So let’s try to delete the original.
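In this sketch, VERSION-ID is a placeholder for the original object’s version id (as reported by list-object-versions):

```shell
aws s3api delete-object \
    --bucket hydras-immutable-bucket \
    --key helloworld \
    --version-id VERSION-ID \
    --profile hydrastest
# Expect an AccessDenied error: s3:DeleteObjectVersion is explicitly denied
```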

This has failed as well.  I can’t delete any of the older versions, so I’m at a dead end.  The most I can do is continually overwrite files that already exist.  I can’t delete objects or their versions and I can’t modify the bucket policy, so I’m limited by the restrictions imposed on me from Account A.

Data Recovery

In the examples above we saw that a file could potentially be overwritten.  In this situation the original can easily be restored because we have versioning enabled.  We simply download the original, using the correct version id, and re-upload it as shown below, thus simulating data recovery.
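A sketch of that recovery, with VERSION-ID again standing in for the original’s version id. Note one assumption: the minimal policy described earlier grants Account B no read actions, so for Account B to perform its own recovery you would add s3:GetObjectVersion to the Allow statement (or run the download from Account A):

```shell
# Download the original content by its version id
aws s3api get-object \
    --bucket hydras-immutable-bucket \
    --key helloworld \
    --version-id VERSION-ID \
    --profile hydrastest \
    helloworld.orig

# Re-upload it, making the original content the current version again
aws s3 cp helloworld.orig s3://hydras-immutable-bucket/helloworld --profile hydrastest
```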

Just to Prove It

Just to prove that Account A does not have these restrictions and is therefore susceptible (which is why we lock down authentication to this account as much as possible), I repeat the commands using the root account of Account A.

Note: I’m using the “default” command line profile here, so there is no --profile option in the command line arguments.

Update the bucket policy with a hacked version:

List all versions of the file, including the live one (marked as “IsLatest”: true) in the list below

Delete all versions of the file

Verify all versions are deleted

Remove the bucket
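The whole destructive sequence, run with Account A’s own default-profile credentials, looks roughly like this. Every step succeeds because the policy’s explicit Deny only names Account B:

```shell
# Replace the bucket policy with the permissive "hacked" version - succeeds
aws s3api put-bucket-policy --bucket hydras-immutable-bucket --policy file://hacked-policy.json

# List every version of the file; the live one is marked "IsLatest": true
aws s3api list-object-versions --bucket hydras-immutable-bucket --prefix helloworld

# Delete each version by id (repeat for every VersionId listed) - succeeds
aws s3api delete-object --bucket hydras-immutable-bucket --key helloworld --version-id VERSION-ID

# Verify that no versions remain
aws s3api list-object-versions --bucket hydras-immutable-bucket

# Finally, remove the now-empty bucket
aws s3api delete-bucket --bucket hydras-immutable-bucket
```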

So we can see that the equivalent user in Account A (root) can do much more damage to the objects.  It is therefore very important to protect the credentials used for Account A by assigning a small number of administrators and securing root.

Conclusion

In summary, I have shown that you can enhance your S3 security by creating an immutable S3 bucket using cross-account access, with versioning enabled and a restrictive bucket policy.  The result is that not even the root user of the account using the bucket can alter its permissions or permanently remove data.

Furthermore, I have demonstrated this using several worked examples that could simulate a real-world compromise.  Finally, I have shown that data is recoverable using S3 versioning and that, had the bucket been created without cross-account access, it would be susceptible to compromise or accidental data loss.

I hope this simple demonstration has given you food for thought and that you’ll find inventive uses for Cross Account Immutable S3 Buckets.  If you do, I’d love to know how you are using them, so please let me know.

Alternatively, if you’d like us to architect Cross Account Immutable S3 Buckets into your environment, please contact us on the link below; we’d love to help.

More Information

The following links will provide you with more information related to what we have discussed here:

About Hydras

Hydras are a team of cloud consulting experts that excel in architecting and operating secure, automated cloud based solutions built on Amazon Web Services (AWS) with a particular focus on web and mobile.  Contact us for help with your AWS projects.  We’d love to work with you.
