AWS SSE-S3 vs. SSE-KMS: How Can Organizations Avoid a False Sense of Security in the Cloud?
/0 TLDR
“Encrypted” does not necessarily mean “protected” or “secure,” at least not in the way we might think, especially when it comes to S3 server-side encryption (“SSE-S3”) and other similar encryption configurations in AWS.
SSE-KMS provides both encryption and a layer of access control for S3 objects in a way that SSE-S3 cannot and was not designed to do.
/1 Intro
One of the major issues we see in the cloud security industry is that organizations can easily develop a false sense of security about their production infrastructure and environments.
This is similar to firmly believing that the front door of a house is locked, when in fact it is either not locked at all or not fitted with a proper lock. Because everyone assumes the door is secure, nobody pays attention to it, which only makes things worse over time.
It is quite easy to have a false sense of security in the cloud for many reasons. To name a few:
- Misunderstanding how different AWS services and technologies actually work and behave from a technical security perspective;
- The sheer number of different AWS service features and configurations available;
- The outright refusal of certain teams to learn about and adapt to cloud technologies;
- The unrealistic, inadequate, and largely inaccurate promises that many cloud security tools and vendors advertise and promote;
- Lack of thorough testing of the impact of the various AWS service configurations on security;
- Enforcing security controls to meet commonly seen compliance checklists that are outdated, irrelevant, or insufficient;
- and much more.
Cloud misconfiguration consistently ranks among the leading causes of cloud security incidents. It should be clear that data in the cloud is only as secure as it is configured to be.
It is therefore the responsibility of cloud users and customers to fully understand and evaluate how to properly use and implement encryption in the cloud. It is also AWS’s responsibility to explain this topic clearly in its documentation and security blogs, so that customers can make the right decisions and avoid a false sense of security within critical AWS services and infrastructure.
/2 SSE-S3 Encryption vs. Security Risks
To illustrate the topic of this article, we will compare S3’s SSE-S3 offering (server-side encryption with Amazon S3 managed keys) with SSE-KMS (server-side encryption with AWS KMS keys).
SSE-S3 is designed to protect against physical theft of data from AWS datacenters: the scenario where someone manages to get in and walk out with a pile of SSDs and HDDs, without necessarily knowing which AWS customers the data on them belongs to.
In most cases, however, I believe the most pressing cloud security risk is not the physical theft of data from datacenters (although that still matters), but controlling who can access that data over the Internet and from inside the organization.
Encrypting data with SSE-KMS covers both: it works directly with the IAM layer, providing the access control we intuitively expect from encryption, while still protecting against the physical risk. To clearly demonstrate the difference between SSE-S3 and SSE-KMS, we will go through a short demo to see things in action.
/3 Demo-Lab: SSE-S3 vs SSE-KMS
Throughout this lab, we will assume that an AWS S3 bucket was made publicly available on the Internet due to a configuration error.
The sample S3 bucket had SSE-S3 encryption enabled when it was created, and all S3 objects were uploaded afterwards, meaning every object was automatically encrypted by AWS using SSE-S3:
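As a rough sketch of that setup, assuming a hypothetical bucket name and object key, enabling SSE-S3 as the bucket default and uploading an object with boto3 might look like this:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-sse-s3-demo-bucket"  # hypothetical bucket name

# Set the bucket's default encryption to SSE-S3 (AES256). Every object
# uploaded afterwards is encrypted server-side with S3-managed keys.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Upload an object; no extra parameters are needed for it to be
# encrypted with SSE-S3 under the default configuration above.
with open("car.jpg", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key="car.jpg", Body=f)

# Confirm how the object was encrypted at rest.
head = s3.head_object(Bucket=BUCKET, Key="car.jpg")
print(head["ServerSideEncryption"])  # expected: "AES256"
```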
When we try to access one of the S3 objects through its S3 URL, it works from anywhere on the Internet, as if no encryption had ever been enabled:
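To reproduce that check outside the browser, here is a minimal sketch using an unsigned (anonymous) boto3 client, which simulates "anyone on the Internet" with no AWS credentials at all; the names are the same hypothetical placeholders as above:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An unsigned client: no credentials, no request signature.
anon = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# With SSE-S3 and the bucket mistakenly public, this call succeeds:
# S3 transparently decrypts the object and returns the plaintext bytes.
obj = anon.get_object(Bucket="example-sse-s3-demo-bucket", Key="car.jpg")
print(obj["ResponseMetadata"]["HTTPStatusCode"], len(obj["Body"].read()))
```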
Moving on to SSE-KMS, it can be implemented with two kinds of keys: the AWS managed key (the aws/s3 alias, managed by AWS on the customer’s behalf) or a customer-managed KMS key (CMK). Both behave very similarly for the purposes of this lab, though they differ technically, most notably in who controls the key policy.
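For the customer-managed variant, a key first has to exist. A minimal, hypothetical sketch for creating one (the alias name is made up for this lab):

```python
import boto3

kms = boto3.client("kms")

# Create a customer-managed KMS key for SSE-KMS, as opposed to relying on
# the AWS-managed aws/s3 key. Symmetric keys are what S3 uses for SSE-KMS.
key = kms.create_key(
    Description="Customer-managed key for the S3 SSE-KMS demo",
    KeyUsage="ENCRYPT_DECRYPT",
    KeySpec="SYMMETRIC_DEFAULT",
)
key_arn = key["KeyMetadata"]["Arn"]

# A friendly alias so the key is easy to find later (name is hypothetical).
kms.create_alias(AliasName="alias/s3-sse-kms-demo", TargetKeyId=key_arn)
print(key_arn)
```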
Let’s leave all S3 public access settings exactly as they are in our example bucket, but change the bucket’s default encryption to SSE-KMS with an AWS KMS key:
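Assuming the same hypothetical bucket and the key created above, switching the bucket’s default encryption to SSE-KMS might look like the following sketch; omitting KMSMasterKeyID would fall back to the AWS-managed aws/s3 key instead:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-sse-s3-demo-bucket"  # same hypothetical bucket as before
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

# Change the bucket's DEFAULT encryption from SSE-S3 to SSE-KMS.
# This only affects objects uploaded from this point on.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KEY_ARN,
                },
                # Optional: S3 Bucket Keys reduce the number of KMS requests.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```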
A few important points to keep in mind:
- If we then try to access the same S3 object URL (the car picture) again, it is still publicly accessible from anywhere on the Internet. Why? Because the object was uploaded before SSE-KMS encryption was enabled at the bucket level, so it was never encrypted with the KMS key. S3 does not retroactively re-encrypt existing objects when SSE-KMS is enabled at the bucket level; each existing object has to be re-encrypted explicitly (see the sketch after this list). Please keep this detail in mind, as it matters when planning S3 bucket encryption strategies and expected behavior.
- However, if SSE-KMS encryption is enabled on the bucket first and the S3 object is uploaded afterwards, the bucket automatically encrypts the object with the KMS key, and attempts to open its public URL return an access denied error (even if all other S3 public access settings are exactly the same):
- Similarly, if the bucket keeps SSE-S3 as its default encryption but specific S3 objects are manually encrypted with SSE-KMS at upload time, accessing those objects directly from the Internet also results in an access denied error.
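As a rough sketch of both operations, still using the hypothetical names from earlier: re-encrypting an existing object in place with SSE-KMS, and explicitly requesting SSE-KMS for a new object at upload time regardless of the bucket default:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-sse-s3-demo-bucket"
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

# 1) Re-encrypt an EXISTING object by copying it onto itself with new
#    encryption settings; changing the bucket default alone does not do this.
s3.copy_object(
    Bucket=BUCKET,
    Key="car.jpg",
    CopySource={"Bucket": BUCKET, "Key": "car.jpg"},
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KEY_ARN,
    MetadataDirective="COPY",  # keep the existing object metadata
)

# 2) Encrypt a NEW object with SSE-KMS explicitly at upload time, even if
#    the bucket's default encryption is still SSE-S3.
with open("new-car.jpg", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key="new-car.jpg",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KEY_ARN,
    )

# Anonymous requests for either object now fail with AccessDenied, because
# the caller is not allowed to use the KMS key needed to decrypt them.
```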
In short, with SSE-KMS applied at both the bucket and object level, only the AWS resources and principals with access to the KMS key(s) can read the S3 objects, giving us a last line of defense even if every other public access restriction on the bucket and its objects were removed. This is a very good example of how encryption works hand in hand with IAM to protect customer data in the cloud.
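To make that relationship concrete, here is a minimal key policy sketch for the customer-managed key; the account ID, role name, and key ID are placeholders, and a real policy would of course be tailored to the organization’s principals:

```python
import json
import boto3

kms = boto3.client("kms")
KEY_ID = "EXAMPLE-KEY-ID"  # placeholder for the customer-managed key

# Only the account root (for key administration) and one application role
# may use the key. Any other principal, including an anonymous caller on
# the Internet, cannot decrypt the S3 objects this key protects, no matter
# how the bucket's public access settings are configured.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowAppRoleToUseKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
            "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(
    KeyId=KEY_ID,
    PolicyName="default",  # KMS keys have a single policy named "default"
    Policy=json.dumps(key_policy),
)
```

Keep in mind that the caller still needs the matching s3:GetObject permission as well; SSE-KMS adds the kms:Decrypt requirement on top of, not instead of, the S3-level controls.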
Finally, SSE-KMS also protects against the physical theft scenario, so we are not trading one protection for the other.
/4 What about public S3 objects?
My professional opinion is that, since these objects are meant to be public in the first place and we are explicitly and willingly giving read-only access to the entire Internet, there is little point in using SSE-KMS, and just as little in relying on SSE-S3 for protection. We should instead focus on making sure that the IAM layer is properly configured (bucket public access settings, resource policies, ACLs, IAM policies, etc.) and worry less about which type of encryption sits behind data that is meant to be public. This goes back to avoiding a false sense of security and focusing our security efforts on what really matters.
Since SSE-S3 is now applied by default, we might as well leave it enabled, but we should understand that, with or without it, there is no security impact for this particular use case.
/5 Conclusion
Bad actors, both internal and external, completely disregard the amount of effort organizations put into securing their cloud infrastructures, services, and data. This is true regardless of all the business decisions and requirements defined internally by the organization, the complexity and size of those infrastructures, the errors and misconfigurations that will occur, the lack of budgets and resources, the multitude of security risks accepted by management, or any other factor we can think of. Ultimately, data in the cloud is only as secure as it is configured to be, and attackers will always look for a way to achieve their goals.
Cloud security compliance and governance are both important. But for cloud security compliance to be truly effective and respected by other teams, it needs to be written by people who are cloud-fluent and technically understand how AWS cloud security actually works in practice. Let’s push to make compliance an enabler of security rather than a hindrance to it.
Cloud customers should fully understand the implications of the configurations they want to implement, and whether those choices meet their security needs and expectations, before pushing them into production.
Finally, a word of advice before encrypting everything with SSE-KMS: take the time to test and validate the various workloads in test environments before enabling SSE-KMS encryption in production. It is not uncommon for customer applications and on-premises workloads to lack native support for SSE-KMS, and this can even include some native AWS services.
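A minimal round-trip smoke test, run in a test environment with the same role or credentials the workload will use in production, can surface these problems early; all names below are hypothetical placeholders:

```python
import boto3

BUCKET = "example-sse-kms-test-bucket"  # hypothetical test bucket
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

s3 = boto3.client("s3")

# Write path: can this identity use the key to encrypt new objects?
s3.put_object(
    Bucket=BUCKET,
    Key="smoke-test.txt",
    Body=b"sse-kms smoke test",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KEY_ARN,
)

# Read path: can it decrypt them again? A failure here usually means the
# principal is missing kms:Decrypt on the key, or the client in the
# workload cannot handle SSE-KMS requests.
body = s3.get_object(Bucket=BUCKET, Key="smoke-test.txt")["Body"].read()
assert body == b"sse-kms smoke test"
print("SSE-KMS round trip succeeded")
```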
Thanks for reading and I hope you enjoyed this article!