[cli] deploy cannot specify S3 SSE for asset upload #11265
Comments
That is correct, it does rely on default encryption. It seems convenient and logical to me to separate the upload action from the encryption action, which we can easily do in this way. Can you tell me why you need it to work differently?
I think it should be consistent with other AWS tooling such as the AWS CLI. Without this option it may make CDK unusable in highly regulated financial services organizations like the one I work for.
I probably just don't have enough domain knowledge here. Can you explain to me why a default encryption setting on the bucket is not good enough to achieve your encryption-at-rest requirements?
The issue is that we have organizational service control policies in place that Deny PutObject requests that do not explicitly set the `x-amz-server-side-encryption` header. We need the option of specifying the encryption method and key in the PutObject request sent by CDK; for example, the AWS CLI lets you do this on upload with `aws s3 cp --sse aws:kms --sse-kms-key-id <key-id>`.
So if I'm understanding correctly, it's not because of any functional reason, right? It's not because with default encryption you couldn't achieve the desired end result of files being encrypted with a particular KMS key. It's because your current set of security policies misses and rejects this particular kind of configuration, even though it would be a valid setup that would achieve the desired end result.
That is correct. This preventive security control (SCP) is on the PutObject request to S3, not on the state of the bucket itself.
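(Editor's note: a minimal sketch of what that distinction means in practice, using the AWS SDK for JavaScript v3 with hypothetical bucket and key names. The `ServerSideEncryption` / `SSEKMSKeyId` parameters are what populate the `x-amz-server-side-encryption` headers that such a policy inspects.)

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

async function main() {
  // Allowed by the SCP: the request itself carries the
  // x-amz-server-side-encryption header(s).
  await s3.send(new PutObjectCommand({
    Bucket: "my-staging-bucket",      // hypothetical bucket name
    Key: "assets/example.zip",
    Body: "example contents",
    ServerSideEncryption: "aws:kms",  // sets x-amz-server-side-encryption
    SSEKMSKeyId: "alias/my-cdk-key",  // hypothetical KMS key alias
  }));

  // Denied by the SCP: an otherwise identical PutObject that omits
  // ServerSideEncryption sends no encryption header, even though the
  // bucket's default encryption would still encrypt the object at rest.
  await s3.send(new PutObjectCommand({
    Bucket: "my-staging-bucket",
    Key: "assets/example.zip",
    Body: "example contents",
  }));
}

main().catch(console.error);
```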
I reproduced this behaviour and we have the same issue. We need the same SCP settings for the staging bucket, which we modified via the bootstrap template, but we cannot deploy the defined stack because of an "Access Denied" failure.
I've just come across the same issue today. Any workarounds on this one that people have tried?
Adding onto this: it is strange to leave this feature out, considering it exists in other AWS-provided tools. This is a fairly common SCP in larger environments.
We have the same problem: an enterprise-level requirement to specify the header(s) as part of the approach to securing all S3 buckets. There is no way to enforce that all S3 buckets in an account are created with SSE switched on, so enforcing headers on the s3:PutObject call is Amazon's recommended approach for organisations wanting to enforce encryption at rest. The end result is that CDK is unable to deploy assets. We'd like to move from Terraform to CDK, but this is stopping us.
Are there any plans to add an argument to the CLI for this?
@CaseyBurnsSv It would be great if someone with the infrastructure setup could evaluate it.
I tried with the new-style bootstrapping. There is support for an SSM parameter for bootstrapping, but the issue is the deployment itself. I didn't take a deeper look, but potentially the problem is that the S3 upload here does not support the encryption flag.
You can customize the bootstrap template that defines the asset bucket (where assets go) (docs), then modify the bucket policy to fit your organization's SCP. An (untested) example would be something like the below:

```yaml
StagingBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket:
      Ref: StagingBucket
    PolicyDocument:
      Id: AccessControl
      Version: "2012-10-17"
      Statement:
        - Sid: AllowSSLRequestsOnly
          Action: s3:*
          Effect: Deny
          Resource:
            - Fn::Sub: ${StagingBucket.Arn}
            - Fn::Sub: ${StagingBucket.Arn}/*
          Condition:
            StringNotEquals:
              "s3:x-amz-server-side-encryption": "aws:kms"
          Principal: "*"
```

I don't have an org with the SCP set up, so I would double-check that the YAML works with your respective org SCPs. This type of modification shouldn't break the bootstrapping contract, so you can then deploy via the customized bootstrap template (export the default with `cdk bootstrap --show-template`, edit it, and pass it back with `cdk bootstrap --template <file>`).

EDIT: The example above uses aws:kms, which means it could potentially break the file-publishing role if that role does not have access to the respective KMS key, so you may require additional modification if you go down that path. If aws:kms is not needed to meet your org's SCP, it may be easier to go down the SSE-S3 path, as mentioned in the earlier linked blog.
I evaluated this approach by adding a staging bucket policy along those lines and bootstrapping with the customized template.
While this makes sure that the S3 CDK staging bucket does have the right policies, the deploy itself still fails: the CLI still does not send the encryption headers with its PutObject calls.
We need this as well. In our enterprise environment we require every file uploaded to S3 to be encrypted. In Terraform this is solved by defining the encryption on the backend (the S3 backend's `encrypt` and `kms_key_id` settings); this sets the `x-amz-server-side-encryption` header on every upload.
…ncryption. Added s3:GetEncryptionConfiguration to bootstrap-template to be able to read the s3 bucket encryption with file-publishing role. (aws#11265)
…nfluence the currently bootstrapped cdk projects. (aws#11265)
…sers without using bootstrap are not impacted.(aws#11265)
…tion SCP (introduces bootstrap stack v9) (#17668)

Many organizations around the world have started to use a specific Service Control Policy (SCP) from this blog post: https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/ in order to make sure all S3 bucket uploads are encrypted. CDK configures the `DefaultEncryptionConfiguration` on the bucket so that objects are always encrypted by default, but this specific SCP can only check that individual upload actions include the "enable encryption" option. That means that even though the end result would still be satisfied (objects are encrypted in S3), the SCP would nevertheless reject the CDK upload.

We would rather people use AWS Config to make sure all buckets have `DefaultEncryptionConfiguration` set, so that this SCP is not necessary... but there is no arguing with deployed reality.

Change the CDK upload code path to first read out the default encryption configuration from the bucket, only to then mirror those exact same settings in the `PutObject` call so that the SCP can see that they are present. This requires adding a new permission to the `cdk-assets` role, namely `s3:GetEncryptionConfiguration`, so requires a new bootstrap stack version: version 9.

If you want this new behavior because your organization applies this specific SCP, you must upgrade to bootstrap stack version 9. If you do not care about this new behavior you don't have to do anything: if the call to `getEncryptionConfiguration` fails, the CDK will fall back to the old behavior (not specifying any header).

Fixes #11265.

----
*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
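As a rough sketch of the approach this PR describes (not the actual `cdk-assets` implementation; bucket and key names are hypothetical), the upload path first reads the bucket's default encryption configuration and mirrors it onto the `PutObject` call, falling back to the old behavior when the read fails:

```ts
import {
  S3Client,
  GetBucketEncryptionCommand,
  PutObjectCommand,
  PutObjectCommandInput,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// Read the bucket's default encryption settings; this is the call that
// needs the new s3:GetEncryptionConfiguration permission (bootstrap v9).
async function defaultEncryptionParams(
  bucket: string,
): Promise<Partial<PutObjectCommandInput>> {
  try {
    const { ServerSideEncryptionConfiguration } = await s3.send(
      new GetBucketEncryptionCommand({ Bucket: bucket }),
    );
    const byDefault = ServerSideEncryptionConfiguration?.Rules?.[0]
      ?.ApplyServerSideEncryptionByDefault;
    if (!byDefault) return {};
    return {
      // Mirror the bucket default into the request headers the SCP inspects.
      ServerSideEncryption: byDefault.SSEAlgorithm,
      SSEKMSKeyId: byDefault.KMSMasterKeyID, // undefined for SSE-S3 (AES256)
    };
  } catch {
    // Older bootstrap versions lack the permission: fall back to the
    // previous behavior of not specifying any encryption header.
    return {};
  }
}

async function uploadAsset(bucket: string, key: string, body: string) {
  const sse = await defaultEncryptionParams(bucket);
  await s3.send(
    new PutObjectCommand({ Bucket: bucket, Key: key, Body: body, ...sse }),
  );
}
```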
Reproduction Steps
What did you expect to happen?
I expect `cdk deploy` to explicitly use the KMS key I specified in the bootstrap when uploading assets.
What actually happened?
`cdk deploy` does not provide SSE and the deploy fails with `Access Denied`. It appears to rely on the S3 default encryption instead of specifying the SSE options in the S3 PutObject request.
Environment
Other
This is a 🐛 Bug Report.