
[cli] deploy cannot specify S3 SSE for asset upload #11265

Closed
CaseyBurnsSv opened this issue Nov 3, 2020 · 17 comments · Fixed by #17668
Labels
effort/medium Medium work item – several days of effort feature-request A feature should be added or improved. p2 package/tools Related to AWS CDK Tools or CLI

Comments

@CaseyBurnsSv

CaseyBurnsSv commented Nov 3, 2020

Reproduction Steps

  1. cdk bootstrap with the legacy bootstrap, providing a KMS key ID as a parameter
  2. have an SCP set up that denies s3:PutObject if s3:x-amz-server-side-encryption is missing
  3. create a CDK app that provisions a Lambda asset
  4. execute cdk deploy
  5. the deploy fails with an Access Denied error

What did you expect to happen?

I expect cdk deploy to explicitly use the KMS key I specified in the bootstrap when uploading assets.

What actually happened?

cdk deploy does not specify SSE, and the deploy fails with Access Denied.
It appears to rely on the bucket's default encryption instead of passing SSE options in the S3 PutObject request.
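For context, what the SCP expects is that the PutObject request itself carries the SSE parameters. The sketch below (not the CDK's actual code; the bucket name, object key, and KMS key alias in the usage comment are hypothetical placeholders) shows the boto3 parameters that would produce the required header:

```python
# Sketch only, not the CDK's code path: the PutObject parameters that
# produce the x-amz-server-side-encryption header the SCP checks for.

def sse_put_object_args(bucket, key, body, kms_key_id):
    """Build boto3 PutObject kwargs that set the SSE headers explicitly."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # These two parameters become the x-amz-server-side-encryption and
        # x-amz-server-side-encryption-aws-kms-key-id request headers:
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_id,
    }

# Usage with boto3 (requires AWS credentials; names are placeholders):
# import boto3
# boto3.client("s3").put_object(
#     **sse_put_object_args("cdk-staging-bucket", "assets/app.zip",
#                           b"data", "alias/bootstrap-key"))
```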

Environment

  • CLI Version : 1.71.0
  • Framework Version:
  • Node.js Version: v12.16.1
  • OS : Windows 10
  • Language (Version): Python 3.8.5

Other


This is a 🐛 Bug Report

@CaseyBurnsSv CaseyBurnsSv added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Nov 3, 2020
@CaseyBurnsSv CaseyBurnsSv changed the title CDK CLI Deploy cannot specify S3 SSE for asset upload [cli] deploy cannot specify S3 SSE for asset upload Nov 3, 2020
@github-actions github-actions bot added the package/tools Related to AWS CDK Tools or CLI label Nov 3, 2020
@rix0rrr
Contributor

rix0rrr commented Nov 9, 2020

It appears to be relying on the S3 default encryption instead of specifying the SSE options to the S3 put object request.

That is correct, it does rely on default encryption. It seems convenient and logical to me to separate the upload action from the encryption action, which we can easily do in this way.

Can you tell me why you need it to work differently?

@rix0rrr rix0rrr added feature-request A feature should be added or improved. response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. and removed bug This issue is a bug. labels Nov 9, 2020
@SomayaB SomayaB removed the needs-triage This issue or PR still needs to be triaged. label Nov 9, 2020
@CaseyBurnsSv
Author

It appears to be relying on the S3 default encryption instead of specifying the SSE options to the S3 put object request.

That is correct, it does rely on default encryption. It seems convenient and logical to me to separate the upload action from the encryption action, which we can easily do in this way.

Can you tell me why you need it to work differently?

I think it should be consistent with other AWS tooling, like sam deploy and aws cloudformation deploy, which allow specifying the encryption config used for files uploaded to S3.

Without this option, CDK may be unusable in highly regulated financial-services organizations like the one I work for.

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Nov 10, 2020
@rix0rrr
Contributor

rix0rrr commented Nov 10, 2020

I probably just don't have enough domain knowledge here.

Can you explain to me why a default encryption setting on the bucket is not good enough to achieve your encryption at rest requirements?

@rix0rrr rix0rrr added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Nov 10, 2020
@CaseyBurnsSv
Author

The issue is that we have organizational service control policies in place that deny PutObject requests that do not explicitly specify the x-amz-server-side-encryption header. This is a security control commonly applied to enforce the use of encryption across an AWS Organization. Relying on default S3 encryption does not set the required headers.

We need the option of specifying the encryption method and key in the PutObject request sent by the CDK.

For example, the AWS CLI command aws cloudformation deploy allows us to specify this with the --kms-key-id parameter, and sam deploy provides the same parameter.
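The SCP's per-request check can be captured by a minimal predicate. This is an illustrative sketch, not actual IAM policy-evaluation logic:

```python
def scp_allows_put(request_headers):
    """Mimic the SCP: allow PutObject only if the SSE header is present
    with an accepted value; deny otherwise (as cdk deploy experienced)."""
    return request_headers.get("x-amz-server-side-encryption") in ("aws:kms", "AES256")

print(scp_allows_put({}))                                           # False: denied
print(scp_allows_put({"x-amz-server-side-encryption": "aws:kms"}))  # True: allowed
```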

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Nov 16, 2020
@rix0rrr
Contributor

rix0rrr commented Nov 23, 2020

So if I'm understanding correctly, it's not because of any functional reason, right?

It's not because with default encryption you couldn't achieve the desired end result of files being encrypted with a particular KMS key.

It's because your current set of security policies misses and rejects this particular kind of configuration, even though it would be a valid setup that would achieve the desired end result.

@rix0rrr rix0rrr added effort/medium Medium work item – several days of effort p2 labels Nov 23, 2020
@CaseyBurnsSv
Author

That is correct. This preventive security control (SCP) is on the PutObject request to S3, not on the state of the bucket itself.

@JanWe92

JanWe92 commented Nov 24, 2020

I reproduced this behaviour and we have the same issue. We need the same SCP settings for the staging bucket, which we modified via the bootstrap template, but we cannot deploy the defined stack because of an "Access Denied" failure.

@hmcmanus

I've just come across the same issue today. Any workarounds on this one that people have tried?

@openfinch

Adding on to this: it's a strange feature to leave out, considering it exists in other AWS-provided tools. This is a fairly common SCP in larger environments.

@rix0rrr rix0rrr removed their assignment Jun 3, 2021
@Ettery

Ettery commented Aug 17, 2021

We have the same problem - an enterprise-level requirement to specify the header(s) as part of the approach to securing all S3 buckets. There is no way to enforce that all S3 buckets in an account are created with SSE switched on, so this approach of enforcing headers on the s3:PutObject call is Amazon's recommended approach for organisations wanting to enforce encryption at rest.

End result is that CDK is unable to deploy assets. We'd like to move from Terraform to CDK, but this is stopping us.

@cagdas-carbon

Are there any plans to add an argument to cdk deploy to fix this?

@ArlindNocaj
Contributor

@CaseyBurnsSv
It seems that CDK 2.0 has additional features regarding bootstrapping (Amazon S3 bucket, AWS KMS key, SSM parameter for versioning). Could this be helpful in resolving the above issue? https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html#bootstrapping-templates

It would be great if someone with the infrastructure setup could evaluate it.
@rix0rrr Maybe someone from your team could investigate whether there is a workaround with CDK 2.0?

@cagdas-carbon

cagdas-carbon commented Oct 8, 2021

@ArlindNocaj

I tried the new-style bootstrapping. There is support for an SSM parameter for bootstrapping, but the issue is that the deploy command doesn't support the same. The problem occurs when it tries to upload the template to S3 before the deployment (when the template is larger than 50 KB).

I didn't take a deeper look, but potentially the problem is that the S3 upload here does not support the encryption flag.

@NukaCody

NukaCody commented Oct 15, 2021

You can customize the bootstrap template that defines the asset bucket (where assets are uploaded) (docs)

cdk bootstrap --show-template > bootstrap.yml

Then modify the bucket policy to fit your organization's SCP.

An (untested) example would be something like the one below.

  StagingBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: StagingBucket
      PolicyDocument:
        Id: AccessControl
        Version: "2012-10-17"
        Statement:
          # Deny any PutObject that does not explicitly request aws:kms SSE.
          # (Scoped to s3:PutObject: with s3:*, the absent header would also
          # deny reads such as GetObject.)
          - Sid: DenyIncorrectEncryptionHeader
            Action: s3:PutObject
            Effect: Deny
            Resource:
              - Fn::Sub: ${StagingBucket.Arn}
              - Fn::Sub: ${StagingBucket.Arn}/*
            Condition:
              StringNotEquals:
                "s3:x-amz-server-side-encryption": "aws:kms"
            Principal: "*"

I don't have an org with the SCP set up so I would double check that the yml works with your respective org SCPs.

This type of modification shouldn't break the bootstrapping contract.

Then you can deploy it via cdk bootstrap --template bootstrap.yml

EDIT:

The example above uses aws:kms, which means it could potentially break the file-publishing role if that role does not have access to the respective KMS key, so you may require additional modification if you go down that path. If aws:kms is not needed to meet your org's SCP, it may be easier to go down the SSE-S3 path mentioned in the earlier linked blog.

@ArlindNocaj
Contributor

I evaluated this approach by adding the following StagingBucket policy and bootstrapping with cdk bootstrap --bootstrap-kms-key-id 1234key --template bootstrap.yml

  StagingBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: StagingBucket
      PolicyDocument:
        Id: AccessControl
        Version: "2012-10-17"
        Statement:
          - Sid: DenyIncorrectEncryptionHeader
            Action: s3:PutObject
            Effect: Deny
            Resource:
              - Fn::Sub: ${StagingBucket.Arn}
              - Fn::Sub: ${StagingBucket.Arn}/*
            Condition:
              StringNotEquals:
                s3:x-amz-server-side-encryption: "aws:kms"
            Principal: "*"
          - Sid: DenyUnEncryptedObjectUploads
            Action: s3:PutObject
            Effect: Deny
            Resource:
              - Fn::Sub: ${StagingBucket.Arn}
              - Fn::Sub: ${StagingBucket.Arn}/*
            Condition:
              "Null":
                "s3:x-amz-server-side-encryption": "true"
            Principal: "*"

While this ensures the S3 CDK staging bucket has the right policies, the cdk deploy command does not seem to support uploading to S3 using a KMS key, and the problem still persists.

@bartdevriendt

We need this as well. In our enterprise environment, we require every file uploaded to S3 to be encrypted. In Terraform this is solved by defining the encryption on the backend:

backend "s3" {
  bucket     = "..."
  region     = "eu-west-1"
  key        = "..."
  kms_key_id = "..."
  encrypt    = true
}

This sets the x-amz-server-side-encryption header.

ArlindNocaj added a commit to ArlindNocaj/aws-cdk that referenced this issue Nov 24, 2021
…ncryption. Added s3:GetEncryptionConfiguration to bootstrap-template to be able to read the s3 bucket encryption with file-publishing role. (aws#11265)
ArlindNocaj added a commit to ArlindNocaj/aws-cdk that referenced this issue Nov 24, 2021
…nfluence the currently bootstrapped cdk projects. (aws#11265)
ArlindNocaj added a commit to ArlindNocaj/aws-cdk that referenced this issue Nov 24, 2021
@mergify mergify bot closed this as completed in #17668 Nov 26, 2021
mergify bot pushed a commit that referenced this issue Nov 26, 2021
…tion SCP (introduces bootstrap stack v9) (#17668)

Many organizations around the world have started to use a specific Service Control Policy (SCP) from this blog post: https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/ in order to make sure all S3 bucket uploads are encrypted.

CDK configures the `DefaultEncryptionConfiguration` on the bucket so that objects are always encrypted by default, but this specific SCP can only check that individual upload actions include the "enable encryption" option. That means that even though the end result would still be satisfied (objects are encrypted in S3), the SCP would nevertheless reject the CDK upload. We would rather people use AWS Config to make sure all buckets have `DefaultEncryptionConfiguration` set, so that this SCP is not necessary... but there is no arguing with deployed reality.

Change the CDK upload code path to first read out the default encryption configuration from the bucket, only to then mirror those exact same settings in the `PutObject` call so that the SCP can see that they are present.

This requires adding a new permission to the `cdk-assets` role, namely `s3:GetEncryptionConfiguration`, so requires a new bootstrap stack version: version 9.

If you want this new behavior because your organization applies this specific SCP, you must upgrade to bootstrap stack version 9. If you do not care about this new behavior you don't have to do anything: if the call to `getEncryptionConfiguration` fails, the CDK will fall back to the old behavior (not specifying any header).
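The mirroring step described above can be sketched roughly as follows. This is illustrative, not the CDK's actual implementation; the KMS key alias in the sample response is a hypothetical placeholder:

```python
# Rough sketch of the fix's approach: translate a bucket's default
# encryption (as returned by GetBucketEncryption) into the matching
# PutObject parameters, so the SCP sees the SSE header on each upload.

def mirror_default_encryption(encryption_config):
    """Map a GetBucketEncryption-shaped response to PutObject kwargs."""
    rules = encryption_config["ServerSideEncryptionConfiguration"]["Rules"]
    default = rules[0]["ApplyServerSideEncryptionByDefault"]
    kwargs = {"ServerSideEncryption": default["SSEAlgorithm"]}
    if "KMSMasterKeyID" in default:
        kwargs["SSEKMSKeyId"] = default["KMSMasterKeyID"]
    return kwargs

# Example GetBucketEncryption-shaped response (hypothetical key alias):
config = {
    "ServerSideEncryptionConfiguration": {
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/cdk-bootstrap-key",
            }
        }]
    }
}
# mirror_default_encryption(config) yields
# {"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "alias/cdk-bootstrap-key"}
```

If the GetBucketEncryption call fails (e.g. the role lacks s3:GetEncryptionConfiguration), the CDK falls back to the old behavior, as noted above.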

Fixes #11265.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

beezly pushed a commit to beezly/aws-cdk that referenced this issue Nov 29, 2021
…tion SCP (introduces bootstrap stack v9) (aws#17668)
pedrosola pushed a commit to pedrosola/aws-cdk that referenced this issue Dec 1, 2021
…tion SCP (introduces bootstrap stack v9) (aws#17668)
TikiTDO pushed a commit to TikiTDO/aws-cdk that referenced this issue Feb 21, 2022
…tion SCP (introduces bootstrap stack v9) (aws#17668)