Updating a BucketDeployment's memoryLimit causes stack update failure #7128
Comments
Hi @tedinski - thanks for reporting this. Writing down my initial investigation for reference for when we work on the resolution: what happens is that we use the lambda function arn as the service token of the custom resource, so when the lambda configuration (e.g. `memoryLimit`) changes, the service token changes as well, and CloudFormation does not allow modifying the service token of an existing custom resource.

Proposed solution: incorporate the lambda config in the custom resource id so that a new custom resource will be created, with a new service token.
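A rough sketch of that proposal (hypothetical names, not the code that eventually shipped): fold the relevant pieces of the function configuration into the construct id of the custom resource, so that changing `memoryLimit` yields a new logical id, and therefore a new resource with a new service token, rather than an in-place service-token change.

```typescript
import * as crypto from 'crypto';

// Hypothetical helper: derive a short hash from the Lambda configuration
// values that force a different function (memoryLimit, vpc, ...).
function configHash(memoryLimit?: number, vpcId?: string): string {
  return crypto
    .createHash('md5')
    .update(JSON.stringify({ memoryLimit, vpcId }))
    .digest('hex')
    .slice(0, 8);
}

// The custom resource id would then include the hash, e.g.
// `CustomResource${configHash(1792, 'vpc-123')}`, so that updating the
// memory limit creates a new custom resource instead of modifying the
// service token of the existing one.
```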
What's a proposed workaround for this? I don't want to blow up my stack only to update bucket deployments.
@Khalian Apologies for the delayed response. You can change the
I suspect this lambda does not detect when the awscli command terminates due to an OOM error. The problem mainly occurs when trying to BucketDeploy 161 local files totalling 133 MB using the default memoryLimit.
I don't understand why uploading files through the CDK is so painful compared with the SDK. The SDK just works and is faster.
This just bit us today. We needed to bump the memory limit on the S3Deployment lambda, and when we tried to change it, it created a new resource and deleted the old one. The problem being that it deleted the files it had just updated. I'm baffled as to why the memory limit is encoded in the CDK-generated resource ID. Lambda memory changes are no-replacement updates.
+1
+1 we just got bitten by this
+1 same here
Dudes you have to increase the memoryLimit to fix the problem.
+1 same issue
…te failure (#17530)

This fixes the issue where updating the memoryLimit or vpc of the BucketDeployment resource would result in a stack update failure. In order to fix this, the ID of the BucketDeployment CustomResource includes info on the memoryLimit and vpc in the same way that the Lambda function does. This means that when either of these values is updated, the BucketDeployment CustomResource will be recreated along with the Lambda function. If anyone is setting `retainOnDelete=false` (default is `true`), this change would result in the data in the bucket being deleted. In order to avoid this scenario, this PR introduces a bucket tag that controls whether or not a BucketDeployment resource can delete data from the bucket.

The BucketDeployment resource will now add a tag to the deployment bucket with a format like `aws-cdk:cr-owned:{keyPrefix}:{uniqueHash}`. For example:

```
{
  Key: 'aws-cdk:cr-owned:deploy/here/:240D17B3',
  Value: 'true',
}
```

Each bucket+keyPrefix can be "owned" by one or more BucketDeployment resources. Since there are some scenarios where multiple BucketDeployment resources can deploy to the same bucket and key prefix (e.g. using include/exclude), we also append part of the id to make the key unique. As long as a bucket+keyPrefix is "owned" by a BucketDeployment resource, another CR cannot delete data. There are a couple of scenarios where this comes into play.

1. If the LogicalResourceId of the CustomResource changes (e.g. memoryLimit is updated), CloudFormation will first issue a 'Create' to create the new CustomResource and will update the tag on the bucket. CloudFormation will then issue a 'Delete' on the old CustomResource, and since the new CR "owns" the bucket+keyPrefix (indicated by the presence of the tag), the old CR will not delete the contents of the bucket.
2. If the BucketDeployment resource is deleted _and_ it is the only CR for that bucket+keyPrefix, then CloudFormation will first remove the tag from the bucket and then issue a "Delete" to the CR. Since there are no tags indicating that this bucket+keyPrefix is "owned", it will delete the contents.
3. If the BucketDeployment resource is deleted _and_ it is *not* the only CR for that bucket+keyPrefix, then CloudFormation will first remove the tag from the bucket and then issue a "Delete" to the CR. Since there are other CRs that also "own" that bucket+keyPrefix (indicated by the presence of tags), there will still be a tag on the bucket and the contents will not be removed. The contents will only be removed after _all_ BucketDeployment resources that "own" the bucket+keyPrefix have been removed.
4. If the BucketDeployment resource _and_ the S3 Bucket are both removed, then CloudFormation will first issue a "Delete" to the CR, and since there is a tag on the bucket the contents will not be removed. If you want the contents of the bucket to be removed on bucket deletion, then the `autoDeleteObjects` property should be set to true on the Bucket.

Scenarios 3 and 4 are changes to the existing functionality in that they now do *not* delete data in some scenarios where they did previously.

fixes #7128

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
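To make the ownership check concrete, here is a minimal sketch of how a delete handler could consult those tags before removing objects. This is not the code from the PR; it assumes the AWS SDK for JavaScript v3, and `otherCrOwnsPrefix`, `onDelete`, and `ownHash` are hypothetical names for illustration only.

```typescript
import { S3Client, GetBucketTaggingCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

// Hypothetical helper: returns true if a tag of the form
// `aws-cdk:cr-owned:{keyPrefix}:{hash}` from some *other* custom resource
// is still present on the bucket.
async function otherCrOwnsPrefix(bucket: string, keyPrefix: string, ownHash: string): Promise<boolean> {
  try {
    const { TagSet = [] } = await s3.send(new GetBucketTaggingCommand({ Bucket: bucket }));
    return TagSet.some(
      (tag) =>
        tag.Key !== undefined &&
        tag.Key.startsWith(`aws-cdk:cr-owned:${keyPrefix}:`) &&
        !tag.Key.endsWith(`:${ownHash}`)
    );
  } catch {
    // GetBucketTagging throws if the bucket has no tags at all:
    // nothing claims ownership, so treat the prefix as unowned.
    return false;
  }
}

// Sketch of the custom resource's delete path: only wipe the prefix when
// the user opted out of retaining objects and no other CR owns it.
async function onDelete(bucket: string, keyPrefix: string, ownHash: string, retainOnDelete: boolean): Promise<void> {
  if (retainOnDelete) return;
  if (await otherCrOwnsPrefix(bucket, keyPrefix, ownHash)) return;
  // ... delete the objects under keyPrefix here (omitted) ...
}
```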
Changing the `memoryLimit` of an existing `BucketDeployment` (`@aws-cdk/aws-s3-deployment`) causes the CloudFormation deploy of the update to fail with the error "Modifying service token is not allowed."

Reproduction Steps
Create a bucket deployment:
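A minimal example of such a deployment (the bucket, asset path, and construct ids below are placeholders, using the CDK v1 packages named above) might look like:

```typescript
import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';
import * as s3deploy from '@aws-cdk/aws-s3-deployment';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ReproStack');

const bucket = new s3.Bucket(stack, 'WebsiteBucket');

// Initial deployment, relying on the default memoryLimit (128 MB).
new s3deploy.BucketDeployment(stack, 'DeployWebsite', {
  sources: [s3deploy.Source.asset('./website-dist')],
  destinationBucket: bucket,
});
```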
Then add `memoryLimit: 1792` (for example) to its properties and redeploy. The CFN changeset will fail to deploy and roll back.
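Concretely, the only change to the sketch above would be the added property (same placeholder names as before):

```typescript
new s3deploy.BucketDeployment(stack, 'DeployWebsite', {
  sources: [s3deploy.Source.asset('./website-dist')],
  destinationBucket: bucket,
  memoryLimit: 1792, // this update is what triggers the failure below
});
```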
Error Log

CloudFormation reports the error "Modifying service token is not allowed." for the "BucketDeploymentCustomResource".
Environment
Workaround

For anyone struggling with this issue, here's how I worked around the problem: remove the `BucketDeployment` from the stack and deploy, then add it back with the new `memoryLimit` and deploy again. In retrospect, just changing the construct name (add a `2` to the end or something) to destroy the old and create the new would probably also have worked. I haven't tested that.
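For reference, the rename route would look something like this (construct ids and paths are placeholders); the new id forces CloudFormation to create a fresh custom resource and delete the old one, rather than attempt an in-place service-token update:

```typescript
// Previously: new s3deploy.BucketDeployment(stack, 'DeployWebsite', { ... });
new s3deploy.BucketDeployment(stack, 'DeployWebsite2', {
  sources: [s3deploy.Source.asset('./website-dist')],
  destinationBucket: bucket,
  memoryLimit: 1792,
});
```

Note that, as described in the PR above, the delete of the replaced custom resource can remove the freshly deployed files when `retainOnDelete` is set to `false` (the default is `true`).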
Other

I noticed this was noted on the original pull request by someone else as well: #4204 (comment)
Additional request
I strongly believe the default `memoryLimit` should be set to 1792, which is the documented number for a full vCPU for a Lambda. Our experience with even a small 2 MB zip file showed we had a 40 s deploy at 128 MB and a 3 s deploy at 1792 MB. That's 13x faster at 14x the resources, a basically linear speedup, which means it's nearly free in terms of cost. I strongly believe there's no reason whatsoever for this lambda to ever have a smaller memory limit, which means this is a very bad default.
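As a quick sanity check on the "nearly free" claim, using the timings quoted above and Lambda's GB-second billing model:

```typescript
// Lambda compute is billed in GB-seconds (memory × duration).
const gbSecondsAt128 = (128 / 1024) * 40;  // = 5.0  GB-s for the 40 s deploy
const gbSecondsAt1792 = (1792 / 1024) * 3; // = 5.25 GB-s for the 3 s deploy
// Roughly the same billed compute, while the deploy finishes ~13x faster.
```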
Of course, to safely update that default, this bug needs fixing first. Otherwise everyone's deploy will fail.
This is a 🐛 Bug Report