
Releases: cloudposse/terraform-aws-s3-bucket

v4.2.0

04 Mar 03:23
d8bc15d
Added IP-based statement in bucket policy @soya-miyoshi (#216)

what

  • Allows users to specify a list of source IP addresses from which access to the S3 bucket is allowed.
  • Adds a dynamic statement that uses the NotIpAddress condition to deny access from any IP address not listed in the source_ip_allow_list variable.
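
For illustration, a minimal sketch of how this input might be used (the bucket name and CIDR blocks are hypothetical; only source_ip_allow_list is taken from the notes above):

  module "s3_bucket" {
    source  = "cloudposse/s3-bucket/aws"
    version = "4.2.0"

    name = "restricted-bucket" # hypothetical bucket name

    # Only requests originating from these CIDR blocks are allowed
    source_ip_allow_list = ["203.0.113.0/24", "198.51.100.7/32"]
  }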

why

Use cases:

  • Restricting access to specific physical locations, such as an office or home network

references

v4.1.0

03 Mar 20:06
b497874

🚀 Enhancements

fix: use for_each instead of count in aws_s3_bucket_logging @wadhah101 (#212)

what

Replaced count with for_each inside aws_s3_bucket_logging.default.

There is no point in the try() since the type is clearly defined as a list.

why

When bucket_name within the logging attribute is defined dynamically, for example when referencing a bucket created by Terraform for logging:

  logging = [
    {
      bucket_name = module.logging_bucket.bucket_id
      prefix      = "data/"
    }
  ]

we get a plan-time error (see the screenshot in #212).

Using for_each works better in this case and resolves the error.
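
A rough sketch of the for_each approach (not the module's exact code; the resource arguments are those of aws_s3_bucket_logging):

  resource "aws_s3_bucket_logging" "default" {
    # Convert the list input into a map keyed by index so it can drive for_each
    for_each = { for idx, log in var.logging : idx => log }

    bucket        = aws_s3_bucket.default.id
    target_bucket = each.value.bucket_name
    target_prefix = each.value.prefix
  }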

references

🤖 Automatic Updates

Update README.md and docs @cloudpossebot (#214)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

Update README.md and docs @cloudpossebot (#213)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

Update README.md and docs @cloudpossebot (#209)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

v4.0.1

15 Nov 21:57
eaaee29

🐛 Bug Fixes

Fix bug in setting dynamic `encryption_configuration` value @LawrenceWarren (#206)

what

  • When trying to create an S3 bucket, the following error is encountered:
Error: Invalid dynamic for_each value

  on .terraform/main.tf line 225, in resource "aws_s3_bucket_replication_configuration" "default":
 225:           for_each = try(compact(concat(
 226:             [try(rule.value.destination.encryption_configuration.replica_kms_key_id, "")],
 227:             [try(rule.value.destination.replica_kms_key_id, "")]
 228:           ))[0], [])
    ├────────────────
    │ rule.value.destination.encryption_configuration is null
    │ rule.value.destination.replica_kms_key_id is "arn:aws:kms:my-region:my-account-id:my-key-alias"

Cannot use a string value in for_each. An iterable collection is required.
  • This is caused in my case by having s3_replication_rules.destination.encryption_configuration.replica_kms_key_id set.

why

  • There is a bug when trying to create an S3 bucket, which causes an error that stops the bucket being created

    • Basically, there are two attributes that do the same thing (for backwards compatibility)
      • s3_replication_rules.destination.encryption_configuration.replica_kms_key_id (newer)
      • s3_replication_rules.destination.replica_kms_key_id (older)
    • There is logic to:
      • A) use the newer of these two attributes
      • B) fall back to the older of the attributes if it is set and the newer is not
      • C) fall back to an empty array if nothing is set
    • There is a bug in steps A/B, whereby selecting one or the other we end up with a string value, not an iterable
    • The simplest solution, which I have tested successfully on existing buckets, is to wrap the output of that logic in a list
  • This error is easily reproducible by running compact(concat([try("string", "")], [try("string", "")]))[0] in the Terraform console, which is a simplified version of the existing logic used above (see the reproduction after the table below)

  • The table below demonstrates the possible values of the existing code - you can see the outputs for value 2, value 3, and value 4 are not lists:

| Key    | Value 1 | Value 2   | Value 3   | Value 4   |
|--------|---------|-----------|-----------|-----------|
| newer  | null    | "string1" | null      | "string1" |
| older  | null    | null      | "string2" | "string2" |
| output | []      | "string1" | "string2" | "string1" |
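
For reference, the Terraform console reproduction mentioned above behaves roughly like this (a sketch; the string values are placeholders):

  # Simplified reproduction in `terraform console`:
  #   > compact(concat([try("string", "")], [try("string", "")]))[0]
  #   "string"       <- a plain string; for_each requires an iterable
  #
  # Wrapping the result in a list, as described above, yields an iterable:
  #   > [compact(concat([try("string", "")], [try("string", "")]))[0]]
  #   ["string"]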

v4.0.0

26 Aug 04:45
97ef30c
Bug fixes and enhancements combined into a single breaking release @aknysh (#202)

Breaking Changes

Terraform version 1.3.0 or later is now required.

policy input removed

The deprecated policy input has been removed. Use source_policy_documents instead.

Convert from

policy = data.aws_iam_policy_document.log_delivery.json

to

source_policy_documents = [data.aws_iam_policy_document.log_delivery.json]

Do not use list modifiers like sort, compact, or distinct on the list, or it will trigger an Error: Invalid count argument. The length of the list must be known at plan time.

Logging configuration converted to list

To fix #182, the logging input has been converted to a list. If you have a logging configuration, simply surround it with brackets.
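
A sketch of the conversion (bucket name and prefix are illustrative; the v3 object shape is assumed to use the same attribute names):

  # v3.x: single object
  logging = {
    bucket_name = "my-log-bucket"
    prefix      = "logs/"
  }

  # v4.x: list of objects
  logging = [
    {
      bucket_name = "my-log-bucket"
      prefix      = "logs/"
    }
  ]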

Replication rules brought into alignment with Terraform resource

Previously, the s3_replication_rules input had some deviations from the aws_s3_bucket_replication_configuration Terraform resource. Via the use of optional attributes, the input now closely matches the resource while providing backward compatibility, with a few exceptions.

  • Replication source_selection_criteria.sse_kms_encrypted_objects was documented as an object with one member, enabled, of type bool. However, it only worked when set to the string "Enabled". It has been replaced with the resource's choice of status of type String.
  • Previously, Replication Time Control could not be set directly. It was implicitly enabled by enabling Replication Metrics. We preserve that behavior even though we now add a configuration block for replication_time. To enable Metrics without Replication Time Control, you must set replication_time.status = "Disabled".
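
A sketch of the relevant fragment of s3_replication_rules (the nesting is assumed to mirror the aws_s3_bucket_replication_configuration resource; the id and bucket ARN are illustrative):

  s3_replication_rules = [
    {
      id     = "replicate-everything" # illustrative
      status = "Enabled"

      destination = {
        bucket = "arn:aws:s3:::my-replica-bucket" # illustrative

        metrics = {
          status = "Enabled"
        }

        # Without this block, enabling Metrics also implicitly enables Replication Time Control
        replication_time = {
          status = "Disabled"
        }
      }
    }
  ]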

These are not changes, just continued deviations from the resources:

  • existing_object_replication cannot be set.
  • token to allow replication to be enabled on an Object Lock-enabled bucket cannot be set.

what

  • Remove the local.source_policy_documents local and the deprecated variable policy (because of that, bump the module to a major version)
  • Convert lifecycle_configuration_rules and s3_replication_rules from loosely typed objects to fully typed objects with optional attributes.
  • Use local bucket_id variable
  • Remove comments suppressing Bridgecrew rules
  • Update tests to Golang 1.20

why

  • The number of policy documents needs to be known at plan time. The default value of policy was empty, meaning it had to be removed based on its content, which would not be known at plan time if the policy input was being generated.
  • Closes #167, supersedes and closes #163, and generally makes these inputs easier to deal with, since they now have type checking and partial defaults, meaning the inputs can be much smaller.
  • Incorporates and closes #197. Thank you @nikpivkin
  • Suppressing Bridgecrew rules that Cloud Posse does not like should be done via external configuration, so that users of this module have the option of enforcing those rules.
  • Security and bug fixes

explanation

List manipulation functions should not be used in count, since they can lead to this error:

│ Error: Invalid count argument

│   on ./modules/s3_bucket/main.tf line 462, in resource "aws_s3_bucket_policy" "default":
│  462:   count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || length(local.source_policy_documents) > 0) ? 1 : 0

│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to
│ first apply only the resources that the count depends on.

Using the local like this

source_policy_documents = var.policy != "" && var.policy != null ? concat([var.policy], var.source_policy_documents) : var.source_policy_documents

would not work either if var.policy depends on apply-time resources from other TF modules.

General rules:

  • When using for_each, the map keys have to be known at plan time (the map values are not required to be known at plan time)

  • When using count, the length of the list must be known at plan time, but the items inside the list do not. That does not mean the list must be static with its length known in advance; it can be dynamic and come from remote state or data sources, which Terraform evaluates first during plan. It just cannot come from other resources (which are only known after apply)

  • When using count, no list-manipulating functions can be used in count, as illustrated below - in some cases this leads to the The "count" value depends on resource attributes that cannot be determined until apply error
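
A minimal illustration of the count rules (fragments with hypothetical names):

  # Fine: var.items is a variable (or could come from a data source),
  # so its length is known at plan time
  count = length(var.items)

  # Problematic: compact() drops empty strings, and whether a resource
  # attribute is empty is only known after apply, so the resulting length
  # may be unknown at plan time
  count = length(compact([aws_s3_bucket.example.id, var.optional_item]))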

v3.1.3

03 Aug 21:40
d7a4943

Unfortunately, the change in this release makes count unknown at plan time in certain situations. In general, you cannot use the output of compact() in count.

The solution is to stop using the deprecated policy input, and either revert to 3.1.2 or upgrade to 4.0.

🚀 Enhancements

Fix `source_policy_documents` combined with `var.policy` being ignored @johncblandii (#201)

what

  • Changed var.source_policy_documents to local.source_policy_documents so that var.policy usage is still supported

why

  • The ternary check used var.source_policy_documents, so a policy supplied via var.policy (combined into local.source_policy_documents) did not make the ternary evaluate to true, and the policy was ignored
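
Roughly, the fix combines the two inputs into a local and has the resource test that local instead (a sketch consistent with the code shown in the v4.0.0 notes above):

  locals {
    source_policy_documents = var.policy != "" && var.policy != null ? concat([var.policy], var.source_policy_documents) : var.source_policy_documents
  }

  # ...and the aws_s3_bucket_policy count ternary now checks
  # local.source_policy_documents instead of var.source_policy_documents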

references

v3.1.2 Fix Public Bucket Creation

03 Jun 03:03
7030cbd

What's Changed

New Contributors

Full Changelog: 3.1.1...3.1.2

v3.1.1

09 May 03:37
eaea302

🐛 Bug Fixes

Revert change to Transfer Acceleration from #178 @Nuru (#180)

what

  • Revert change to Transfer Acceleration from #178

why

  • Transfer Acceleration is not available in every region, and the change in #178 (meant to detect and correct drift) does not work (throws API errors) in regions where Transfer Acceleration is not supported

v3.1.0 Support new AWS S3 defaults (ACL prohibited)

06 May 03:31
8e44ce1

Note: this version introduced drift detection and correction for Transfer Acceleration. Unfortunately, that change prevents deployment of buckets in regions that do not support Transfer Acceleration. Version 3.1.1 reverts that change so that S3 buckets can be deployed by this module in all regions. It does, however, mean that when var.transfer_acceleration_enabled is false, Terraform does not track or revert changes to Transfer Acceleration made outside of this module.

Make compatible with new S3 defaults. Add user permissions boundary. @Nuru (#178)

what

  • Make compatible with new S3 defaults by setting S3 Object Ownership before setting ACL and disabling ACL if Ownership is "BucketOwnerEnforced"
  • Add optional permissions boundary input for IAM user created by this module
  • Create aws_s3_bucket_accelerate_configuration and aws_s3_bucket_versioning resources even when the feature is disabled, to enable drift detection

why

  • S3 buckets with ACLs were failing to be provisioned because the ACL was set before the bucket ownership was changed
  • Requested feature
  • See #171

references

  • Closes #174
  • Supersedes and closes #175
  • Supersedes and closes #176
Always include `aws_s3_bucket_versioning` resource @mviamari (#172)

what

  • Always create an aws_s3_bucket_versioning resource to track changes made to bucket versioning configuration

why

  • When there is no aws_s3_bucket_versioning resource, the expectation is that bucket versioning is disabled/suspended for the bucket. If bucket versioning is turned on outside of Terraform (e.g. through the console), the change is not detected by Terraform unless the aws_s3_bucket_versioning resource exists.

references

Add support for permission boundaries on replication IAM role @mchristopher (#170)

what

why

  • Our AWS environment enforces permission boundaries on all IAM roles to follow AWS best practices with security.

references

🤖 Automatic Updates

Update README.md and docs @cloudpossebot (#164)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

v3.0.0 Static Website Support, remove awsutils provider

07 Sep 22:41
6837ed7

Breaking changes

This release has what can be considered breaking changes, but mostly because it either reverts breaking changes introduced in v2.0.2 or fixes features that were previously broken and unusable.

  • If an IAM user and access key is created by this module, the AWS Access Key does not expire, restoring the behavior in and prior to v2.0.1. In v2.0.2 and v2.0.3, keys expired in 30 days. If you are upgrading from v2.0.1 or earlier, this is not a breaking change.
  • The website_inputs input is replaced by website_configuration and website_redirect_all_requests_to. The cors_rule_inputs input is replaced by cors_configuration. Thanks to @jurgen-weber-deltatre for helping with this. If you were not using these inputs, then this is not a breaking change.

If neither of the above issues affects you, then there are no breaking changes between v2.0.0 and this release and you can safely upgrade without making any changes to your code.

New Features

  • The breaking change introduced in v2.0.2 that required you to initialize the cloudposse/awsutils Terraform provider with the AWS region has been reverted. This module no longer uses that provider.
  • Support for S3 static websites is greatly improved. Configure with website_configuration and cors_configuration, or with website_redirect_all_requests_to. The website endpoint and base domain are now available as outputs.
  • You can now store the IAM user's access key in SSM via store_access_key_in_ssm. When stored in SSM, the secret key is not output by this module as a Terraform output, preventing it from being stored unencrypted in the Terraform state file.
  • You can now create a user but not create an access key by setting access_key_enabled = false. You can also use this feature to rotate an access key by setting it to false and applying to delete the key, then setting it to true and applying to create a new one.
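
A minimal sketch of the new inputs (values and the user_enabled input name are assumptions; access_key_enabled, store_access_key_in_ssm, and ssm_base_path are named in these notes):

  module "s3_bucket" {
    source  = "cloudposse/s3-bucket/aws"
    version = "3.0.0"

    name = "example-bucket" # hypothetical

    user_enabled            = true          # assumed input name for creating the IAM user
    access_key_enabled      = true          # set to false to skip creating (or to rotate) the access key
    store_access_key_in_ssm = true          # keep the secret key out of Terraform outputs and state
    ssm_base_path           = "/s3_user/"   # illustrative
  }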

Note that in general we now recommend against creating an IAM user, and recommend using AWS OIDC to create an authentication path for users and systems that do not have native IAM credentials. Also note that you can assign permissions to existing AWS users and roles via grants or privileged_principal_arns.

what && why

  • Update terraform-aws-s3-user to v1.0.0 and add inputs access_key_enabled, store_access_key_in_ssm, and ssm_base_path in order to
    • Make creating an IAM key for the S3 user optional
    • Enable saving the IAM key in SSM Parameter Store and omitting it from Terraform state
    • Remove dependency on cloudposse/awsutils Terraform provider. See terraform-aws-iam-system-user v1.0.0 Release Notes for further details and justification.
  • Replace input website_inputs (which never worked) with website_configuration and website_redirect_all_requests_to. See #142 for further details and justification.
  • Replace input cors_rule_inputs with cors_configuration to match resource name.

references

  • Implements and closes #3
  • Fixes #141
  • Supersedes and closes #142
  • Obsoletes and closes #151
  • Supersedes and closes #154
  • Obsoletes and closes #155
  • Supersedes and closes #157

v2.0.3

02 Jul 02:07
caf2af9
Pre-release

Deprecated

The changes introduced in v2.0.2 were problematic and have been removed in v3.0.0. It is not recommended to use this version or v2.0.2.

🤖 Automatic Updates

Update Terraform cloudposse/iam-s3-user/aws to v0.15.10 @renovate (#153)

This PR contains the following updates:

| Package | Type | Update | Change |
|---------|------|--------|--------|
| cloudposse/iam-s3-user/aws (source) | module | patch | 0.15.9 -> 0.15.10 |