1.2.0 errors out with "Module is incompatible with count, for_each, and depends_on" #31081

Closed
syphernl opened this issue May 19, 2022 · 32 comments · Fixed by #31091
Labels
bug · confirmed (a Terraform Core team member has reproduced this issue) · explained (a Terraform Core team member has described the root cause of this issue in code) · v1.2 (issues, primarily bugs, reported against v1.2 releases)

Comments

@syphernl

Terraform Version

Terraform v1.2.0
on linux_amd64

Terraform Configuration Files

module "iam_task_policy_aggregated" {
  source = "git::https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator.git?ref=0.8.0"

  count = local.enable_task_policy ? 1 : 0

  source_documents = [
    # Allow to READ, WRITE and DELETE objects in the S3 buckets
    local.s3_enable_read ? join("", module.iam_task_policy.*.grant_s3_bucket_read_access) : "",
    local.s3_enable_write ? join("", module.iam_task_policy.*.grant_s3_bucket_write_access) : "",
    local.s3_enable_delete ? join("", module.iam_task_policy.*.grant_s3_bucket_delete_access) : "",

    # Allow to Invoke given Lambda Functions
    length(var.lambda_function_names) > 0 ? join("", module.iam_task_policy.*.grant_lambda_invoke_access) : "",
  ]
}

Debug Output

Expected Behavior

Actual Behavior

An error is shown

There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Module is incompatible with count, for_each, and depends_on
│
│   on ecs_task_policy.tf line 54, in module "iam_task_policy_aggregated":
│   54:   count = local.enable_task_policy ? 1 : 0
│
│ The module at module.app.module.ecs_codepipeline.module.github_webhooks is
│ a legacy module which contains its own local provider configurations, and
│ so calls to it may not use the count, for_each, or depends_on arguments.
│
│ If you also control the module
│ "registry.terraform.io/cloudposse/repository-webhooks/github", consider
│ updating this module to instead expect provider configurations to be passed
│ by its caller.
╵
time=2022-05-19T13:33:16+02:00 level=error msg=1 error occurred:
	* exit status 1

The changelog, however, mentions nothing about changes regarding the use of count on a module?

Steps to Reproduce

  1. terraform init
  2. terraform plan

Additional Context

References

@syphernl syphernl added the bug and new (new issue not yet triaged) labels May 19, 2022
@Flydiverny

We are also seeing this error in a lot of places. For us it seems a bit random: we have multiple deployment targets that re-use the same Terraform code, but we get variations of this error depending on the target 😅

None of the errors point at the Terraform code related to the actual module that has the provider inside 😕

  │ Error: Module is incompatible with count, for_each, and depends_on
  │ 
  │   on cluster-v2.tf line 35, in module "kiwi":
  │   35:   for_each = { for cert in var.cluster_kiwi : kiwi.id => kiwi }
  │ 
  │ The module at module.cluster_v2 is a legacy module which contains its own
  │ local provider configurations, and so calls to it may not use the count,
  │ for_each, or depends_on arguments.
  │ 
  │ If you also control the module "../../../modules/eks-v2", consider updating
  │ this module to instead expect provider configurations to be passed by its
  │ caller.
│ Error: Module is incompatible with count, for_each, and depends_on
│ 
│   on banana.tf line 24, in module "banana":
│   24:   count  = var.banana_enabled ? 1 : 0
│ 
│ The module at module.cluster_v2 is a legacy module which contains its own
│ local provider configurations, and so calls to it may not use the count,
│ for_each, or depends_on arguments.
│ 
│ If you also control the module "../../../modules/eks-v2", consider updating
│ this module to instead expect provider configurations to be passed by its
│ caller.
│ Error: Module is incompatible with count, for_each, and depends_on
│ 
│   on apple.tf line 53, in module "apple":
│   53:   count  = length(var.apple) > 0 ? 1 : 0
│ 
│ The module at module.cluster_v2 is a legacy module which contains its own
│ local provider configurations, and so calls to it may not use the count,
│ for_each, or depends_on arguments.
│ 
│ If you also control the module "../../../modules/eks-v2", consider updating
│ this module to instead expect provider configurations to be passed by its
│ caller.

etc

@michelefa1988

michelefa1988 commented May 19, 2022

I have the same problem. The following code works well with the older Terraform version.

The error shows up in every place where I have used count, which I use to control whether a module should be deployed depending on a variable.

main.tf

module "aks_association" {
  source = "../../azurerm_subnet_nat_gateway_association"
  count  = length(regexall("userAssignedNATGateway$", var.outbound_type)) > 0 ? 1 : 0

  subnet_id      = var.default_node_pool_vnet_subnet_id
  nat_gateway_id = var.nat_gateway_id
}

azurerm_subnet_nat_gateway_association.tf

variable "subnet_id" {}
variable "nat_gateway_id" {}


resource "azurerm_subnet_nat_gateway_association" "subnet_nat_gateway_association" {
  subnet_id      = var.subnet_id
  nat_gateway_id = var.nat_gateway_id
}

@apparentlymart
Member

Hi all! Thanks for reporting this.

Terraform v1.2 included changes to the text of this error message to try to be clearer about what it describes (the equivalent message in prior versions often caused confusion), but the situations where the error appears are supposed to be exactly the same. If you see it appear on v1.2 and not on v1.1 even though the configuration is identical, then this is indeed a regression, which we will fix by restoring the previous criteria.

The situation this error is referring to is when the child module contains provider blocks, which should only appear in a root module. It sounds like for at least some of you the child module (and all of its children) does not contain any provider blocks, in which case this error is definitely incorrect. Can you each confirm that you don't have provider blocks in your child modules? If you can, I think the next step here would be to understand what exactly Terraform is incorrectly detecting as a provider configuration, so that we can reproduce it, write a test case, and fix it.
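For anyone wondering what "expect provider configurations to be passed by its caller" looks like in practice, here is a minimal sketch (the provider, module path, and variables are illustrative, not taken from this issue): the child module only declares which provider it requires, and the caller configures that provider and passes it down, which keeps the child compatible with count and for_each.

# child module, e.g. ./modules/example/versions.tf (hypothetical path)
terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

# root module: configure the provider here and hand it to the child
provider "github" {
  owner = var.github_owner # hypothetical variable
}

module "example" {
  source   = "./modules/example"
  for_each = toset(var.repository_names) # repetition is allowed in this shape

  providers = {
    github = github
  }
}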

Thanks again!

@apparentlymart apparentlymart added the waiting for reproduction (unable to reproduce issue without further information) and v1.2 (issues, primarily bugs, reported against v1.2 releases) labels May 19, 2022
@Flydiverny

We do have a provider block in our module.cluster_v2 module, which we know is deprecated, but we did not expect 1.2 to break in other random places :)

@apparentlymart
Member

Provider blocks in nested modules have never been compatible with module count and for_each, going back to the introduction of those meta-arguments many versions ago. The error message about it is what changed in v1.2.

I think the question here is: were there modules that worked with count and for_each in v1.1 (meaning: modules that don't have their own provider blocks) that no longer do in v1.2? If so, I would like to learn more about them because it seems like Terraform v1.2 is incorrectly detecting some other language construct as if it were a provider block. 🤔

With that said, if you had a module with a provider block working with count or for_each in v1.1 then I'd like to learn more about that too. I don't really know how it could work (we blocked this combination specifically because Terraform Core cannot correctly support it), but that of course doesn't mean there wasn't a bug in earlier versions that we weren't aware of until now, which we may need to "unfix" in order to stay compatible.
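To make the incompatible "legacy" shape concrete, a sketch along these lines (provider, resource, and variable names are illustrative) is what the check is meant to reject: the called module configures a provider itself, so the call to it may not use count.

# child module ./modules/legacy/main.tf (hypothetical)
provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "legacy-example-bucket"
}

# root module: rejected, because ./modules/legacy has its own provider block
module "legacy" {
  source = "./modules/legacy"
  count  = var.enabled ? 1 : 0 # hypothetical variable
}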

@Flydiverny

To clarify, we do not use for_each or count etc. on the module that has a provider defined inside.
The errors we get are on Terraform code for other modules that do not have a provider block, but do have either count or for_each.

I tried to create a small reproduction but so far no luck 😅 I'll see if I can get rid of the error by removing the module that has the provider inside or figure out how it's connected when I'm back at a computer.

@apparentlymart
Member

Thanks for clarifying, @Flydiverny!

In one of your previous comments I saw a set of errors that were talking about module "kiwi" with for_each set, module "banana" with count set, and module "apple" with count set.

Given that, I'm understanding your recent comment as suggesting that although you do have some modules that use count and for_each, and you have other modules which have nested provider blocks, those modules are separate from one another and Terraform seems to be incorrectly saying that a provider block in one module is making an entirely-unrelated module "incompatible with count, for_each, and depends_on". Does that seem right?

@apparentlymart apparentlymart removed the new (new issue not yet triaged) label May 19, 2022
@dcarbone

dcarbone commented May 19, 2022

Having just gone from 1.1.9 to 1.2.0, TF config that was working now does not. Child modules do have provider configuration in them, and this has never presented a problem anywhere until this release.

We will have to stay with 1.1.9 until we receive clarification on why this is a problem, or until it once again stops being a problem.

@michelefa1988

michelefa1988 commented May 19, 2022

@dcarbone has explained it very well. His case is exactly my case too.

To back up his claim: if you check my post above, the snippet of code in my case is extremely simple, just adding a subnet/NAT gateway association. No extra providers were used; the only provider used was the Azure one.

All this was working perfectly in 1.1.9. :)

@knbnnate

I can reproduce this as a case of the error message being incorrect. I have code that initializes and plans successfully in 1.1.9 and fails in 1.2.0 with this error message, which specifically references a child module that has no provider configuration or provider keywords in it. The child module DOES take parameter input that is derived from a data lookup that does use a provider alias.

@nwsparks

To clarify, we do not use for_each or count etc. on the module that has a provider defined inside. The errors we get are on Terraform code for other modules that do not have a provider block, but do have either count or for_each.

I tried to create a small reproduction but so far no luck 😅 I'll see if I can get rid of the error by removing the module that has the provider inside or figure out how it's connected when I'm back at a computer.

We are getting the same problem.

I should note that our use case for defining a provider in a module is that we are overriding the AWS default tags in a specific module. The easiest way to do that is to define the provider and default_tags config again rather than individually tagging 30+ resources.
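One way to keep that default-tags override without a provider block inside the child module is sketched below (the module path and tag values are hypothetical): configure an aliased aws provider with its own default_tags in the root and pass it to the module call.

provider "aws" {
  alias = "override_tags"

  default_tags {
    tags = {
      CostCenter = "platform" # hypothetical override
    }
  }
}

module "tagged_stack" {
  source = "./modules/tagged-stack" # hypothetical module with no provider blocks

  providers = {
    aws = aws.override_tags
  }
}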

@apparentlymart
Member

Hi all!

In order to move forward here we need to determine what all of you have in common that is causing this to occur.

So far I've heard that some are using count, for_each, or depends_on on a module that doesn't have a provider block inside of it, but that alone doesn't seem to be sufficient, because that simple case works in my local testing.

If someone would be willing to share more details about exactly how their modules make use of count, for_each, depends_on and nested provider blocks then I would love to try to put together a reproduction case so I can observe this behavior for myself and understand why it's happening. Another way to proceed would be for someone to run Terraform with the TF_LOG=trace environment variable set and then share the resulting trace log in a GitHub Gist, which may allow us to infer from the trace logs where the problem appeared.
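For anyone capturing that trace log, one way to write it to a file is the following (TF_LOG and TF_LOG_PATH are standard Terraform environment variables; the file name is arbitrary):

TF_LOG=trace TF_LOG_PATH=trace.log terraform validate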

We acknowledge that the problem exists, so there's no need to add comments restating what's already been said. You can add 👍 reactions to the issue if you want to express that it affects you too, but in this particular case this issue is a high priority regardless, and what we need to move forward is more information about what's happening in order to determine a root cause.

Thanks!

@ryanwholey

ryanwholey commented May 19, 2022

We have a number of workspaces that provision an AWS RDS instance with a common shared module. Within that module we provision the database then use the postgresql provider to create some basic users. The user module is a sub-module to the RDS module.

root workspace > AWS RDS module (dynamic postgresql provider created here) > user module

The user module uses a for_each and is passed the postgresql provider from the calling parent module (the RDS module)

module "users" {
  source = "./modules/iam-user"

  for_each = {
    (var.iam_database_users.read)  = [postgresql_role.read.name],
    (var.iam_database_users.write) = [postgresql_role.read_write.name]
  }

  providers = {
    postgresql = postgresql
  }
...

The error message indicates that the RDS module is the one that should not be called with for_each, when it's the users submodule within it that is being called with for_each.

The module at module.db is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.

I can send you a trace log if that would help any further, but I'm hesitant to post it here.

@Flydiverny

Flydiverny commented May 19, 2022

@apparentlymart Yes! That's what I'm seeing 😄

Every time I run terraform validate it will complain about one of the modules that has either a count or a for_each.

This is a simplified picture, but I have these files (and others), and when I run terraform validate the result is not consistent; it does one of the following:

  • validation passes
  • validation fails on a module using count (which module will vary with each run)
  • validation fails on a module using for_each
    (we don't have any depends_on in this one)

All these files are in the root module.

sns.tf

provider "aws" {
  alias = "sns"

  region = local.sns_region

  assume_role {
    session_name = "terraform-sns-${local.sns_region}"
    role_arn     = "arn:aws:iam::${var.aws_account}:role/${var.aws_role}"
  }
}

module "sns" {
  source = "../modules/sns"  # This module does not have a provider inside
  count = var.sns_enabled ? 1 : 0
  (...params..) # none of these are connected to the EKS module
  
  providers = {
    aws = aws.sns
  }
}

sqs.tf

module "sqs" {
  source = "../modules/sqs"   # This module does not have a provider inside
  count = length(var.sqs_subscribers) > 0 ? 1 : 0
  (...params..) # none of these are connected to the EKS module
}

eks.tf

module "eks" {
  source = "../modules/eks"   # This module DOES have a provider inside
  (...params...) # none of these are connected to the modules above
}

Still trying to reproduce it 😄
If I remove a lot of our code I can hit the error with just sns.tf above + eks.tf; removing the providers block passed to sns fixes it in that reduced case. But removing that providers parameter in the original code does not address the problem.

@Flydiverny

Running with trace doesn't seem to provide a lot of details 😅

2022-05-20T00:42:12.396+0200 [INFO]  Terraform version: 1.2.0
2022-05-20T00:42:12.396+0200 [DEBUG] using github.com/hashicorp/go-tfe v1.0.0
2022-05-20T00:42:12.396+0200 [DEBUG] using github.com/hashicorp/hcl/v2 v2.12.0
2022-05-20T00:42:12.396+0200 [DEBUG] using github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2
2022-05-20T00:42:12.396+0200 [DEBUG] using github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734
2022-05-20T00:42:12.396+0200 [DEBUG] using github.com/zclconf/go-cty v1.10.0
2022-05-20T00:42:12.396+0200 [INFO]  Go runtime version: go1.18.1
2022-05-20T00:42:12.396+0200 [INFO]  CLI args: []string{"/home/linuxbrew/.linuxbrew/Cellar/tfenv/2.2.2/versions/1.2.0/terraform", "validate"}
2022-05-20T00:42:12.396+0200 [TRACE] Stdout is a terminal of width 207
2022-05-20T00:42:12.396+0200 [TRACE] Stderr is a terminal of width 207
2022-05-20T00:42:12.396+0200 [TRACE] Stdin is a terminal
2022-05-20T00:42:12.396+0200 [DEBUG] Attempting to open CLI config file: /home/flydiverny/.terraformrc
2022-05-20T00:42:12.396+0200 [INFO]  Loading CLI configuration from /home/flydiverny/.terraformrc
2022-05-20T00:42:12.396+0200 [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2022-05-20T00:42:12.396+0200 [DEBUG] ignoring non-existing provider search directory /home/flydiverny/.terraform.d/plugins
2022-05-20T00:42:12.396+0200 [DEBUG] ignoring non-existing provider search directory /home/flydiverny/.local/share/terraform/plugins
2022-05-20T00:42:12.396+0200 [DEBUG] ignoring non-existing provider search directory /usr/local/share/terraform/plugins
2022-05-20T00:42:12.396+0200 [DEBUG] ignoring non-existing provider search directory /usr/share/terraform/plugins
2022-05-20T00:42:12.396+0200 [INFO]  CLI command args: []string{"validate"}
╷
│ Error: Module is incompatible with count, for_each, and depends_on
│
│   on mobile_push.tf line 24, in module "mobile_push":
│   24:   count  = var.mobile_push_enabled ? 1 : 0
│
│ The module at module.cluster_v2 is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.
│
│ If you also control the module "../../../modules/eks-v2", consider updating this module to instead expect provider configurations to be passed by its caller.
╵

Removing the module that has the nested provider makes validation pass consistently, so it is at least something in combination with that 😄

@ryanwholey

Here's a small repo using the kubernetes provider and minikube to reproduce the issue
https://github.com/ryanwholey/terraform-invalid-submodule-repro

@apparentlymart
Member

apparentlymart commented May 19, 2022

Thanks for that extra info, everyone! I've run out of working hours for the day today, but I'll dig into this more tomorrow morning if one of my colleagues in an earlier timezone doesn't get there first.

In the meantime, if you are seeing this error where you didn't before then I suggest staying on v1.1.x releases until we release a v1.2.1 with a fix. Thanks again!

@knbnnate

contrived.zip

1.2.0 init

nate@DESKTOP:/tmp/contrived$ rm -rf .terraform ; /tmp/terraform-1.2.0 init
Initializing modules...
- child in barren_child
- child.grandchild in grandchild
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Module is incompatible with count, for_each, and depends_on
│
│   on barren_child/main.tf line 9, in module "grandchild":
│    9:   for_each = toset(["bar","baz"])
│
│ The module at module.child is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.
│
│ If you also control the module "./barren_child", consider updating this module to instead expect provider configurations to be passed by its caller.
╵

1.1.9 init

nate@DESKTOP:/tmp/contrived$ rm -rf .terraform ; /tmp/terraform-1.1.9 init
Initializing modules...
- child in barren_child
- child.grandchild in grandchild

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing hashicorp/azurerm v2.99.0...
- Installed hashicorp/azurerm v2.99.0 (signed by HashiCorp)
- Installing hashicorp/local v2.2.3...
- Installed hashicorp/local v2.2.3 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

1.1.9 plan

nate@DESKTOP-4380GPE:/tmp/contrived$ /tmp/terraform-1.1.9 plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.child.module.grandchild["bar"].local_file.foo will be created
  + resource "local_file" "foo" {
      + content              = "\"foo\""
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "/dev/null"
      + id                   = (known after apply)
    }

  # module.child.module.grandchild["baz"].local_file.foo will be created
  + resource "local_file" "foo" {
      + content              = "\"foo\""
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "/dev/null"
      + id                   = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
nate@DESKTOP:/tmp/contrived$

Output from an attached contrived example.

The scenario: you're given a root module, its child, and its grandchild. The child has provider configuration in it and calls the grandchild with a for_each statement.

In 1.1.9 that works.

In 1.2.0 that does not work, and the error message line number refers to the for_each statement calling the grandchild module, while the error message text implies that the root module is using a for_each statement to call the child.
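For readers without the attachment, the shape described above looks roughly like the following sketch (the real contrived.zip is not reproduced here, so the file contents are illustrative): the provider block lives in barren_child next to a module call that uses for_each, and the grandchild has no provider blocks of its own.

# root main.tf
module "child" {
  source = "./barren_child"
}

# barren_child/main.tf -- contains its own provider configuration
provider "azurerm" {
  features {}
}

module "grandchild" {
  source   = "../grandchild"
  for_each = toset(["bar", "baz"])
}

# grandchild/main.tf -- no provider blocks here
resource "local_file" "foo" {
  content  = "\"foo\""
  filename = "/dev/null"
}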

@kmoe
Member

kmoe commented May 20, 2022

Good morning. I'm @apparentlymart's colleague in an earlier timezone. I believe I've found the root cause of this bug, which was accidentally introduced in v1.2.0 without intending to change any behaviour. PR to fix: #31091.

Thanks to all who provided reproduction cases.

@kmoe kmoe added the explained (a Terraform Core team member has described the root cause of this issue in code) and confirmed (a Terraform Core team member has reproduced this issue) labels and removed the waiting for reproduction (unable to reproduce issue without further information) label May 20, 2022
@jbvsmo

jbvsmo commented May 20, 2022

I'm interested in this bugfix. Do we have a timeline for 1.2.1?

apparentlymart added a commit that referenced this issue May 20, 2022
5417975 addressed a regression in the
logic for catching when the newer module meta-arguments are used in
conjunction with the legacy practice of including provider configurations
inside modules, where the check started incorrectly catching situations
where the legacy nested provider configuration was in the same module
as the child call using one of those meta-arguments.

This is a regression test to catch if a similar bug arises in the future.

Since it's testing validation rules that apply to an entire configuration
tree, it ended up in a rather idiosyncratic location under the "configload"
package, rather than directly in "configs". The "configs" package only
knows how to load one module at a time, so it's harder to write a test
like this in that context. Due to it being further removed from the code
it is testing, I included a test for the correct error too in order to
increase the chance that we'll learn if future changes in the "configs"
package invalidate this regression test.

I've verified that this new test fails without the change made in the
earlier commit.
@apparentlymart
Member

Thanks for reporting this and sharing all of the details to help solve it, all!

This is now fixed and the fix is backported into the v1.2 branch. For those who are affected by this problem, the best path is to remain on v1.1.9 for now and look out for the forthcoming v1.2.1 release, at which point you should be able to upgrade and see this work as expected.

@Flydiverny

Can confirm it validates and plans as expected with the artifacts from latest commit on the v1.2 branch (build)!

@jbvsmo

jbvsmo commented May 21, 2022

@apparentlymart I rely on a system (which I can change for the time being) that auto-updates to the latest minor version of Terraform, and it is blocking one of the projects from being auto-deployed. I don't know much about the Terraform release schedule, but can't this bug fix be released more quickly?

@apparentlymart
Member

Because Terraform is software you download and run on your own system, rather than hosted software which gets upgraded outside of your control, we typically expect users to remain on older versions until the next patch release if they find problems with a new release. Although of course we aim for there to be no problems when upgrading to a new release, there are sometimes less common situations like this one which affect only a particular combination of features that we learn in retrospect was not covered by our automated tests.

For that reason, I would not suggest using automation which automatically uses the very latest release of Terraform, or, if you do, I suggest designing it so that you can temporarily pin back to an earlier version in situations like this one.

With all of that said, we do intend to make a patch release earlier than our usual two week cadence, once some other pending fixes are ready to release too.

@lucjross

I'm temporarily using this in my configs for automation, while retaining version tolerance (will definitely remove it when a patch comes out):

terraform {
  // ...
  required_version = "<= 1.1.9, ~> 1.1.0"
}

@lizthegrey

Can confirm the fix works in Terraform Cloud latest version.

@sandangel

sandangel commented Jun 2, 2022

I still have this issue with 1.2.1

│ Error: Module is incompatible with count, for_each, and depends_on
│
│   on main.tf line 22, in module "aurora":
│   22:   count                 = local.stage == "dev" ? 1 : 0
│
│ The module at module.aurora is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.
│
│ If you also control the module "../../../../modules/aws/aurora/v2", consider updating this module to instead expect provider configurations to be passed by its caller.

In module aurora

terraform {
  experiments      = [module_variable_optional_attrs]
  required_version = "~> 1.2.1"
  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "~> 1.15"
    }
  }
}

provider "postgresql" {
  scheme    = "awspostgres"
  host      = aws_rds_cluster.this.endpoint
  port      = 5432
  username  = var.master_username
  password  = random_password.master.result
  sslmode   = "require"
  superuser = false
}

UPDATED:

module "aurora" {
  count                 = local.stage == "dev" ? 1 : 0
  source                = "../../../../modules/aws/aurora/v2"
}

@Flydiverny

Flydiverny commented Jun 2, 2022

I still have this issue with 1.2.1

│ Error: Module is incompatible with count, for_each, and depends_on
│
│   on main.tf line 22, in module "aurora":
│   22:   count                 = local.stage == "dev" ? 1 : 0
│
│ The module at module.aurora is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.
│
│ If you also control the module "../../../../modules/aws/aurora/v2", consider updating this module to instead expect provider configurations to be passed by its caller.

In module aurora

terraform {
  experiments      = [module_variable_optional_attrs]
  required_version = "~> 1.2.1"
  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "~> 1.15"
    }
  }
}

provider "postgresql" {
  scheme    = "awspostgres"
  host      = aws_rds_cluster.this.endpoint
  port      = 5432
  username  = var.master_username
  password  = random_password.master.result
  sslmode   = "require"
  superuser = false
}

From your error message here it looks like it's working as intended? It's complaining about module.aurora; the error is on module "aurora", whose call uses count, which is not allowed, and your shared code snippet for that module contains a provider block.

@sandangel

sandangel commented Jun 2, 2022

From your error message here it looks like it's working as intended? It's complaining about module.aurora; the error is on module "aurora", whose call uses count, which is not allowed, and your shared code snippet for that module contains a provider block.

It was working before, though. Why is this not allowed in 1.2? The postgresql provider needs information from resources declared in the module itself. I should not output the master random password from the aurora module, right?

@sandangel

I tried with 1.2.2 and it's still giving errors

@apparentlymart
Member

Modules with local provider configurations have been incompatible with count and for_each on modules ever since the introduction of those module repetition features, so the configuration shown most recently above is invalid and thus this error message seems correct.

This issue was about that message showing up in a different, incorrect situation where the provider configuration was a sibling of the module with repetition, rather than nested inside it. That has now been fixed, but that fix does not affect your invalid configuration.

If you are noticing a change in behavior for this configuration compared to Terraform v1.1 or Terraform v1.0 then please open a new issue about it, and we can investigate to understand what was working before that isn't working now. Thanks!

@github-actions

github-actions bot commented Jul 3, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 3, 2022