1.2.0 errors out with "Module is incompatible with count, for_each, and depends_on" #31081
Comments
We are also seeing this error in a lot of places. For us it seems a bit random: we have multiple deployment targets that re-use the same Terraform, but we get variations of this error depending on the target 😅 None of the errors are on Terraform related to the actual module that has the provider inside 😕
I have the same problem. The following code works well with the older Terraform version. The error seems to show up in all places where I have used count, which I use to control whether a module should be deployed depending on a variable. (Attached configuration: main.tf and azurerm_subnet_nat_gateway_association.tf.)
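The pattern being described here — gating a module with count driven by a variable — looks roughly like this (a minimal sketch; the module path and variable name are illustrative and not taken from the attached files):

variable "deploy_nat_gateway" {
  type    = bool
  default = false
}

# Instantiate the module only when the variable is true.
module "nat_gateway" {
  source = "./modules/nat-gateway"
  count  = var.deploy_nat_gateway ? 1 : 0
}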
Hi all! Thanks for reporting this. Terraform v1.2 included changes to the text of this error message to try to be clearer about what it was describing (since the equivalent message in prior versions often caused confusion), but the situations where the error appears are supposed to be exactly the same. So if you see this appear on v1.2 and not on v1.1 even though the configuration is identical, then this is indeed a regression which we will fix by restoring the previous criteria. The situation this error is referring to is when the child module contains its own provider blocks. Thanks again!
We do have a […]
Provider blocks in nested modules have never been compatible with module count and for_each, going back to the introduction of module count and for_each many versions ago. The error message about it is what changed in v1.2. I think the question here is: were there modules that worked with count and for_each in v1.1 (meaning: modules that don't have their own provider blocks) that no longer do in v1.2? If so, I would like to learn more about them because it seems like Terraform v1.2 is incorrectly detecting some other language construct as if it were a provider block. 🤔 With that said, if you had a module with a provider block working with count or for_each in v1.1 then I'd like to learn more about that too. I don't really know how it could work (we blocked this combination specifically because Terraform Core cannot correctly support it), but that of course doesn't mean there wasn't a bug in earlier versions that we weren't aware of until now, which we may need to "unfix" in order to stay compatible.
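For reference, the combination being described — a child module that declares its own provider block while the caller uses count or for_each — looks like this (a generic sketch, not any particular reporter's configuration):

# Root module: calling a child with count.
module "example" {
  source = "./modules/legacy"
  count  = 2
}

# modules/legacy/main.tf: the child configures a provider locally,
# which is what makes the count above illegal.
provider "aws" {
  region = "us-east-1"
}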
To clarify, we do not use for_each or count etc. in relation to the module that has a provider defined inside. I tried to create a small reproduction but so far no luck 😅 I'll see if I can get rid of the error by removing the module that has the provider inside, or figure out how it's connected, when I'm back at a computer.
Thanks for clarifying, @Flydiverny! In one of your previous comments I saw a set of errors that were talking about […]. Given that, I'm understanding your recent comment as suggesting that although you do have some modules that use […]
Having just gone from 1.1.9 to 1.2.0, Terraform config that was working now does not. Our child modules do have provider configuration in them, and this has never presented a problem anywhere until this release. We will have to stay with 1.1.9 until we receive clarification on why this is now a problem, or until it once again stops being one.
@dcarbone has explained it very well. His case is exactly my case too. To back up his claim: if you check my post above, the snippet of code in my case is extremely simple - just adding a subnet/NAT gateway association. No extra providers were used; the only provider used was the Azure one. All of this was working perfectly in 1.1.9. :)
I can reproduce this as a case of the error message being incorrect. I have code that initializes and plans successfully in 1.1.9, and fails with this error message in 1.2.0, and the message specifically references a child module that has no provider configuration or provider keywords in it. The child module DOES take parameter input that is derived from a data lookup that uses a provider alias.
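As a rough illustration of that shape (names, resources, and the module variable are assumed, not the reporter's actual code): a child module with no provider blocks of its own, presumably called with count or for_each, receiving a value derived from a data source that uses a provider alias.

provider "aws" {
  alias  = "networking"
  region = "us-east-1"
}

# Data lookup that uses the provider alias.
data "aws_vpc" "shared" {
  provider = aws.networking
  default  = true
}

# Child module with no provider configuration inside it; it only receives
# a value derived from the aliased data source above.
module "app" {
  source   = "./modules/app"
  for_each = toset(["blue", "green"])
  vpc_id   = data.aws_vpc.shared.id
}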
We are getting the same problem. I should note that our use case for defining a provider in a module is that we are overriding the AWS default tags in a specific module. The easiest way to do that is to define the provider and default tags config again, rather than individually tagging 30+ resources.
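The pattern being described — redefining the AWS provider inside a module solely to override default_tags for that module's resources — would look something like the following (a sketch of the general idea; var.region and the tag names and values are made up):

# Inside the module: a local provider configuration whose only purpose is
# to apply a different set of default tags to everything this module creates.
provider "aws" {
  region = var.region

  default_tags {
    tags = {
      Team    = "data-platform"
      Service = "ingest"
    }
  }
}

This is exactly the legacy nested-provider pattern that rules out count and for_each on the calling side.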
Hi all! In order to move forward here we need to determine what all of you have in common that is causing this to occur. So far I've heard that some are using […]. If someone would be willing to share more details about exactly how their modules make use of […], that would be helpful. We acknowledge that the problem exists, so there's no need to add comments restating what's already been said. You can add 👍 reactions to the issue if you want to express that it affects you too, but in this particular case this issue is a high priority regardless, and what we need to move forward is more information about what's happening in order to determine a root cause. Thanks!
We have a number of workspaces that provision an AWS RDS instance with a common shared module. Within that module we provision the database and then use the postgresql provider to create database roles and users.
The user module uses a for_each and is passed the postgresql provider from the calling parent module (the RDS module):

module "users" {
  source   = "./modules/iam-user"
  for_each = {
    (var.iam_database_users.read)  = [postgresql_role.read.name],
    (var.iam_database_users.write) = [postgresql_role.read_write.name]
  }
  providers = {
    postgresql = postgresql
  }
  ...
}

The error message indicates that the RDS module is the one that should not be called with count or for_each.
I can send you a trace log if that would help any further, but I'm hesitant to post it here.
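For context, the arrangement that triggers the (incorrect) error here is a local provider configuration sitting next to that for_each module call, both inside the RDS module. A sketch of what that nested provider block presumably looks like (the arguments are assumptions, not the reporter's actual code):

# Inside the shared RDS module, alongside the module "users" call above:
provider "postgresql" {
  host     = aws_rds_cluster.this.endpoint   # connect to the database created by this module
  username = var.master_username
  password = random_password.master.result
}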
@apparentlymart Yes! That's what I'm seeing 😄 Every time I run terraform validate I get the error. This is a simplified reality, but I have these files (and others). All these files are in the root module:
provider "aws" {
alias = "sns"
region = local.sns_region
assume_role {
session_name = "terraform-sns-${local.sns_region}"
role_arn = "arn:aws:iam::${var.aws_account}:role/${var.aws_role}"
}
}
module "sns" {
source = "../modules/sns" # This module does not have a provider inside
count = var.sns_enabled ? 1 : 0
(...params..) # none of these are connected to the EKS module
providers = {
aws = aws.sns
}
}
module "sqs" {
source = "../modules/sqs" # This module does not have a provider inside
count = length(var.sqs_subscribers) > 0 ? 1 : 0
(...params..) # none of these are connected to the EKS module
}
module "eks" {
source = "../modules/eks" # This module DOES have a provider inside
(...params...) # none of these are connected to the modules above
} Still trying to reproduce it 😄 |
Running with trace doesn't seem to provide a lot of details 😅
Removing the module that has the nested provider makes validation pass consistently, so it's at least something in combination with that 😄
Here's a small repo using the kubernetes provider and minikube to reproduce the issue.
Thanks for that extra info, everyone! I've run out of working hours for the day today, but I'll dig into this more tomorrow morning if one of my colleagues in an earlier timezone doesn't get there first. In the meantime, if you are seeing this error where you didn't before then I suggest staying on v1.1.x releases until we release a v1.2.1 with a fix. Thanks again!
Output from an attached contrived example (attachments: 1.2.0 init, 1.1.9 init, 1.1.9 plan). The scenario is that you're given a root module, its child, and its grandchild. The child has provider configuration in it and calls the grandchild with a for_each statement. In 1.1.9 that works. In 1.2.0 it does not, and the error message line number refers to the for_each statement calling the grandchild module, while the error message text implies that the root module is using a for_each statement to call the child.
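The structure described there is roughly the following (a sketch with made-up paths; only the shape matters):

# root/main.tf — the root calls the child without count or for_each.
module "child" {
  source = "./child"
}

# child/main.tf — the child has its own provider configuration...
provider "aws" {
  region = "us-east-1"
}

# ...and calls the grandchild with for_each.
module "grandchild" {
  source   = "./grandchild"
  for_each = toset(["a", "b"])
}

In v1.2.0 the error incorrectly attaches to the for_each on the grandchild call, while its text talks about the call to the child.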
Good morning. I'm @apparentlymart's colleague in an earlier timezone. I believe I've found the root cause of this bug, which was accidentally introduced in v1.2.0 without intending to change any behaviour. PR to fix: #31091. Thanks to all who provided reproduction cases.
I'm interested in this bugfix. Do we have a timeline for 1.2.1?
5417975 addressed a regression in the logic for catching when the newer module meta-arguments are used in conjunction with the legacy practice of including provider configurations inside modules, where the check started incorrectly catching situations where the legacy nested provider configuration was in the same module as the child call using one of those meta-arguments. This is a regression test to catch if a similar bug arises in the future.

Since it's testing validation rules that apply to an entire configuration tree, it ended up in a rather idiosyncratic location under the "configload" package, rather than directly in "configs". The "configs" package only knows how to load one module at a time, so it's harder to write a test like this in that context. Due to it being further removed from the code it is testing, I included a test for the correct error too in order to increase the chance that we'll learn if future changes in the "configs" package invalidate this regression test.

I've verified that this new test fails without the change made in the earlier commit.
Thanks for reporting this and sharing all of the details to help solve it, all! This is now fixed and the fix is backported into the v1.2 branch. For those who are affected by this problem, the best path would be to stay on v1.1.9 for now and look out for the forthcoming v1.2.1 release, at which point you should be able to upgrade and see this work as expected.
Can confirm it validates and plans as expected with the artifacts from the latest commit on the v1.2 branch (build)!
@apparentlymart I rely on a system (which I can change for the time being) that auto-updates to the latest minor version of Terraform, and it is blocking one of our projects from being auto-deployed. I don't know much about the Terraform release schedule, but can't this bug fix be released more quickly?
Because Terraform is software you download and run on your own system, rather than hosted software which gets upgraded outside of your control, we typically expect users to remain on older versions until the next patch release if they find problems with a new release. Although of course we aim for there to be no problems when upgrading to a new release, there are sometimes less common situations like this one which affect only a particular combination of features that, we learn in retrospect, was not covered by our automated tests. For that reason, I would not suggest using automation which automatically uses the very latest release of Terraform, or at least, if you do, design it so that you can override back to an earlier version temporarily in situations like this one. With all of that said, we do intend to make a patch release earlier than our usual two-week cadence, once some other pending fixes are ready to release too.
I'm temporarily using this in my configs for automation, while retaining version tolerance (will definitely remove it when a patch comes out).
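The snippet itself isn't visible above; a version constraint along these lines would achieve what's described — staying current while skipping the broken release — though this is an assumption rather than the commenter's exact workaround:

terraform {
  # Accept any 1.1+ release except 1.2.0, which has the regression.
  required_version = ">= 1.1.0, != 1.2.0"
}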
Can confirm the fix works in the latest Terraform Cloud version.
I still have this issue with 1.2.1:

│ Error: Module is incompatible with count, for_each, and depends_on
│
│   on main.tf line 22, in module "aurora":
│   22:   count = local.stage == "dev" ? 1 : 0
│
│ The module at module.aurora is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.
│
│ If you also control the module "../../../../modules/aws/aurora/v2", consider updating this module to instead expect provider configurations to be passed by its caller.

In module aurora:

terraform {
  experiments      = [module_variable_optional_attrs]
  required_version = "~> 1.2.1"

  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "~> 1.15"
    }
  }
}

provider "postgresql" {
  scheme    = "awspostgres"
  host      = aws_rds_cluster.this.endpoint
  port      = 5432
  username  = var.master_username
  password  = random_password.master.result
  sslmode   = "require"
  superuser = false
}

UPDATED:

module "aurora" {
  count  = local.stage == "dev" ? 1 : 0
  source = "../../../../modules/aws/aurora/v2"
}
From your error message here it looks like it's working as intended? It's complaining about the provider "postgresql" configuration defined inside the aurora module itself.
It was working before, though. Why is this not allowed in 1.2? The postgresql provider needs information from resources declared in the module itself. I should not output the master random password from the aurora module, right?
I tried with 1.2.2 and it's still giving errors.
Modules with local provider configurations have been incompatible with count, for_each, and depends_on since those meta-arguments were introduced. This issue was about that message showing up in a different, incorrect situation where the provider configuration was a sibling of the module with repetition, rather than nested inside it. That has now been fixed, but that fix does not affect your invalid configuration. If you are noticing a change in behavior for this configuration compared to Terraform v1.1 or Terraform v1.0 then please open a new issue about it, and we can investigate to understand what was working before that isn't working now. Thanks!
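For reference, the restructuring the error message suggests — having the child module expect a provider configuration from its caller instead of defining one locally — looks roughly like this (a generic sketch, not a drop-in rewrite of the aurora module above, which is harder because its provider depends on resources created inside the module):

# Child module (e.g. modules/aws/aurora/v2): declare the provider requirement,
# but do not configure the provider here.
terraform {
  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "~> 1.15"
    }
  }
}

# Caller: configure the provider at the root and pass it in, which makes
# count on the module call legal again.
provider "postgresql" {
  # connection settings supplied outside the module
}

module "aurora" {
  count  = local.stage == "dev" ? 1 : 0
  source = "../../../../modules/aws/aurora/v2"
  providers = {
    postgresql = postgresql
  }
}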
Terraform Version

Terraform Configuration Files

Debug Output

Expected Behavior

Actual Behavior

An error is shown. The changelog however mentions nothing about changes in regards to using count on a module?

Steps to Reproduce

terraform init
terraform plan

Additional Context

References