.version field causes Error: Provider produced inconsistent final plan #476

Comments
So, I have destroyed all the resources and restarted the whole apply "sequence" from the beginning with debug logging enabled. Here is the DEBUG/TRACE log of the crash:

In addition, here is another part of the DEBUG log, about the Namespace creation, which produces some warnings:
Here is the DEBUG/TRACE log of the next apply; as you can see, this second run completes without the error.
This may be related to #541. We've also seen the "Provider produced inconsistent plan" error on the first Terraform apply.
I am confirming the bug exists, but it has nothing to do with Cert-Manager specifically:

Error: Provider produced inconsistent final plan

When expanding the plan for
module.metrics_server.helm_release.metrics_server[0] to include new values
learned so far during apply, provider "registry.terraform.io/hashicorp/helm"
produced an invalid new value for .version: was known, but now unknown.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Releasing state lock. This may take a few moments...

Our helm provider is set up with variables straight from the aws_eks_cluster resource:

provider "helm" {
  kubernetes {
    host                   = module.eks.endpoint
    token                  = module.eks.token
    cluster_ca_certificate = base64decode(module.eks.ca_certificate)
  }
}

And the helm release is not that complicated:

module "metrics_server" {
  source     = "../../modules/helm-metrics-server"
  depends_on = [module.eks, module.dedicated_nodes]
  on         = var.helm_releases_enabled
  endpoint   = module.eks.endpoint # explicit dependency for destroy
}

In the module:

resource "helm_release" "metrics_server" {
  count      = var.on == true ? 1 : 0
  name       = "metrics-server"
  namespace  = "kube-system"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "metrics-server"
  atomic     = true

  set {
    name  = "apiService.create"
    value = "true"
  }
}

We use helm provider version 2.0.2.
Just got this error with a very simple config:
Error:
Versions:
It has worked from the second run without any error, so I'm not able to reproduce it consistently.
I can confirm that when you run it a second time, it does run without issues.
I hit this problem too; when I run it a second time it works. Some cached info, I guess, is confusing the provider.
Getting this issue when using https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/helm/aws-load-balancer-controller
I too got this issue on installing Grafana.
I've been getting this with many different helm charts: aws node termination handler, aws efs csi driver, drone-server, etc.
I get errors like the ones above; the fix is specifying a type on each parameter, as in the sketch below.
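A minimal sketch of that workaround, reusing the metrics-server release from the earlier comment; the key point is the explicit type = "string" on each set block (the parameter shown is only an example):

resource "helm_release" "metrics_server" {
  name       = "metrics-server"
  namespace  = "kube-system"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "metrics-server"

  set {
    name  = "apiService.create"
    # Declaring the type explicitly keeps the provider from re-interpreting
    # the value between plan and apply.
    type  = "string"
    value = "true"
  }
}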
I've added type = "string" as suggested above.
Here is another one. Apparently the provider cannot reconcile Helm's semver spec.
Thanks, @arrrght. After adding the explicit type, I can confirm this works with recent Helm provider versions. With it added, the plan now looks like this:
By adding # module.aks_cluster.module.agic_internal[0].helm_release.ingress_azure_internal will be updated in-place
~ resource "helm_release" "ingress_azure_internal" {
id = "ingress-azure-internal"
name = "ingress-azure-internal"
# (26 unchanged attributes hidden)
- set {
- name = "appgw.subnetID" -> null
- type = "string" -> null
- value = "omitted!!" -> null
}
+ set {
+ name = "appgw.subnetID"
+ type = "string"
+ value = (known after apply)
}
- set {
- name = "appgw.subscriptionId" -> null
- type = "string" -> null
- value = "omitted" -> null
}
+ set {
+ name = "appgw.subscriptionId"
+ type = "string"
+ value = (known after apply)
}
# (8 unchanged blocks hidden)
  }
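For context on why these values show (known after apply): in this kind of setup the set values typically come from resource attributes that are not known until apply. A sketch of what such a block might look like (the azurerm resource reference is hypothetical):

set {
  name  = "appgw.subnetID"
  type  = "string"
  # Hypothetical reference: a subnet ID that is only known once the subnet
  # has been created, which is why the plan shows (known after apply).
  value = azurerm_subnet.agic.id
}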
Still seeing this issue on 2.4.1. The probability of running into the issue seems to scale with the number of set values that are unknown at plan time.
@johannespetereit the type = "string" bug fix works for me, but it makes validation difficult, as "false" boolean values will be treated as true values during chart validation.
This bug also appears to produce plans which use an incorrect value for the chart version; the debug output shows both the old and the new value. I am not using a version constraint in my configuration.

Edit with more info: If I clear Helm's cache, then "plan" shows the correct version. Unfortunately, "apply" then fails with a similar error. I noticed that the downloaded cached index.yaml file's entry for that Helm chart has a strange value in one of its fields.
Any update on this issue? We are consistently getting these "inconsistent final plan" errors on many of our chart deploys, making it impossible to reliably release changes.

Update: Explicitly setting the chart version seems to work around it for us.
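The update above is truncated; if it refers to pinning the chart version explicitly, a minimal sketch would look like the following (the version number is purely illustrative):

resource "helm_release" "metrics_server" {
  name       = "metrics-server"
  namespace  = "kube-system"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "metrics-server"

  # Pinning an exact chart version keeps .version known at plan time, so the
  # provider does not have to resolve it (and possibly change it) during apply.
  version = "6.2.4" # illustrative value, not from the original comment
}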
I agree; I'm getting impatient waiting for a fix.
Any updates on this issue? I am still facing it with helm provider version v2.6.0.
This issue is really stopping us from using this provider, as many Helm charts use randomized values to force rollouts.
Still a problem on 2.7.0 👀
I could be wrong, but I think this (mostly?) appears when enabling the manifest experiment, which is really a must-have to know for certain what you are deploying.
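For reference, the manifest experiment mentioned here is turned on in the provider block roughly like this (a sketch; the kubeconfig path is illustrative):

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # illustrative cluster access
  }

  # Renders the full chart manifest into the Terraform diff so the plan
  # shows exactly what will be deployed.
  experiments {
    manifest = true
  }
}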
Still a bug:

│ When expanding the plan for helm_release.this to include new
│ values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/helm" produced an invalid new
│ value for .version: was cty.StringVal("0.0.9-SNAPSHOT"), but now
│ cty.StringVal("0.0.10-SNAPSHOT").
│
│ This is a bug in the provider, which should be reported in the
│ provider's own issue tracker.
This was happening to me as well, but it was related to the type attribute and not the value. By explicitly setting the type, as shown below, it was possible to eliminate the error:

set {
  name  = "someParameter" # the original name and value were omitted from this comment
  type  = "string"
  value = "someValue"
}
We see this error in the datadog chart.
@nitrocode could you provide us with the tf config that reproduces this issue? I've been attempting to reproduce, but I get a successful apply with no panic. This is what I'm using, for context:

terraform {
required_providers {
helm = {
source = "helm"
version = "2.9.0"
}
}
}
provider "helm" {
debug = true
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "datadog" {
name = "datadog"
repository = "https://helm.datadoghq.com"
chart = "datadog"
version = "v3.25.0"
}
We saw it with the manifest experiment enabled. After disabling the manifest experiment, it worked as expected.
Thanks for replying @nitrocode! It seems like this doesn't relate to the original issue involving the .version field.
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Community Note
Terraform Version and Provider Version
Provider Version
Affected Resource(s)
Terraform Configuration Files
Debug Output
I do not have Debug Output currently, because this bug only happens the first time through the following sequence.

If I start another terraform apply, the helm_release of Cert-Manager goes well, normally; the error happens every time, but only on the first apply of the whole "sequence". I'll try to obtain the Debug Output.
Panic Output
Expected Behavior
No error on terraform apply for the whole "sequence".
Actual Behavior
The Terraform Helm Provider crashes the first time through the "sequence", but not if I do a terraform apply a second time.
Steps to Reproduce

Run terraform apply for the whole sequence; the error appears only the first time. If I want to reproduce this bug, I have to do a terraform destroy of the whole K8S Cluster, including the Prometheus-Operator and External-DNS tools, and make another terraform apply.

Important Factoids
It seems to happen only with the Cert-Manager tool, with every Helm release since 0.9.0 that I have tested.
I currently use the Terraform Helm Provider with Helm v3, but it also happened with the previous Terraform Helm Provider version that supported Helm v2.
References