# Spec of PVC no longer found with Terraform 1.8.0 #2468
Hi @markusheiden, I am not able to reproduce this issue.

Versions:

```console
$ kubectl version
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2

$ terraform version
Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/kubernetes v2.29.0
```

PVC manifest:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
    volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
  creationTimestamp: "2024-04-17T07:54:14Z"
  finalizers:
    - kubernetes.io/pvc-protection
  name: this
  namespace: this
  resourceVersion: "221402"
  uid: 2e68ecfa-1004-47f7-98c0-8eb41e92726f
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: this
  volumeMode: Filesystem
status:
  phase: Pending
```

PVC state:

```console
$ kubectl -n this describe pvc this
Name:          this
Namespace:     this
StorageClass:  this
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
               volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  ExternalProvisioning  9s (x3 over 23s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'pd.csi.storage.gke.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
```

Terraform code:

```hcl
data "kubernetes_persistent_volume_claim" "this" {
  metadata {
    name      = "this"
    namespace = "this"
  }
}

locals {
  volume_name = data.kubernetes_persistent_volume_claim.this.spec[0].volume_name
}

output "this" {
  value = local.volume_name
}
```

Output:

```console
$ terraform apply -auto-approve
data.kubernetes_persistent_volume_claim.this: Reading...
data.kubernetes_persistent_volume_claim.this: Read complete after 1s [id=this/this]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

this = ""
```

Here is the entire spec output:

```hcl
locals {
  volume_name = data.kubernetes_persistent_volume_claim.this.spec
}
```

```console
$ terraform apply -auto-approve
data.kubernetes_persistent_volume_claim.this: Reading...
data.kubernetes_persistent_volume_claim.this: Read complete after 0s [id=this/this]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

this = tolist([
  {
    "access_modes" = toset([
      "ReadWriteOnce",
    ])
    "resources" = tolist([
      {
        "limits" = tomap({})
        "requests" = tomap({
          "storage" = "5Gi"
        })
      },
    ])
    "selector" = tolist([])
    "storage_class_name" = "this"
    "volume_mode" = "Filesystem"
    "volume_name" = ""
  },
])
```
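A side note on the empty `volume_name` above: this test PVC is still `Pending`, so `spec.volumeName` is not yet set on the server, and an empty string is expected until the claim is `Bound`. A sketch (assuming Terraform >= 1.2, which introduced custom conditions) that fails fast instead of silently propagating an empty name:

```hcl
data "kubernetes_persistent_volume_claim" "this" {
  metadata {
    name      = "this"
    namespace = "this"
  }

  lifecycle {
    # Fail the plan/apply with a clear message if the claim is not bound yet,
    # instead of letting an empty volume_name flow into downstream resources.
    postcondition {
      condition     = self.spec[0].volume_name != ""
      error_message = "PVC this/this is not bound yet (spec.volume_name is empty)."
    }
  }
}
```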
---

First, thanks for caring! Maybe something is special about our PVC; I will try to find that out. Meanwhile, here is our exact PVC (I replaced some names/IDs with `...`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
    volume.kubernetes.io/selected-node: gke-...
    volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
  creationTimestamp: "2023-01-26T14:31:09Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/name: graphite
    app.kubernetes.io/part-of: graphite
  name: graphite
  namespace: graphite
  resourceVersion: "..."
  uid: ...
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: standard-regional-rwo
  volumeMode: Filesystem
  volumeName: pvc-...
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 4Gi
  phase: Bound
```

BTW, my machine is an M2 MacBook. Might that be a possible reason?
---

I logged the data, and it looks like the PVC is not looked up at all, or the lookup is delayed.

1.8.0:

1.7.5:
---

I have tried Kubernetes 1.27 and was still able to get everything; I ran it on an M1 laptop. Could you please execute Terraform with a debug log and share one of the outputs from there?

```console
$ TF_LOG_PROVIDER=debug terraform apply
```

I am interested in the output between these entries:

2024-04-17T13:43:31.934+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 13:43:31 [INFO] Reading persistent volume claim this
2024-04-17T13:43:31.957+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 13:43:31 [INFO] Received persistent volume claim: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"this", GenerateName:"", Namespace:"this", SelfLink:"", UID:"07fea842-b923-408a-a2f1-eed7c9129bbc", ResourceVersion:"480", Generation:0, CreationTimestamp:time.Date(2024, time.April, 17, 13, 38, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"rancher.io/local-path", "volume.kubernetes.io/selected-node":"kube27-control-plane", "volume.kubernetes.io/storage-provisioner":"rancher.io/local-path"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Apply", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 6, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00b40), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 6, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00b70), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 10, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00ba0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 10, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00bd0), Subresource:"status"}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeName:"pvc-07fea842-b923-408a-a2f1-eed7c9129bbc", StorageClassName:(*string)(0x140010b26a0), VolumeMode:(*v1.PersistentVolumeMode)(0x140010b26b0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), AllocatedResourceStatuses:map[v1.ResourceName]v1.ClaimResourceStatus(nil)}}

Thank you!
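For readers scanning that wall of output: the field this thread cares about can be pulled out of a saved log with a quick `grep`. A small sketch (the `echo` stands in for a log file captured via `TF_LOG_PROVIDER=debug terraform apply 2> tf.log`; the sample line is abridged from the output above):

```shell
# Pull the VolumeName field out of a provider debug log line.
# In real use: grep -o 'VolumeName:"[^"]*"' tf.log
echo 'Spec:v1.PersistentVolumeClaimSpec{..., VolumeName:"pvc-07fea842-b923-408a-a2f1-eed7c9129bbc", ...}' \
  | grep -o 'VolumeName:"[^"]*"'
# prints: VolumeName:"pvc-07fea842-b923-408a-a2f1-eed7c9129bbc"
```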
---

I did just that.

1.8.0:

1.7.5:

Diffing those shows that just the hex numbers are different. BTW, I upgraded to Terraform 1.8.1 and the problem persists.
---

Thank you for the output, @markusheiden, it contains everything that I wanted to see. The provider can read the PVC, and it looks like all the information is in there regardless of the Terraform version. There is one thing that bothers me in your 1.8 output:

```text
Changes to Outputs:
  + graphite_pvc = {
      + id = (known after apply)
```

Would it be possible for you to do one more validation? Create a new isolated configuration:

```hcl
data "kubernetes_persistent_volume_claim" "graphite" {
  metadata {
    name      = "graphite"
    namespace = "graphite"
  }
}

output "this" {
  value = data.kubernetes_persistent_volume_claim.graphite
}
```

Thank you.
---

That works:

I also tested it with providing the namespace via the default value of a variable (like we do it), and that works in the separate .tf file as well. Am I doing something wrong? Or is it a generic Terraform problem? Is our state broken somehow?
---

Thank you for sharing the results, @markusheiden. This is a Terraform issue, and it looks like it is related to using modules. A few issues related to module usage were fixed in 1.8.1, but as you confirmed, that did not fix the issue you are facing. We definitely need to report this issue to the Terraform team; however, we need to reproduce it first. Could you please share more information about your code structure? Is this a single module or a nested module? Any other relevant information that would help us reproduce it is welcome. Thanks!
---

My code is simple: just one component that depends on another module and directly uses the module with the above code. I wasn't able to build a reproducer yet. But when I remove the `depends_on`, it works.
---

Reproducer:

```hcl
module "dummy" {
  source = "./modules/dummy"
}

module "pvc" {
  source = "./modules/pvc"

  # With this line removed, everything works fine.
  depends_on = [module.dummy]
}

output "pvc" {
  # Usually the pvc data is used in the pvc module only.
  value = module.pvc.pvc
}

output "spec" {
  value = module.pvc.pvc.spec[0]
}
```

`modules/dummy`:

```hcl
resource "random_pet" "this" {}
```

`modules/pvc`:

```hcl
data "kubernetes_persistent_volume_claim" "this" {
  metadata {
    name      = "graphite"
    namespace = "graphite"
  }
}

output "pvc" {
  value = data.kubernetes_persistent_volume_claim.this
}
```

The reproducer shows the error with Terraform 1.7.5 too when creating this from scratch. In our infrastructure, the components already exist. When switching from Terraform 1.7.5 to 1.8.1, the error appears even though there is no diff. Removing the `depends_on` makes the error disappear.
---

With Terraform 1.8.2 this problem seems to be fixed.
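Given that the regression appeared in 1.8.0/1.8.1 and is gone in 1.8.2, it may be worth pinning the versions the thread ends on (a generic sketch, not from the thread):

```hcl
terraform {
  # Require the Terraform release in which the regression is fixed.
  required_version = ">= 1.8.2"

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.29"
    }
  }
}
```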
---

Original issue:

Steps to Reproduce: look up the PVC `graphite` in namespace `graphite`.

Expected Behavior: the spec of the PVC should be found.

Actual Behavior: the spec of the PVC is not found.