Spec of PVC no longer found with Terraform 1.8.0 #2468

Open
markusheiden opened this issue Apr 16, 2024 · 11 comments


markusheiden commented Apr 16, 2024

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.8.0
Kubernetes provider version: 2.29.0
Kubernetes version: v1.27.8-gke.1067004

Affected Resource(s)

  • kubernetes_persistent_volume_claim
  • kubernetes_persistent_volume_claim_v1
  • Maybe more...

Terraform Configuration Files

data "kubernetes_persistent_volume_claim" "graphite-volume-claim" {
  metadata {
    name      = "graphite"
    namespace = "graphite"
  }
}

locals {
  volume_name = data.kubernetes_persistent_volume_claim.graphite-volume-claim.spec[0].volume_name
}

Plan Output

╷
│ Error: Invalid index
│ 
│   on ../../../../modules/applications/sea/graphite-backup/main.tf line 32, in locals:
│   32:   volume_name = data.kubernetes_persistent_volume_claim.graphite-volume-claim.spec[0].volume_name
│     ├────────────────
│     │ data.kubernetes_persistent_volume_claim.graphite-volume-claim.spec is empty list of object
│ 
│ The given key does not identify an element in this collection value: the collection has no elements.
╵

Steps to Reproduce

  1. Create the PVC graphite in namespace graphite.
  2. Plan the above HCL with Terraform 1.7.5 and see that it works.
  3. Plan the above HCL with Terraform 1.8.0 and see that it fails with the given message.

Expected Behavior

The spec of the PVC should be found.

Actual Behavior

The spec of the PVC is not found.
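
As a stopgap while this is investigated, the lookup can be guarded so the plan does not fail hard. This is only a sketch using Terraform's built-in try() function; it does not address why spec comes back empty:

```hcl
locals {
  # Fall back to null instead of failing with "Invalid index" when the
  # data source returns an empty spec list during planning.
  volume_name = try(
    data.kubernetes_persistent_volume_claim.graphite-volume-claim.spec[0].volume_name,
    null
  )
}
```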

@arybolovlev
Contributor

Hi @markusheiden,

I am not able to reproduce this issue.

Versions.

$ kubectl version

Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2

$ terraform version

Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/kubernetes v2.29.0

PVC manifest.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
    volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
  creationTimestamp: "2024-04-17T07:54:14Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: this
  namespace: this
  resourceVersion: "221402"
  uid: 2e68ecfa-1004-47f7-98c0-8eb41e92726f
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: this
  volumeMode: Filesystem
status:
  phase: Pending

PVC state.

$ kubectl -n this describe pvc this

Name:          this
Namespace:     this
StorageClass:  this
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
               volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  ExternalProvisioning  9s (x3 over 23s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'pd.csi.storage.gke.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

Terraform code.

data "kubernetes_persistent_volume_claim" "this" {
  metadata {
    name      = "this"
    namespace = "this"
  }
}

locals {
  volume_name = data.kubernetes_persistent_volume_claim.this.spec[0].volume_name
}

output "this" {
  value = local.volume_name
}

Output.

$ terraform apply -auto-approve

data.kubernetes_persistent_volume_claim.this: Reading...
data.kubernetes_persistent_volume_claim.this: Read complete after 1s [id=this/this]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

this = ""

Here is the entire spec output:

locals {
  volume_name = data.kubernetes_persistent_volume_claim.this.spec
}
$ terraform apply -auto-approve

data.kubernetes_persistent_volume_claim.this: Reading...
data.kubernetes_persistent_volume_claim.this: Read complete after 0s [id=this/this]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

this = tolist([
  {
    "access_modes" = toset([
      "ReadWriteOnce",
    ])
    "resources" = tolist([
      {
        "limits" = tomap({})
        "requests" = tomap({
          "storage" = "5Gi"
        })
      },
    ])
    "selector" = tolist([])
    "storage_class_name" = "this"
    "volume_mode" = "Filesystem"
    "volume_name" = ""
  },
])

@markusheiden
Author

First, thanks for caring!

Maybe something is special about our PVC. I will try to find that out.

Meanwhile, here is our exact PVC (I replaced some names/IDs with ...):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
    volume.kubernetes.io/selected-node: gke-...
    volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
  creationTimestamp: "2023-01-26T14:31:09Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/name: graphite
    app.kubernetes.io/part-of: graphite
  name: graphite
  namespace: graphite
  resourceVersion: "..."
  uid: ...
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: standard-regional-rwo
  volumeMode: Filesystem
  volumeName: pvc-...
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 4Gi
  phase: Bound

BTW, my machine is an M2 MacBook. Might that be a possible cause?

@markusheiden
Author

I logged the data, and it looks like the PVC is either not looked up at all or the lookup is deferred:

1.8.0:

Changes to Outputs:
  + graphite_pvc          = {
      + id       = (known after apply)
      + metadata = [
          + {
              + annotations      = null
              + generate_name    = null
              + generation       = (known after apply)
              + labels           = null
              + name             = "graphite"
              + namespace        = "graphite"
              + resource_version = (known after apply)
              + uid              = (known after apply)
            },
        ]
      + spec     = []
    }

1.7.5

Changes to Outputs:
  + graphite_pvc          = {
      + id       = "graphite/graphite"
      + metadata = [
          + {
              + annotations      = {
                  + "pv.kubernetes.io/bind-completed"               = "yes"
                  + "pv.kubernetes.io/bound-by-controller"          = "yes"
                  + "volume.beta.kubernetes.io/storage-provisioner" = "pd.csi.storage.gke.io"
                  + "volume.kubernetes.io/selected-node"            = "gke-..."
                  + "volume.kubernetes.io/storage-provisioner"      = "pd.csi.storage.gke.io"
                }
              + generate_name    = ""
              + generation       = 0
              + labels           = {
                  + "app.kubernetes.io/name"    = "graphite"
                  + "app.kubernetes.io/part-of" = "graphite"
                }
              + name             = "graphite"
              + namespace        = "graphite"
              + resource_version = "..."
              + uid              = "..."
            },
        ]
      + spec     = [
          + {
              + access_modes       = [
                  + "ReadWriteOnce",
                ]
              + resources          = [
                  + {
                      + limits   = {}
                      + requests = {
                          + storage = "4Gi"
                        }
                    },
                ]
              + selector           = []
              + storage_class_name = "standard-regional-rwo"
              + volume_mode        = "Filesystem"
              + volume_name        = "pvc-..."
            },
        ]
    }

@arybolovlev
Contributor

I have tried Kubernetes 1.27 and was still able to get everything. I ran it on an M1 laptop.

Could you please execute Terraform with a debug log and share one of the outputs from there?

$ TF_LOG_PROVIDER=debug terraform apply

I am interested in the output between the Reading... and Read complete after messages. It would look something like this:

2024-04-17T13:43:31.934+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 13:43:31 [INFO] Reading persistent volume claim this
2024-04-17T13:43:31.957+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 13:43:31 [INFO] Received persistent volume claim: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"this", GenerateName:"", Namespace:"this", SelfLink:"", UID:"07fea842-b923-408a-a2f1-eed7c9129bbc", ResourceVersion:"480", Generation:0, CreationTimestamp:time.Date(2024, time.April, 17, 13, 38, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"rancher.io/local-path", "volume.kubernetes.io/selected-node":"kube27-control-plane", "volume.kubernetes.io/storage-provisioner":"rancher.io/local-path"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Apply", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 6, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00b40), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 6, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00b70), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 10, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00ba0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.April, 17, 13, 38, 10, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14000d00bd0), Subresource:"status"}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeName:"pvc-07fea842-b923-408a-a2f1-eed7c9129bbc", StorageClassName:(*string)(0x140010b26a0), VolumeMode:(*v1.PersistentVolumeMode)(0x140010b26b0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), AllocatedResourceStatuses:map[v1.ResourceName]v1.ClaimResourceStatus(nil)}}

Thank you!

@markusheiden
Author

markusheiden commented Apr 17, 2024

I ran just terraform plan, because the problem already arises during planning. I extracted just the part related to the graphite PVC. Hopefully I did not delete too much...

1.8.0

2024-04-17T14:34:10.871+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:10 [INFO] Checking namespace graphite
2024-04-17T14:34:10.909+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:10 [INFO] Namespace graphite exists
2024-04-17T14:34:10.909+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:10 [INFO] Reading namespace graphite
2024-04-17T14:34:10.933+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:10 [INFO] Received namespace: &v1.Namespace{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"graphite", GenerateName:"", Namespace:"", SelfLink:"", UID:"a18004fe-f0ad-445d-9feb-8873efa09d49", ResourceVersion:"292304307", Generation:0, CreationTimestamp:time.Date(2022, time.January, 19, 16, 5, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"kubernetes.io/metadata.name":"graphite"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"HashiCorp", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.January, 19, 16, 5, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x1400136ef90), Subresource:""}}}, Spec:v1.NamespaceSpec{Finalizers:[]v1.FinalizerName{"kubernetes"}}, Status:v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)}}
2024-04-17T14:34:10.944+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:10 [INFO] Checking persistent volume claim graphite
2024-04-17T14:34:10.985+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:10 [INFO] Reading persistent volume claim graphite
2024-04-17T14:34:11.007+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:34:11 [INFO] Received persistent volume claim: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"graphite", GenerateName:"", Namespace:"graphite", SelfLink:"", UID:"c5cf416d-2876-427f-b0c1-2c42026a0329", ResourceVersion:"739208948", Generation:0, CreationTimestamp:time.Date(2023, time.January, 26, 15, 31, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"graphite", "app.kubernetes.io/part-of":"graphite"}, Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"pd.csi.storage.gke.io", "volume.kubernetes.io/selected-node":"gke-wired-pup-node-202301201807095189-569d383a-wgtc", "volume.kubernetes.io/storage-provisioner":"pd.csi.storage.gke.io"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"HashiCorp", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 9, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140011531b8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 14, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140011531e8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14001153218), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 18, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x14001153248), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:4294967296, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"4Gi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeName:"pvc-c5cf416d-2876-427f-b0c1-2c42026a0329", StorageClassName:(*string)(0x14001896cb0), VolumeMode:(*v1.PersistentVolumeMode)(0x14001896cc0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:4294967296, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"4Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), AllocatedResourceStatuses:map[v1.ResourceName]v1.ClaimResourceStatus(nil)}}

1.7.5:

2024-04-17T14:43:38.763+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Checking namespace graphite
2024-04-17T14:43:38.870+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Namespace graphite exists
2024-04-17T14:43:38.870+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Reading namespace graphite
2024-04-17T14:43:38.896+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Received namespace: &v1.Namespace{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"graphite", GenerateName:"", Namespace:"", SelfLink:"", UID:"a18004fe-f0ad-445d-9feb-8873efa09d49", ResourceVersion:"292304307", Generation:0, CreationTimestamp:time.Date(2022, time.January, 19, 16, 5, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"kubernetes.io/metadata.name":"graphite"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"HashiCorp", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.January, 19, 16, 5, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140014a6ca8), Subresource:""}}}, Spec:v1.NamespaceSpec{Finalizers:[]v1.FinalizerName{"kubernetes"}}, Status:v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)}}
2024-04-17T14:43:38.927+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Checking persistent volume claim graphite
2024-04-17T14:43:38.966+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Reading persistent volume claim graphite
2024-04-17T14:43:38.990+0200 [DEBUG] provider.terraform-provider-kubernetes_v2.29.0_x5: 2024/04/17 14:43:38 [INFO] Received persistent volume claim: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"graphite", GenerateName:"", Namespace:"graphite", SelfLink:"", UID:"c5cf416d-2876-427f-b0c1-2c42026a0329", ResourceVersion:"739208948", Generation:0, CreationTimestamp:time.Date(2023, time.January, 26, 15, 31, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"graphite", "app.kubernetes.io/part-of":"graphite"}, Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"pd.csi.storage.gke.io", "volume.kubernetes.io/selected-node":"gke-wired-pup-node-202301201807095189-569d383a-wgtc", "volume.kubernetes.io/storage-provisioner":"pd.csi.storage.gke.io"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"HashiCorp", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 9, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140011de210), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 14, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140011de258), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 18, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140011de288), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 26, 15, 31, 18, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x140011de2b8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:4294967296, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"4Gi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeName:"pvc-c5cf416d-2876-427f-b0c1-2c42026a0329", StorageClassName:(*string)(0x14001212150), VolumeMode:(*v1.PersistentVolumeMode)(0x14001212160), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:4294967296, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"4Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), AllocatedResourceStatuses:map[v1.ResourceName]v1.ClaimResourceStatus(nil)}}

Diffing those outputs shows that only the hex pointer addresses differ.

BTW I upgraded to Terraform 1.8.1 and the problem persists.

@arybolovlev
Contributor

Thank you for the output @markusheiden, it contains everything that I wanted to see. The provider can read the PVC, and it looks like all the information is in there regardless of the Terraform version.

There is one thing that bothers me: in your 1.8 output, id is unknown, although it should be known in the case of a data source.

Changes to Outputs:
  + graphite_pvc          = {
      + id       = (known after apply)

Would it be possible for you to do one more validation? Create a new isolated .tf file with the following code and execute it:

data "kubernetes_persistent_volume_claim" "graphite" {
  metadata {
    name      = "graphite"
    namespace = "graphite"
  }
}

output "this" {
  value = data.kubernetes_persistent_volume_claim.graphite
}

Thank you.

@markusheiden
Author

markusheiden commented Apr 17, 2024

That works:

Changes to Outputs:
  + this = {
      + id       = "graphite/graphite"
      + metadata = [
          + {
              + annotations      = {
                  + "pv.kubernetes.io/bind-completed"               = "yes"
                  + "pv.kubernetes.io/bound-by-controller"          = "yes"
                  + "volume.beta.kubernetes.io/storage-provisioner" = "pd.csi.storage.gke.io"
                  + "volume.kubernetes.io/selected-node"            = "gke-..."
                  + "volume.kubernetes.io/storage-provisioner"      = "pd.csi.storage.gke.io"
                }
              + generate_name    = ""
              + generation       = 0
              + labels           = {
                  + "app.kubernetes.io/name"    = "graphite"
                  + "app.kubernetes.io/part-of" = "graphite"
                }
              + name             = "graphite"
              + namespace        = "graphite"
              + resource_version = "..."
              + uid              = "..."
            },
        ]
      + spec     = [
          + {
              + access_modes       = [
                  + "ReadWriteOnce",
                ]
              + resources          = [
                  + {
                      + limits   = {}
                      + requests = {
                          + storage = "4Gi"
                        }
                    },
                ]
              + selector           = []
              + storage_class_name = "standard-regional-rwo"
              + volume_mode        = "Filesystem"
              + volume_name        = "pvc-..."
            },
        ]
    }

I also tested providing the namespace via the default value of a variable (as we do), and that works in the separate .tf file too.

Am I doing something wrong? Or is it a generic Terraform problem?

Is our state broken somehow? terraform state show module.graphite-backup.data.kubernetes_persistent_volume_claim.graphite-volume-claim shows the correct ID graphite/graphite, though.

@arybolovlev
Contributor

Thank you for sharing the results, @markusheiden.

This is a Terraform issue, and it looks like it is related to the use of modules. A few module-related issues were fixed in 1.8.1, but as you confirmed, that did not fix the issue you are facing.

We definitely need to report this issue to the Terraform team; however, we need to reproduce it first.

Could you please share more information about your code structure? Is this a single module or a nested module? Any other relevant information that would help us reproduce it is also welcome.

Thanks!

@markusheiden
Author

markusheiden commented Apr 17, 2024

My code is simple: just one component that depends on another module and that directly uses the module containing the above code.

I have not been able to build a reproducer yet. But when I remove depends_on = [module.other_component] from the component, it works.

@markusheiden
Author

markusheiden commented Apr 17, 2024

Reproducer:

main.tf

module "dummy" {
  source = "./modules/dummy"
}

module "pvc" {
  source = "./modules/pvc"

  # With this line removed, everything works fine.
  depends_on = [module.dummy]
}

output "pvc" {
  # Usually the pvc data is used in the pvc module only.
  value = module.pvc.pvc
}

output "spec" {
  value = module.pvc.pvc.spec[0]
}

modules/dummy/main.tf

resource "random_pet" "this" {}

modules/pvc/main.tf

data "kubernetes_persistent_volume_claim" "this" {
  metadata {
    name      = "graphite"
    namespace = "graphite"
  }
}

output "pvc" {
  value = data.kubernetes_persistent_volume_claim.this
}

The reproducer also shows the error with Terraform 1.7.5 when creating everything from scratch.

In our infrastructure, the components already exist. When switching from Terraform 1.7.5 to 1.8.1, the error appears even though there is no diff. Removing the depends_on fixes the problem for Terraform 1.8.1.
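
Until a fixed Terraform release is available, the module-level depends_on can be replaced with an implicit dependency through an input variable, assuming the data source itself does not need the ordering. This is only a sketch; ordering_hint and module.dummy.pet_id are hypothetical names:

```hcl
# modules/pvc/variables.tf -- hypothetical variable used only for ordering.
variable "ordering_hint" {
  type    = string
  default = ""
}

# main.tf -- pass a value from module.dummy instead of a module-level
# depends_on. Only resources that reference var.ordering_hint inside the
# module are ordered after module.dummy, so the data source can still be
# read at plan time.
module "pvc" {
  source        = "./modules/pvc"
  ordering_hint = module.dummy.pet_id # hypothetical output of the dummy module
}
```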

@markusheiden
Author

With Terraform 1.8.2, this problem seems to be fixed.
