Mount.nfs: Connection timed out @ Vultr on VKE. #3670

Status: Open
Assignee: niladrih
Labels: Need INFO (Need info from the user)

clearhost-cmd opened this issue Dec 10, 2023 · 6 comments

clearhost-cmd commented Dec 10, 2023

Description

Installing on Vultr results in mount timeout issues.

Expected Behavior

Provisioning should complete.

Current Behavior

Provisioning completes, but volume mounts time out in the pods. Error:

MountVolume.SetUp failed for volume "pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs 10.97.156.196:/ /var/lib/kubelet/pods/9e0a8f74-07c4-4cfe-b221-d43419613acf/volumes/kubernetes.io~nfs/pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec Output: mount.nfs: Connection timed out
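
As a quick way to reproduce the failing mount by hand (a sketch, with the NFS service IP taken from the error above), the export can be tested directly from one of the worker nodes:

# run on a worker node; the NFS client tools are already present since the kubelet uses them
showmount -e 10.97.156.196                    # list the exports offered by the in-cluster NFS server
mkdir -p /tmp/nfs-test
mount -t nfs 10.97.156.196:/ /tmp/nfs-test    # repeats the exact mount the kubelet attempts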

Steps to Reproduce

https://www.vultr.com/docs/how-to-deploy-wordpress-on-vultr-kubernetes-engine/
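
The linked tutorial essentially installs the Bitnami WordPress chart against the cluster's default StorageClass; roughly along these lines (the chart values are assumptions, not copied from the tutorial):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install wordpress-$(date +%s) bitnami/wordpress \
  --set replicaCount=3 \
  --set persistence.storageClass=openebs-rwx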

Your Environment

  • storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-rwx
  uid: 99424904-399e-4a7b-9850-388f56c89e08
  resourceVersion: '41257'
  creationTimestamp: '2023-12-10T19:44:36Z'
  labels:
    k8slens-edit-resource-version: v1
  annotations:
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "vultr-block-storage"
      # NFSServerResourceRequests defines the resource requests for NFS Server
      #- name: NFSServerResourceRequests
      #  value: |-
      #      memory: 50Mi
      #      cpu: 50m
      # NFSServerResourceLimits defines the resource limits for NFS Server
      #- name: NFSServerResourceLimits
      #  value: |-
      #      memory: 100Mi
      #      cpu: 100m
      # LeaseTime defines the renewal period(in seconds) for client state
      #- name: LeaseTime
      #  value: 30
      # GraceTime defines the recovery period(in seconds) to reclaim locks
      #- name: GraceTime
      #  value: 30
      # FilePermissions defines the file ownership and mode specifications
      # for the NFS server's shared filesystem volume.
      # File permission changes are applied recursively if the root of the
      # volume's filesystem does not match the specified value.
      # Volume-specific file permission configuration can be specified by
      # using the FilePermissions config key in the PVC YAML, instead of
      # the StorageClass's.
      #- name: FilePermissions
      #  data:
      #    UID: "1000"
      #    GID: "2000"
      #    mode: "0744"
      # FSGID defines the group permissions of NFS Volume. If it is set
      # then non-root applications should add FSGID value under pod
      # Suplemental groups.
      # The FSGID config key is being deprecated. Please use the
      # FilePermissions config key instead.
      #- name: FSGID
      #  value: "120"
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"cas.openebs.io/config":"-
      name: NFSServerType\n  value: \"kernel\"\n- name: BackendStorageClass\n 
      value: \"openebs-hostpath\"\n# NFSServerResourceRequests defines the
      resource requests for NFS Server\n#- name: NFSServerResourceRequests\n# 
      value: |-\n#      memory: 50Mi\n#      cpu: 50m\n# NFSServerResourceLimits
      defines the resource limits for NFS Server\n#- name:
      NFSServerResourceLimits\n#  value: |-\n#      memory: 100Mi\n#      cpu:
      100m\n# LeaseTime defines the renewal period(in seconds) for client
      state\n#- name: LeaseTime\n#  value: 30\n# GraceTime defines the recovery
      period(in seconds) to reclaim locks\n#- name: GraceTime\n#  value: 30\n#
      FilePermissions defines the file ownership and mode specifications\n# for
      the NFS server's shared filesystem volume.\n# File permission changes are
      applied recursively if the root of the\n# volume's filesystem does not
      match the specified value.\n# Volume-specific file permission
      configuration can be specified by\n# using the FilePermissions config key
      in the PVC YAML, instead of\n# the StorageClass's.\n#- name:
      FilePermissions\n#  data:\n#    UID: \"1000\"\n#    GID: \"2000\"\n#   
      mode: \"0744\"\n# FSGID defines the group permissions of NFS Volume. If it
      is set\n# then non-root applications should add FSGID value under pod\n#
      Suplemental groups.\n# The FSGID config key is being deprecated. Please
      use the\n# FilePermissions config key instead.\n#- name: FSGID\n#  value:
      \"120\"\n","openebs.io/cas-type":"nfsrwx"},"name":"openebs-rwx"},"provisioner":"openebs.io/nfsrwx","reclaimPolicy":"Delete"}
    openebs.io/cas-type: nfsrwx
    storageclass.kubernetes.io/is-default-class: 'true'
  managedFields:
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: storage.k8s.io/v1
      time: '2023-12-10T21:56:46Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
            f:openebs.io/cas-type: {}
        f:provisioner: {}
        f:reclaimPolicy: {}
        f:volumeBindingMode: {}
    - manager: node-fetch
      operation: Update
      apiVersion: storage.k8s.io/v1
      time: '2023-12-10T21:58:03Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:cas.openebs.io/config: {}
            f:storageclass.kubernetes.io/is-default-class: {}
          f:labels:
            .: {}
            f:k8slens-edit-resource-version: {}
  selfLink: /apis/storage.k8s.io/v1/storageclasses/openebs-rwx
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
volumeBindingMode: Immediate
  • kubectl get nodes:
qws-lhr-1-086d6558f2b3   Ready    <none>   147m   v1.28.3
qws-lhr-1-152697679a1c   Ready    <none>   147m   v1.28.3
qws-lhr-1-687c65528f4c   Ready    <none>   147m   v1.28.3
qws-lhr-2-02e3a414d8a3   Ready    <none>   148m   v1.28.3
qws-lhr-2-84b71ef51763   Ready    <none>   147m   v1.28.3
qws-lhr-2-c5e6e7eee014   Ready    <none>   147m   v1.28.3
  • kubectl get pods --all-namespaces:
NAMESPACE       NAME                                                            READY   STATUS              RESTARTS   AGE
default         wordpress-1702241462-58d8b85df-krj6d                            0/1     ContainerCreating   0          75m
default         wordpress-1702241462-58d8b85df-q8zp5                            0/1     ContainerCreating   0          75m
default         wordpress-1702241462-58d8b85df-r7hgp                            0/1     ContainerCreating   0          75m
default         wordpress-1702241462-mariadb-0                                  0/1     ContainerCreating   0          75m
kube-system     calico-kube-controllers-558d465845-qbjkg                        1/1     Running             0          149m
kube-system     calico-node-45zjh                                               1/1     Running             0          148m
kube-system     calico-node-72n7t                                               1/1     Running             0          148m
kube-system     calico-node-j2km8                                               1/1     Running             0          148m
kube-system     calico-node-l848j                                               1/1     Running             0          148m
kube-system     calico-node-sh42l                                               1/1     Running             0          148m
kube-system     calico-node-td948                                               1/1     Running             0          148m
kube-system     cluster-autoscaler-65fd779bfb-wh8hf                             1/1     Running             0          148m
kube-system     coredns-7bdbb56dfb-5v5gb                                        1/1     Running             0          149m
kube-system     csi-vultr-controller-0                                          4/4     Running             0          148m
kube-system     csi-vultr-node-8bk2g                                            2/2     Running             0          147m
kube-system     csi-vultr-node-dqhgs                                            2/2     Running             0          147m
kube-system     csi-vultr-node-fqcsx                                            2/2     Running             0          147m
kube-system     csi-vultr-node-j72qm                                            2/2     Running             0          147m
kube-system     csi-vultr-node-mmnxs                                            2/2     Running             0          147m
kube-system     csi-vultr-node-w6gwg                                            2/2     Running             0          147m
kube-system     konnectivity-agent-6fsmx                                        1/1     Running             0          147m
kube-system     konnectivity-agent-97djn                                        1/1     Running             0          147m
kube-system     konnectivity-agent-d5msw                                        1/1     Running             0          147m
kube-system     konnectivity-agent-gj9zl                                        1/1     Running             0          147m
kube-system     konnectivity-agent-h2mn7                                        1/1     Running             0          147m
kube-system     konnectivity-agent-z8r4q                                        1/1     Running             0          147m
lens-metrics    kube-state-metrics-5454c5fd86-b4cgb                             1/1     Running             0          145m
lens-metrics    node-exporter-6zhxv                                             1/1     Running             0          145m
lens-metrics    node-exporter-c49rz                                             1/1     Running             0          145m
lens-metrics    node-exporter-gmwf4                                             1/1     Running             0          145m
lens-metrics    node-exporter-h9fg9                                             1/1     Running             0          145m
lens-metrics    node-exporter-nfs2q                                             1/1     Running             0          145m
lens-metrics    node-exporter-vg9w4                                             1/1     Running             0          145m
lens-metrics    prometheus-0                                                    1/1     Running             0          145m
lens-platform   bored-agent-5f78b78d87-8t56x                                    1/1     Running             0          145m
lens-platform   bored-agent-updater-28370640-wzhsk                              0/1     Completed           0          126m
lens-platform   bored-agent-updater-28370700-2gd9k                              0/1     Completed           0          66m
lens-platform   bored-agent-updater-28370760-q5fsk                              0/1     Completed           0          6m15s
lens-security   trivy-operator-9c85c765b-lvnqb                                  1/1     Running             0          145m
openebs         nfs-pvc-3ac20861-739e-4750-a078-162975a43742-64b5c566b5-spgvd   0/1     ContainerCreating   0          75m
openebs         nfs-pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec-b4b559fc8-nj7wf    0/1     ContainerCreating   0          75m
openebs         openebs-localpv-provisioner-74cbc5d5b5-r9brn                    1/1     Running             0          141m
openebs         openebs-ndm-52gtn                                               1/1     Running             0          141m
openebs         openebs-ndm-7ft5z                                               1/1     Running             0          141m
openebs         openebs-ndm-9vv98                                               1/1     Running             0          141m
openebs         openebs-ndm-cluster-exporter-cf48c9589-txdgr                    1/1     Running             0          141m
openebs         openebs-ndm-jw7w4                                               1/1     Running             0          141m
openebs         openebs-ndm-node-exporter-7ht55                                 1/1     Running             0          141m
openebs         openebs-ndm-node-exporter-hj54b                                 1/1     Running             0          141m
openebs         openebs-ndm-node-exporter-lmz9k                                 1/1     Running             0          141m
openebs         openebs-ndm-node-exporter-qvq49                                 1/1     Running             0          141m
openebs         openebs-ndm-node-exporter-rgdtb                                 1/1     Running             0          141m
openebs         openebs-ndm-node-exporter-ts9l5                                 1/1     Running             0          141m
openebs         openebs-ndm-operator-745b79d6bd-w9jwh                           1/1     Running             0          141m
openebs         openebs-ndm-x48mn                                               1/1     Running             0          141m
openebs         openebs-ndm-xx2sq                                               1/1     Running             0          141m
openebs         openebs-nfs-provisioner-549958fc7-pdk7x                         1/1     Running             0          141m
shipa           shipa-agent-59f7f54d77-xf8mk                                    1/1     Running             0          144m
shipa           shipa-busybody-fgh5r                                            1/1     Running             0          144m
shipa           shipa-busybody-gbtp8                                            1/1     Running             0          144m
shipa           shipa-busybody-hs2gj                                            1/1     Running             0          144m
shipa           shipa-busybody-r7pzz                                            1/1     Running             0          144m
shipa           shipa-busybody-rdrs6                                            1/1     Running             0          144m
shipa           shipa-busybody-xzlkr                                            1/1     Running             0          144m
shipa           shipa-controller-68dc6b898d-ts9mv                               1/1     Running             0          144m
  • kubectl get services:
kubernetes                     ClusterIP   10.96.0.1        <none>        443/TCP          149m
wordpress-1702241462           ClusterIP   10.111.243.53    <none>        80/TCP,443/TCP   75m
wordpress-1702241462-mariadb   ClusterIP   10.101.105.215   <none>        3306/TCP         75m
  • kubectl get sc:
openebs-device                   openebs.io/local      Delete          WaitForFirstConsumer   false                  142m
openebs-hostpath                 openebs.io/local      Delete          WaitForFirstConsumer   false                  142m
openebs-rwx (default)            openebs.io/nfsrwx     Delete          Immediate              false                  142m
vultr-block-storage              block.csi.vultr.com   Delete          Immediate              true                   149m
vultr-block-storage-hdd          block.csi.vultr.com   Delete          Immediate              true                   149m
vultr-block-storage-hdd-retain   block.csi.vultr.com   Retain          Immediate              true                   149m
vultr-block-storage-retain       block.csi.vultr.com   Retain          Immediate              true                   149m
  • kubectl get pv:
pvc-274dad4aa6a44548                       8Gi        RWO            Delete           Bound    openebs/nfs-pvc-3ac20861-739e-4750-a078-162975a43742   vultr-block-storage            76m
pvc-3ac20861-739e-4750-a078-162975a43742   8Gi        RWO            Delete           Bound    default/data-wordpress-1702241462-mariadb-0            openebs-rwx                    76m
pvc-5a463a9e4fbe4d30                       20Gi       RWO            Delete           Bound    lens-metrics/data-prometheus-0                         vultr-block-storage            146m
pvc-dc8bbb285a244083                       50Gi       RWO            Delete           Bound    openebs/nfs-pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec   vultr-block-storage            76m
pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec   50Gi       RWX            Delete           Bound    default/wordpress-1702241462                           openebs-rwx                    76m
  • kubectl get pvc:
data-wordpress-1702241462-mariadb-0   Bound    pvc-3ac20861-739e-4750-a078-162975a43742   8Gi        RWO            openebs-rwx    76m
wordpress-1702241462                  Bound    pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec   50Gi       RWX            openebs-rwx    76m
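
For context, a PersistentVolumeClaim that exercises the openebs-rwx class above would look something like this (a minimal sketch; the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-rwx-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # served via the NFS layer on top of vultr-block-storage
  storageClassName: openebs-rwx
  resources:
    requests:
      storage: 1Gi
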
niladrih self-assigned this Dec 13, 2023
niladrih (Member) commented:

@clearhost-cmd could you share the logs of the NFS provisioner container, the NFS server container, and the kubelet logs (journalctl -u kubelet) from the node where your application Pod tried mounting the NFS volume?
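
For reference, those can typically be gathered along these lines (pod names taken from the listing earlier in this issue; they will differ on a fresh cluster):

kubectl -n openebs logs openebs-nfs-provisioner-549958fc7-pdk7x
kubectl -n openebs logs nfs-pvc-dfe3a690-b30e-425f-98d1-5e55d9283bec-b4b559fc8-nj7wf
journalctl -u kubelet    # run on the node where the application Pod was scheduled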

niladrih (Member) commented:

Also, which version of the NFS-Provisioner is this?
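
For reference, the version can usually be read off the provisioner image tag, e.g. (the Deployment name here is inferred from the pod listing above):

kubectl -n openebs get deploy openebs-nfs-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].image}'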

clearhost-cmd commented Dec 13, 2023

I am running the commands below on a completely fresh Vultr VKE cluster:

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
kubectl apply -f https://openebs.github.io/charts/nfs-operator.yaml

I then update the BackendStorageClass in the openebs-rwx StorageClass config:

  - name: BackendStorageClass
    value: "vultr-block-storage"

The PVCs are then created and bound:

[screenshot: PVC list, all Bound]

Likewise with the PVs:

[screenshot: PV list, all Bound]

In the events I see:

Warning (openebs, Pod nfs-pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233-765556fbc8-59fpv, kubelet qws-lhr-3-9c5a194aa9a6):
MountVolume.MountDevice failed for volume "pvc-19b070c0d55944fa" : rpc error: code = Internal desc = mounting failed: exit status 255 cmd: 'mount -t ext4 /dev/disk/by-id/virtio-lhr-7cdcd91b0a184d /var/lib/kubelet/plugins/kubernetes.io/csi/block.csi.vultr.com/9b0820a416d9ec58abad88e6254d35c253ab4100436d3d59bba6f985475f4b61/globalmount' output: "mount: mounting /dev/disk/by-id/virtio-lhr-7cdcd91b0a184d on /var/lib/kubelet/plugins/kubernetes.io/csi/block.csi.vultr.com/9b0820a416d9ec58abad88e6254d35c253ab4100436d3d59bba6f985475f4b61/globalmount failed: Invalid argument\n"

Warning (openebs, Pod nfs-pvc-f9cefc61-4242-4dcb-8f7f-ffc434db776c-68f6655f65-6nvjg, kubelet qws-lhr-3-9c5a194aa9a6):
MountVolume.MountDevice failed for volume "pvc-af00f7a137eb4e8b" : rpc error: code = Internal desc = mounting failed: exit status 255 cmd: 'mount -t ext4 /dev/disk/by-id/virtio-lhr-1b62bbe018f54e /var/lib/kubelet/plugins/kubernetes.io/csi/block.csi.vultr.com/5a1014e9dbba4387ee042a4770b92f0ab7709e966cada53c5944f74c1b91c2cc/globalmount' output: "mount: mounting /dev/disk/by-id/virtio-lhr-1b62bbe018f54e on /var/lib/kubelet/plugins/kubernetes.io/csi/block.csi.vultr.com/5a1014e9dbba4387ee042a4770b92f0ab7709e966cada53c5944f74c1b91c2cc/globalmount failed: Invalid argument\n"

Warning (Pod wordpress-1702491394-659457bc9c-cszp6, kubelet qws-lhr-3-9c5a194aa9a6):
MountVolume.SetUp failed for volume "pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs 10.108.235.115:/ /var/lib/kubelet/pods/49186d50-0568-4367-b1dd-96e2d2ecbe65/volumes/kubernetes.io~nfs/pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233 Output: mount.nfs: Connection timed out

Warning (Pod wordpress-1702491394-659457bc9c-6qlkt, kubelet qws-lhr-1-bd6706a00cbf):
MountVolume.SetUp failed for volume "pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs 10.108.235.115:/ /var/lib/kubelet/pods/82baac06-3bd6-4fbc-b548-74b6ebfad173/volumes/kubernetes.io~nfs/pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233 Output: mount.nfs: Connection timed out

Warning (Pod wordpress-1702491394-659457bc9c-mcv4m, kubelet qws-lhr-2-c5e6e7eee014):
MountVolume.SetUp failed for volume "pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs 10.108.235.115:/ /var/lib/kubelet/pods/9daa16c0-8b3b-4007-9cc5-9396ec70e23a/volumes/kubernetes.io~nfs/pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233 Output: mount.nfs: Connection timed out

Warning:
MountVolume.SetUp failed for volume "pvc-f9cefc61-4242-4dcb-8f7f-ffc434db776c" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs 10.101.207.15:/ /var/lib/kubelet/pods/830b7a07-1325-48bb-8410-56d2e70b3ca1/volumes/kubernetes.io~nfs/pvc-f9cefc61-4242-4dcb-8f7f-ffc434db776c Output: mount.nfs: Connection timed out

Journal from the kubelet on one of the affected nodes (qws-lhr-2-c5e6e7eee014):

-- Journal begins at Sun 2023-12-10 19:34:48 UTC, ends at Wed 2023-12-13 18:26:28 UTC. --
Dec 10 19:34:50 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:34:50.552965    3252 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/resolvconf-pull-resolved.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/resolvconf-pull-resolved.service: no such file or directory
Dec 10 19:34:57 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:34:57.325898    3252 transport.go:301] Unable to cancel request for *otelhttp.Transport
Dec 10 19:34:57 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:34:57.326004    3252 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/qws-lhr-2-c5e6e7eee014?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)>
Dec 10 19:34:57 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:34:57.432397    3252 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"qws-lhr-2-c5e6e7eee014\" not found"
Dec 10 19:35:07 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:07.432984    3252 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"qws-lhr-2-c5e6e7eee014\" not found"
Dec 10 19:35:07 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:07.527120    3252 transport.go:301] Unable to cancel request for *otelhttp.Transport
Dec 10 19:35:07 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:07.527231    3252 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/qws-lhr-2-c5e6e7eee014?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)>
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:17.301139    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dqws-lhr-2-c5e6e7eee014&limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.301211    3252 trace.go:236] Trace[1039233495]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:34:47.299) (total time: 30001ms):
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[1039233495]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dqws-lhr-2-c5e6e7eee014&limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout 30001ms (19:35:17.301)
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[1039233495]: [30.001633596s] [30.001633596s] END
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.301230    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dqws-lhr-2-c5e6e7eee014&limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:17.301273    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.301295    3252 trace.go:236] Trace[1810005452]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:34:47.301) (total time: 30000ms):
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[1810005452]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout 30000ms (19:35:17.301)
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[1810005452]: [30.000203824s] [30.000203824s] END
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.301300    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.306634    3252 csi_plugin.go:913] Failed to contact API server when waiting for CSINode publishing: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/storage.k8s.io/v1/csinodes/qws-lhr-2-c5e6e7eee014": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.307770    3252 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"qws-lhr-2-c5e6e7eee014.179f8f6e1f6a72c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTim>
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:17.339641    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.339697    3252 trace.go:236] Trace[443572792]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:34:47.339) (total time: 30000ms):
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[443572792]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout 30000ms (19:35:17.339)
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[443572792]: [30.000478791s] [30.000478791s] END
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.339707    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:17.352090    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.352162    3252 trace.go:236] Trace[675022571]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:34:47.348) (total time: 30003ms):
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[675022571]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout 30003ms (19:35:17.352)
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[675022571]: [30.003206291s] [30.003206291s] END
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.352172    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 45.77.89.80:6443: i/o timeout
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.427705    3252 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes\": dial tcp 45.77.89.80:6443: i/o timeout" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.434086    3252 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"qws-lhr-2-c5e6e7eee014\" not found"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.628565    3252 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.629649    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientMemory"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.629799    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasNoDiskPressure"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.629908    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientPID"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:17.630025    3252 kubelet_node_status.go:70] "Attempting to register node" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.631708    3252 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes\": EOF" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:17.928631    3252 transport.go:301] Unable to cancel request for *otelhttp.Transport
Dec 10 19:35:17 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:17.928937    3252 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/qws-lhr-2-c5e6e7eee014?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.032896    3252 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.033672    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientMemory"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.033774    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasNoDiskPressure"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.033874    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientPID"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.033971    3252 kubelet_node_status.go:70] "Attempting to register node" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:18.035341    3252 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes\": EOF" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.836471    3252 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.837424    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientMemory"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.837527    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasNoDiskPressure"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.837628    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientPID"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:18.837726    3252 kubelet_node_status.go:70] "Attempting to register node" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:18 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:18.839370    3252 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes\": EOF" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:20 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:20.440118    3252 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Dec 10 19:35:20 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:20.441237    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientMemory"
Dec 10 19:35:20 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:20.441357    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasNoDiskPressure"
Dec 10 19:35:20 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:20.441444    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientPID"
Dec 10 19:35:20 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:20.441545    3252 kubelet_node_status.go:70] "Attempting to register node" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:20 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:20.443460    3252 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes\": EOF" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:22 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:22.001419    3252 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"qws-lhr-2-c5e6e7eee014.179f8f6e1f6a72c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTim>
Dec 10 19:35:23 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:23.644319    3252 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Dec 10 19:35:23 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:23.645262    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientMemory"
Dec 10 19:35:23 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:23.645361    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasNoDiskPressure"
Dec 10 19:35:23 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:23.645449    3252 kubelet_node_status.go:669] "Recording event message for node" node="qws-lhr-2-c5e6e7eee014" event="NodeHasSufficientPID"
Dec 10 19:35:23 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:23.645544    3252 kubelet_node_status.go:70] "Attempting to register node" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:23 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:23.647274    3252 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes\": EOF" node="qws-lhr-2-c5e6e7eee014"
Dec 10 19:35:27 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:27.434440    3252 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"qws-lhr-2-c5e6e7eee014\" not found"
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:28.182118    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/services?limit=500&resourceVersion=0": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:28.182338    3252 trace.go:236] Trace[465541580]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:35:18.156) (total time: 10026ms):
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[465541580]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/services?limit=500&resourceVersion=0": EOF 10026ms (19:35:28.182)
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[465541580]: [10.026254352s] [10.026254352s] END
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:28.182492    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/services?limit=500&resourceVersion=0": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:28.331380    3252 csi_plugin.go:913] Failed to contact API server when waiting for CSINode publishing: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/storage.k8s.io/v1/csinodes/qws-lhr-2-c5e6e7eee014": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:28.563255    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dqws-lhr-2-c5e6e7eee014&limit=500&resourceVersion=0": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:28.563681    3252 trace.go:236] Trace[525637543]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:35:18.537) (total time: 10025ms):
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[525637543]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dqws-lhr-2-c5e6e7eee014&limit=500&resourceVersion=0": EOF 10025ms (19:35:28.563)
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[525637543]: [10.025980562s] [10.025980562s] END
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:28.563834    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dqws-lhr-2-c5e6e7eee014&limit=500&resourceVersion=0": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:28.752140    3252 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/qws-lhr-2-c5e6e7eee014?timeout=10s\": context deadline exceeded - error from a previous attempt: EOF" interval="1.6s"
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:28.821479    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: I1210 19:35:28.821745    3252 trace.go:236] Trace[365613302]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (10-Dec-2023 19:35:18.799) (total time: 10022ms):
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[365613302]: ---"Objects listed" error:Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": EOF 10021ms (19:35:28.821)
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: Trace[365613302]: [10.022258988s] [10.022258988s] END
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: E1210 19:35:28.821891    3252 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": EOF
Dec 10 19:35:28 qws-lhr-2-c5e6e7eee014 kubelet[3252]: W1210 19:35:28.916474    3252 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://041329fc-94c2-4c11-93e4-cd80205ea45d.vultr-k8s.com:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": EOF

niladrih (Member) commented:

The mount command for the backing volumes used by the NFS servers is erroring out with the following error message:

 Operation for "{volumeName:kubernetes.io/csi/block.csi.vultr.com^7cdcd91b-0a18-4dfc-b4eb-ad2a8c5a351b podName: nodeName:}" failed. No retries permitted until 2023-12-14 08:21:55.513465952 +0000 UTC m=+205370.670595393 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-19b070c0d55944fa" (UniqueName: "kubernetes.io/csi/block.csi.vultr.com^7cdcd91b-0a18-4dfc-b4eb-ad2a8c5a351b") pod "nfs-pvc-b0db3135-d79f-4e50-94ef-d876ec5bf233-765556fbc8-gf97w" (UID: "22fab542-a651-44b5-9388-aab0be9d31b7") : rpc error: code = Internal desc = mounting failed: exit status 255 cmd: 'mount -t ext4 /dev/disk/by-id/virtio-lhr-7cdcd91b0a184d /var/lib/kubelet/plugins/kubernetes.io/csi/block.csi.vultr.com/9b0820a416d9ec58abad88e6254d35c253ab4100436d3d59bba6f985475f4b61/globalmount' output: "mount: mounting /dev/disk/by-id/virtio-lhr-7cdcd91b0a184d on /var/lib/kubelet/plugins/kubernetes.io/csi/block.csi.vultr.com/9b0820a416d9ec58abad88e6254d35c253ab4100436d3d59bba6f985475f4b61/globalmount failed: Invalid argument\n"

There might be an issue with the CSI driver that provisions the backing volume. It might be a good idea to investigate on that front.
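
For what it's worth, an ext4 mount failing with "Invalid argument" often points at a device that carries no (or a partially written) filesystem; a node-side check could look like this sketch (the device path is taken from the error above):

# run on the node that reported the error
dmesg | tail -n 20                                # the kernel logs the concrete ext4 mount failure reason here
blkid /dev/disk/by-id/virtio-lhr-7cdcd91b0a184d   # verify an ext4 filesystem is actually present on the device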

niladrih (Member) commented:

@clearhost-cmd were you able to deploy the NFS servers?
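
For reference, the NFS servers would show up as nfs-pvc-* pods in the openebs namespace, e.g.:

kubectl -n openebs get pods | grep nfs-pvc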

avishnu added the Need INFO (Need info from the user) label and removed the LINUX Arch label Jan 20, 2024
danjenkins commented:

Looks like I'm having the same issue @niladrih, both with the NFS servers and with mounting block storage directly. I've got an active ticket open with Vultr from a little while ago.
