
PVC on cephfs with EC is always pending. #9244

Closed
manjo-git opened this issue Nov 24, 2021 · 11 comments


manjo-git commented Nov 24, 2021

I commented on the issue that was closed (#7319), and I am opening a new issue here so that it might get some attention. I am using the master branch (commit 89159a5 (HEAD -> release-1.7, origin/release-1.7)) and I am running into the same issue where the PVC is always pending. Here are some logs based on the common-issues document. All my OSDs are bluestore.

>ceph osd metadata $ID | grep -e id -e hostname -e osd_objectstore
        "id": 0,
        "container_hostname": "rook-ceph-osd-0-9d8d6b89-jt7br",
        "device_ids": "nvme8n1=Micron_7300_MTFDHBE3T8TDF_2017288AEC0E",
        "hostname": "node6",
        "osd_objectstore": "bluestore",
        "id": 1,

Here is how I deployed my cluster with cephfs-ec.

kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f toolbox.yaml
kubectl create -f csi/cephfs/storageclass-ec.yaml
kubectl create -f filesystem-ec.yaml 

I created a PVC, but it is always pending:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: rook-ceph
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Ti
  storageClassName: rook-cephfs
$ kubectl create -f mypvc.yaml 

$ kubectl get pvc -n rook-ceph
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Pending                                      rook-cephfs    6s

$ kubectl get pvc -n rook-ceph
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Pending                                      rook-cephfs    5m40s
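
A quick way to surface the reason a claim stays Pending is to check its events; a minimal sketch, using the claim above:

# The Events section at the end lists the external provisioner's
# Provisioning / ProvisioningFailed messages for this claim.
kubectl -n rook-ceph describe pvc my-pvc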

Logs gathered per the common-issues document:

$ kubectl get cephcluster -A
NAMESPACE   NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE   MESSAGE                        HEALTH      EXTERNAL
rook-ceph   rook-ceph   /var/lib/rook     3          40m   Ready   Cluster created successfully   HEALTH_OK

$ kubectl get sc -A
NAME                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io              Delete          Immediate           true                   86d
rook-cephfs          rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   2m27s

$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-ec-a-64d488bb8f-wd9rl   1/1     Running   0          3m1s
rook-ceph-mds-myfs-ec-b-5bbd7d7875-gchb9   1/1     Running   0          3m

$ kubectl -n rook-ceph get pod -l app=csi-cephfsplugin-provisioner
NAME                                            READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-provisioner-54fdf994d6-8kb2g   6/6     Running   0          44m
csi-cephfsplugin-provisioner-54fdf994d6-mnbk7   6/6     Running   0          44m

$ kubectl -n rook-ceph logs csi-cephfsplugin-provisioner-54fdf994d6-8kb2g csi-provisioner
I1124 15:52:19.671409       1 csi-provisioner.go:138] Version: v3.0.0
I1124 15:52:19.671461       1 csi-provisioner.go:161] Building kube configs for running in cluster...
I1124 15:52:20.674332       1 common.go:111] Probing CSI driver for readiness
I1124 15:52:20.677640       1 csi-provisioner.go:277] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1124 15:52:20.678851       1 leaderelection.go:248] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
E1124 15:52:20.852911       1 leaderelection.go:334] error initially creating leader election record: leases.coordination.k8s.io "rook-ceph-cephfs-csi-ceph-com" already exists
manjo-git added the bug label on Nov 24, 2021

Madhu-1 commented Nov 25, 2021

Increase the log level to 5

# Set logging level for csi containers.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
# CSI_LOG_LEVEL: "0"
and paste the csi-cephfsplugin container logs from the csi-cephfsplugin-provisioner pod.
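
A minimal sketch of applying that, assuming the stock rook-ceph-operator-config ConfigMap from the example manifests (adjust names to your deployment):

# Bump the CSI log level via the operator config ConfigMap.
kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
  --type merge -p '{"data":{"CSI_LOG_LEVEL":"5"}}'

# Then collect the csi-cephfsplugin container logs from one of the provisioner pods:
kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-cephfsplugin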

manjo-git (Author) commented:

$ kubectl get pvc -n rook-ceph
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
arm-pvc   Pending                                      rook-cephfs    4m17s

Here are the logs for csi-cephfsplugin-provisioner with CSI_LOG_LEVEL: "5" set.

$ kubectl -n rook-ceph logs csi-cephfsplugin-provisioner-6847ff9dc-gptmd csi-provisioner

I1129 15:10:05.112192       1 feature_gate.go:245] feature gates: &{map[]}
I1129 15:10:05.112242       1 csi-provisioner.go:138] Version: v3.0.0
I1129 15:10:05.112260       1 csi-provisioner.go:161] Building kube configs for running in cluster...
I1129 15:10:05.114178       1 connection.go:154] Connecting to unix:///csi/csi-provisioner.sock
I1129 15:10:06.115832       1 common.go:111] Probing CSI driver for readiness
I1129 15:10:06.115853       1 connection.go:183] GRPC call: /csi.v1.Identity/Probe
I1129 15:10:06.115860       1 connection.go:184] GRPC request: {}
I1129 15:10:06.118548       1 connection.go:186] GRPC response: {}
I1129 15:10:06.118611       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.118620       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I1129 15:10:06.118624       1 connection.go:184] GRPC request: {}
I1129 15:10:06.118961       1 connection.go:186] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"v3.4.0"}
I1129 15:10:06.119021       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.119033       1 csi-provisioner.go:205] Detected CSI driver rook-ceph.cephfs.csi.ceph.com
I1129 15:10:06.119043       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I1129 15:10:06.119048       1 connection.go:184] GRPC request: {}
I1129 15:10:06.119517       1 connection.go:186] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]}
I1129 15:10:06.119657       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.119666       1 connection.go:183] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I1129 15:10:06.119671       1 connection.go:184] GRPC request: {}
I1129 15:10:06.120052       1 connection.go:186] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":7}}}]}
I1129 15:10:06.120160       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.120475       1 csi-provisioner.go:277] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1129 15:10:06.120836       1 controller.go:731] Using saving PVs to API server in background
I1129 15:10:06.122687       1 leaderelection.go:248] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
I1129 15:10:06.205935       1 leaderelection.go:258] successfully acquired lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:06.205972       1 leader_election.go:212] new leader detected, current leader: csi-cephfsplugin-provisioner-6847ff9dc-gptmd
I1129 15:10:06.205997       1 leader_election.go:205] became leader, starting
I1129 15:10:06.206217       1 reflector.go:219] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.206232       1 reflector.go:255] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.206271       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.206281       1 reflector.go:255] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.212015       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:06.307001       1 shared_informer.go:270] caches populated
I1129 15:10:06.307019       1 shared_informer.go:270] caches populated
I1129 15:10:06.307033       1 controller.go:810] Starting provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-6847ff9dc-gptmd_f58041f5-15b4-44ac-8ab8-ddc508636297!
I1129 15:10:06.307045       1 clone_controller.go:66] Starting CloningProtection controller
I1129 15:10:06.307059       1 volume_store.go:97] Starting save volume queue
I1129 15:10:06.307086       1 clone_controller.go:82] Started CloningProtection controller
I1129 15:10:06.307200       1 reflector.go:219] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:844
I1129 15:10:06.307210       1 reflector.go:255] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:844
I1129 15:10:06.307284       1 reflector.go:219] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:847
I1129 15:10:06.307295       1 reflector.go:255] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:847
I1129 15:10:06.407700       1 shared_informer.go:270] caches populated
I1129 15:10:06.407884       1 controller.go:859] Started provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-6847ff9dc-gptmd_f58041f5-15b4-44ac-8ab8-ddc508636297!
I1129 15:10:11.220177       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:16.228032       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:21.236364       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:26.244002       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:31.253112       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:36.260629       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:41.269620       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:46.278979       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:51.288127       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:56.296625       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:01.304077       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:06.313528       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:11.323184       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:16.331819       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:21.340674       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:26.349500       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:31.359543       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:36.366595       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:41.377380       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:46.385444       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:51.395278       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:56.402886       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:01.410429       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:06.418248       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:11.425209       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:16.431710       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:21.438808       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:27.029099       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:32.035140       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:37.074098       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:42.122497       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:47.127897       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:52.134509       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:57.140792       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:02.146455       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:07.151829       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:12.158472       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:17.165428       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:22.172716       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:27.183504       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:32.190275       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:37.203233       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:42.213463       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:47.225425       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:52.232414       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:57.241947       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:02.248688       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:07.258989       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:12.273209       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:17.282176       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:22.290665       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:27.296811       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:32.304717       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:37.314932       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:42.323891       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:47.332781       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:52.341015       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:57.350658       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:02.358380       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:07.365093       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:12.372594       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:17.380015       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:22.386613       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:27.393358       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:32.400349       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:37.408549       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:42.416563       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:47.424464       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:52.434904       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:57.443991       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:02.450697       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:07.458156       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:12.466824       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:17.478043       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:22.490662       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:27.501008       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:32.512715       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:37.519697       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:42.531995       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:47.538689       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:52.549634       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:57.560041       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:02.571539       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:07.580811       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:12.590805       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:13.310324       1 reflector.go:535] sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:847: Watch close - *v1.StorageClass total 0 items received
I1129 15:17:17.210638       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received
I1129 15:17:17.597124       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:22.608514       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:27.616011       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:32.624731       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:37.631723       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:42.642401       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:47.651943       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:52.663645       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:57.670373       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:02.679026       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:07.685976       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:12.695909       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:17.702979       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:22.713369       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:25.209254       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 0 items received
I1129 15:18:27.719489       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:32.320232       1 reflector.go:535] sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:844: Watch close - *v1.PersistentVolume total 0 items received
I1129 15:18:32.729821       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:37.737487       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:42.745610       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:47.755636       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:52.764050       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:57.770842       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:02.778736       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:07.786408       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:12.794784       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:17.801650       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:22.809195       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:27.815807       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:32.823911       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:37.831505       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:42.838828       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:47.845778       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:52.855403       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:57.863449       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:02.915588       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:07.921954       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:12.979763       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:17.986244       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:23.055052       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:28.061861       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:33.118410       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:38.124666       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:43.173728       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:48.181393       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:53.247677       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:58.254766       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:03.311160       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:08.319394       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:13.375283       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:18.385423       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:23.437170       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:28.445826       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:33.509920       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:34.810734       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:34.810922       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:34.814259       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:34.814275       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:34.830267       1 connection.go:186] GRPC response: {}
I1129 15:21:34.830347       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:34.830375       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:34.830409       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:34.830419       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 0
E1129 15:21:34.830438       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:34.830470       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.331489       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:35.331645       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:35.338861       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:35.338873       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:35.340830       1 connection.go:186] GRPC response: {}
I1129 15:21:35.340857       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.340877       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.340904       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:35.340912       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 1
E1129 15:21:35.340928       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.340974       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.341033       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:36.341184       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:36.346746       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:36.346758       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:36.348416       1 connection.go:186] GRPC response: {}
I1129 15:21:36.348440       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.348468       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.348502       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:36.348513       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 2
E1129 15:21:36.348532       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.348553       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.348969       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:38.349125       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:38.353320       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:38.353343       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:38.355092       1 connection.go:186] GRPC response: {}
I1129 15:21:38.355121       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.355142       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.355171       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:38.355181       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 3
E1129 15:21:38.355202       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.355220       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.517108       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:42.355882       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:42.356045       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:42.359627       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:42.359639       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:42.361467       1 connection.go:186] GRPC response: {}
I1129 15:21:42.361501       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:42.361534       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:42.361576       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:42.361590       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 4
E1129 15:21:42.361616       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:42.361603       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:43.572854       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:48.579731       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:50.362041       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:50.362218       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:50.366506       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:50.366519       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:50.368079       1 connection.go:186] GRPC response: {}
I1129 15:21:50.368108       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:50.368135       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:50.368178       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:50.368188       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 5
E1129 15:21:50.368210       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:50.368230       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:53.638259       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:58.647642       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:03.705820       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:06.369006       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:22:06.369169       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:22:06.373185       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:22:06.373196       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:22:06.374945       1 connection.go:186] GRPC response: {}
I1129 15:22:06.374970       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:06.374990       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:06.375013       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:22:06.375022       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 6
E1129 15:22:06.375037       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:06.375055       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:08.715562       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:13.763563       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:18.772260       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:23.835831       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:28.845865       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com


Madhu-1 commented Nov 30, 2021

I1129 15:22:06.374970 1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found

This means the CephFS filesystem was not created. Can you please check whether the filesystem was created?
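
A couple of ways to verify that, assuming the toolbox from toolbox.yaml is running (a sketch, not the only way):

# Check what Rook thinks of the CephFilesystem resource, and what Ceph itself reports.
kubectl -n rook-ceph get cephfilesystem
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs ls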


manjo-git commented Nov 30, 2021

It looks like the filesystem was created, as far as kubectl shows:

$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-65bc798c6b-gsm44      1/1     Running   0          91s
rook-ceph-mds-myfs-b-799464fd66-vv6jl      1/1     Running   0          90s
rook-ceph-mds-myfs-ec-a-6b9d44549c-wjbzx   1/1     Running   0          7m37s
rook-ceph-mds-myfs-ec-b-5c69495555-htvjm   1/1     Running   0          7m36s

Am I deploying this right, or am I missing a step here? From reading the docs I can't see any issue with how I deploy:

kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f toolbox.yaml
kubectl create -f filesystem-ec.yaml 
kubectl create -f csi/cephfs/storageclass-ec.yaml


manjo-git commented Nov 30, 2021

I have tried the following branches and got the same results.

commit a9cda1e (HEAD -> release-1.7, origin/release-1.7)
commit 83f7c2b (HEAD -> master, origin/master, origin/HEAD)

The PVC-pending issue only happens with CephFS + EC; I am able to deploy other configurations without any issues.


manjo-git commented Dec 1, 2021

My previous comment was misleading; creating filesystem-ec actually failed on the cluster.

I did a teardown and rebuild of the Rook Ceph cluster.

Using commit 83f7c2b (HEAD -> master, origin/master, origin/HEAD) of rook-ceph.

kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f toolbox.yaml
kubectl create -f filesystem-ec.yaml 
[rook@rook-ceph-tools-555c879675-b7hjj /]$ ceph status
  cluster:
    id:     1ac2fe59-a2ac-4fa4-ad02-a1f46ba3e5e2
    health: HEALTH_WARN
            clock skew detected on mon.b, mon.c

  services:
    mon: 3 daemons, quorum a,b,c (age 24m)
    mgr: a(active, since 23m)
    osd: 8 osds: 8 up (since 23m), 8 in (since 23m)

  data:
    pools:   3 pools, 395 pgs
    objects: 0 objects, 0 B
    usage:   105 MiB used, 28 TiB / 28 TiB avail
    pgs:     0.253% pgs not active
             394 active+clean
             1   peering

  progress:
    Global Recovery Event (0s)
      [............................]

[rook@rook-ceph-tools-555c879675-b7hjj /]$ ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
ssd    28 TiB  28 TiB  109 MiB   109 MiB          0
TOTAL  28 TiB  28 TiB  109 MiB   109 MiB          0

--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1  230     0 B        0   0 B      0    8.8 TiB
myfs-ec-metadata        2  128     0 B        0   0 B      0    8.8 TiB
myfs-ec-data0           3   32     0 B        0   0 B      0     18 TiB
[rook@rook-ceph-tools-555c879675-b7hjj /]$
[rook@rook-ceph-tools-555c879675-b7hjj /]$ rados df
POOL_NAME              USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS   RD  WR_OPS   WR  USED COMPR  UNDER COMPR
device_health_metrics   0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B
myfs-ec-data0           0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B
myfs-ec-metadata        0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B

total_objects    0
total_used       148 MiB
total_avail      28 TiB
total_space      28 TiB
[rook@rook-ceph-tools-555c879675-b7hjj /]$

Creating filesystem-ec failed

[rook@rook-ceph-tools-555c879675-b7hjj /]$ ceph fs ls
No filesystems enabled
[rook@rook-ceph-tools-555c879675-b7hjj /]$
geeko@mi:~/rook-2/rook/cluster/examples/kubernetes/ceph> kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-ec-a-bc5688f5b-qd2rs    1/1     Running   0          9m26s
rook-ceph-mds-myfs-ec-b-7cd7989fd6-tpkxf   1/1     Running   0          9m26s
geeko@mi:~/rook-2/rook/cluster/examples/kubernetes/ceph>

Looking into the rook-ceph-operator logs for errors, I have pasted a snippet below. Complete logs can be found here: http://dpaste.com//FCAUP9L5S

geeko@mi:~/rook-2/rook/cluster/examples/kubernetes/ceph> kubectl -n rook-ceph get pod -l app=rook-ceph-operator
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-757546f8c7-lbxtj   1/1     Running   0          34m
geeko@mi:~/rook-2/rook/cluster/examples/kubernetes/ceph> kubectl -n rook-ceph logs rook-ceph-operator-757546f8c7-lbxtj

        2021-12-01 14:48:39.825272 I | ceph-spec: adding finalizer "cephfilesystem.ceph.rook.io" on "myfs-ec"
        2021-12-01 14:48:39.847072 W | ceph-file-controller: failed to set filesystem "myfs-ec" status to "". failed to update object "rook-ceph/myfs-ec" status: Operation cannot be fulfilled on cephfilesystems.ceph.rook.io "myfs-ec": the object has been modified; please apply your changes to the latest version and try again
        2021-12-01 14:48:39.851751 I | op-mon: parsing mon endpoints: a=10.43.137.50:6789,b=10.43.253.12:6789,c=10.43.251.240:6789
        2021-12-01 14:48:39.851792 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v16.2.6...
        2021-12-01 14:48:40.272746 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
        2021-12-01 14:48:40.272763 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
        2021-12-01 14:48:42.494671 I | ceph-spec: detected ceph image version: "16.2.6-0 pacific"
        2021-12-01 14:48:42.876716 I | ceph-file-controller: start running mdses for filesystem "myfs-ec"
        2021-12-01 14:48:43.243986 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-a"
        2021-12-01 14:48:43.642372 I | op-mds: setting mds config flags
        2021-12-01 14:48:43.642393 I | op-config: setting "mds.myfs-ec-a"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:48:43.984751 I | op-config: successfully set "mds.myfs-ec-a"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:48:43.993590 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-b"
        2021-12-01 14:48:44.066978 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-node6": the object has been modified; please apply your changes to the latest version and try again
        2021-12-01 14:48:44.076966 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-node6": the object has been modified; please apply your changes to the latest version and try again
        2021-12-01 14:48:44.389915 I | op-mds: setting mds config flags
        2021-12-01 14:48:44.389938 I | op-config: setting "mds.myfs-ec-b"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:48:44.750138 I | op-config: successfully set "mds.myfs-ec-b"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:48:44.836951 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-node8": the object has been modified; please apply your changes to the latest version and try again
        2021-12-01 14:48:45.836183 I | ceph-file-controller: creating filesystem "myfs-ec"
        2021-12-01 14:48:48.778129 I | cephclient: creating replicated pool myfs-ec-metadata succeeded
        2021-12-01 14:48:51.802081 I | cephclient: setting pool property "allow_ec_overwrites" to "true" on pool "myfs-ec-data0"
        2021-12-01 14:48:52.804684 I | cephclient: setting pool property "compression_mode" to "none" on pool "myfs-ec-data0"
        2021-12-01 14:48:53.808917 I | cephclient: creating EC pool myfs-ec-data0 succeeded
        2021-12-01 14:48:53.808944 I | cephclient: setting pool property "allow_ec_overwrites" to "true" on pool "myfs-ec-data0"
        2021-12-01 14:48:54.870894 I | cephclient: creating filesystem "myfs-ec" with metadata pool "myfs-ec-metadata" and data pools [myfs-ec-data0]
        2021-12-01 14:48:56.228939 E | ceph-file-controller: failed to reconcile failed to create filesystem "myfs-ec": failed to create filesystem "myfs-ec": failed enabling ceph fs "myfs-ec": exit status 22
        2021-12-01 14:48:56.239430 I | op-mon: parsing mon endpoints: a=10.43.137.50:6789,b=10.43.253.12:6789,c=10.43.251.240:6789
        2021-12-01 14:48:56.239469 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v16.2.6...
        2021-12-01 14:48:56.619490 I | clusterdisruption-controller: all "host" failure domains: [node4 node6 node8 node9]. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:310} {StateName:unknown Count:10}]"
        2021-12-01 14:49:02.447334 I | ceph-spec: detected ceph image version: "16.2.6-0 pacific"
        2021-12-01 14:49:02.835028 I | ceph-file-controller: start running mdses for filesystem "myfs-ec"
        2021-12-01 14:49:03.205503 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-a"
        2021-12-01 14:49:03.603099 I | op-mds: setting mds config flags
        2021-12-01 14:49:03.603126 I | op-config: setting "mds.myfs-ec-a"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:03.923568 I | op-config: successfully set "mds.myfs-ec-a"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:03.933173 I | op-mds: deployment for mds "rook-ceph-mds-myfs-ec-a" already exists. updating if needed
        2021-12-01 14:49:03.943436 I | op-k8sutil: deployment "rook-ceph-mds-myfs-ec-a" did not change, nothing to update
        2021-12-01 14:49:03.943459 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-b"
        2021-12-01 14:49:04.340567 I | op-mds: setting mds config flags
        2021-12-01 14:49:04.340591 I | op-config: setting "mds.myfs-ec-b"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:04.656124 I | op-config: successfully set "mds.myfs-ec-b"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:04.665473 I | op-mds: deployment for mds "rook-ceph-mds-myfs-ec-b" already exists. updating if needed
        2021-12-01 14:49:04.675902 I | op-k8sutil: deployment "rook-ceph-mds-myfs-ec-b" did not change, nothing to update
        2021-12-01 14:49:05.691518 I | ceph-file-controller: creating filesystem "myfs-ec"
        2021-12-01 14:49:05.691550 I | cephclient: creating filesystem "myfs-ec" with metadata pool "myfs-ec-metadata" and data pools [myfs-ec-data0]
        2021-12-01 14:49:07.125081 E | ceph-file-controller: failed to reconcile failed to create filesystem "myfs-ec": failed to create filesystem "myfs-ec": failed enabling ceph fs "myfs-ec": exit status 22
        2021-12-01 14:49:07.140157 I | op-mon: parsing mon endpoints: a=10.43.137.50:6789,b=10.43.253.12:6789,c=10.43.251.240:6789
        2021-12-01 14:49:07.140205 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v16.2.6...
        2021-12-01 14:49:13.144111 I | ceph-spec: detected ceph image version: "16.2.6-0 pacific"
        2021-12-01 14:49:13.514013 I | ceph-file-controller: start running mdses for filesystem "myfs-ec"
        2021-12-01 14:49:13.879127 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-a"
        2021-12-01 14:49:14.273820 I | op-mds: setting mds config flags
        2021-12-01 14:49:14.273840 I | op-config: setting "mds.myfs-ec-a"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:14.592764 I | op-config: successfully set "mds.myfs-ec-a"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:14.604095 I | op-mds: deployment for mds "rook-ceph-mds-myfs-ec-a" already exists. updating if needed
        2021-12-01 14:49:14.613776 I | op-k8sutil: deployment "rook-ceph-mds-myfs-ec-a" did not change, nothing to update
        2021-12-01 14:49:14.613799 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-b"
        2021-12-01 14:49:15.002930 I | op-mds: setting mds config flags
        2021-12-01 14:49:15.002955 I | op-config: setting "mds.myfs-ec-b"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:15.321423 I | op-config: successfully set "mds.myfs-ec-b"="mds_join_fs"="myfs-ec" option to the mon configuration database
        2021-12-01 14:49:15.331201 I | op-mds: deployment for mds "rook-ceph-mds-myfs-ec-b" already exists. updating if needed
        2021-12-01 14:49:15.343655 I | op-k8sutil: deployment "rook-ceph-mds-myfs-ec-b" did not change, nothing to update
        2021-12-01 14:49:16.439256 I | ceph-file-controller: creating filesystem "myfs-ec"
        2021-12-01 14:49:16.439291 I | cephclient: creating filesystem "myfs-ec" with metadata pool "myfs-ec-metadata" and data pools [myfs-ec-data0]
        2021-12-01 14:49:18.048649 E | ceph-file-controller: failed to reconcile failed to create filesystem "myfs-ec": failed to create filesystem "myfs-ec": failed enabling ceph fs "myfs-ec": exit status 22
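
To see the underlying Ceph error hidden behind the bare "exit status 22", one option is to re-run the same creation by hand from the toolbox; a sketch using the pool names from the log above (Rook's exact invocation may differ):

# Re-running the filesystem creation manually prints Ceph's full error message.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  ceph fs new myfs-ec myfs-ec-metadata myfs-ec-data0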

Out of curiosity, I tried to create a non-EC filesystem as well.

kubectl create -f filesystem.yaml

There were no failures in the non-EC case; the failure only occurs for EC filesystems.

[rook@rook-ceph-tools-555c879675-b7hjj /]$ ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-data0 ]
[rook@rook-ceph-tools-555c879675-b7hjj /]$
[rook@rook-ceph-tools-555c879675-b7hjj /]$ ceph status
  cluster:
    id:     1ac2fe59-a2ac-4fa4-ad02-a1f46ba3e5e2
    health: HEALTH_WARN
            clock skew detected on mon.b, mon.c

  services:
    mon: 3 daemons, quorum a,b,c (age 70m)
    mgr: a(active, since 70m)
    mds: 1/1 daemons up, 2 standby, 1 hot standby
    osd: 8 osds: 8 up (since 69m), 8 in (since 70m)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 512 pgs
    objects: 22 objects, 2.3 KiB
    usage:   256 MiB used, 28 TiB / 28 TiB avail
    pgs:     512 active+clean

  io:
    client:   852 B/s rd, 1 op/s rd, 0 op/s wr

[rook@rook-ceph-tools-555c879675-b7hjj /]$
[rook@rook-ceph-tools-555c879675-b7hjj /]$ ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
ssd    28 TiB  28 TiB  256 MiB   256 MiB          0
TOTAL  28 TiB  28 TiB  256 MiB   256 MiB          0

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS    USED  %USED  MAX AVAIL
device_health_metrics   1   64      0 B        0     0 B      0    8.8 TiB
myfs-ec-metadata        2  128      0 B        0     0 B      0    8.8 TiB
myfs-ec-data0           3   32      0 B        0     0 B      0     18 TiB
myfs-metadata           4  256  2.3 KiB       22  96 KiB      0    8.8 TiB
myfs-data0              5   32      0 B        0     0 B      0    8.8 TiB
[rook@rook-ceph-tools-555c879675-b7hjj /]$

Creating a PVC on the non-EC filesystem/storage class:

kubectl create -f csi/cephfs/storageclass.yaml
kubectl create -f csi/cephfs/pvc.yaml
geeko@mi:~/rook-2/rook/cluster/examples/kubernetes/ceph> kubectl get pvc -n rook-ceph
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-c961b7e7-2381-4d2a-bc83-4361f8479143   1Gi        RWO            rook-cephfs    58s
geeko@mi:~/rook-2/rook/cluster/examples/kubernetes/ceph>


travisn commented Dec 1, 2021

For an EC filesystem, try the updates found in this PR: https://github.com/rook/rook/pull/8452/files
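
The gist of those updates, as I read the PR, is that the filesystem's default (first) data pool should be replicated, with the erasure-coded pool added as an additional data pool that the storage class's "pool" parameter then points at. A rough sketch of such a spec, with assumed pool sizes and a 2+1 EC profile (the PR's filesystem-ec.yaml and storageclass-ec.yaml are the authoritative examples):

# Sketch only: field values here are assumptions, not taken from the PR verbatim.
cat <<EOF | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs-ec
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    # First (default) data pool is replicated.
    - replicated:
        size: 3
    # EC pool added as a second data pool; the CephFS storage class should
    # reference this pool (it would end up named myfs-ec-data1 here).
    - erasureCoded:
        dataChunks: 2
        codingChunks: 1
  metadataServer:
    activeCount: 1
    activeStandby: true
EOF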

manjo-git (Author) commented:

I tore down and recreated the cluster with the patches from https://github.com/rook/rook/pull/8452/files applied.

kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f toolbox.yaml
kubectl create -f filesystem-ec.yaml 

The filesystem was created successfully

[rook@rook-ceph-tools-555c879675-gv7sh /]$ ceph fs ls
name: myfs-ec, metadata pool: myfs-ec-metadata, data pools: [myfs-ec-data0 myfs-ec-data1 ]
[rook@rook-ceph-tools-555c879675-gv7sh /]$

[rook@rook-ceph-tools-555c879675-gv7sh /]$ ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
ssd    28 TiB  28 TiB  126 MiB   126 MiB          0
TOTAL  28 TiB  28 TiB  126 MiB   126 MiB          0

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS    USED  %USED  MAX AVAIL
device_health_metrics   1  223      0 B        0     0 B      0    8.8 TiB
myfs-ec-metadata        2  256  2.3 KiB       22  96 KiB      0    8.8 TiB
myfs-ec-data0           3   32      0 B        0     0 B      0    8.8 TiB
myfs-ec-data1           4   32      0 B        0     0 B      0     18 TiB
[rook@rook-ceph-tools-555c879675-gv7sh /]$

[rook@rook-ceph-tools-555c879675-gv7sh /]$ ceph status
  cluster:
    id:     2a84152e-be29-4ace-a5ca-01391fdd9701
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive, 1 pg peering

  services:
    mon: 3 daemons, quorum a,b,c (age 15m)
    mgr: a(active, since 14m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 8 osds: 8 up (since 14m), 8 in (since 14m)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 532 pgs
    objects: 22 objects, 2.3 KiB
    usage:   164 MiB used, 28 TiB / 28 TiB avail
    pgs:     0.376% pgs not active
             530 active+clean
             2   peering

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr

  progress:

[rook@rook-ceph-tools-555c879675-gv7sh /]$

Successfully created storageclass-ec:

kubectl create -f csi/cephfs/storageclass-ec.yaml

geeko@mi:~/rook-3/rook/cluster/examples/kubernetes/ceph> kubectl get sc -n rook-ceph
NAME                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io              Delete          Immediate           true                   93d
rook-cephfs          rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   50s
geeko@mi:~/rook-3/rook/cluster/examples/kubernetes/ceph>

Successfully created a PVC with rook-cephfs:

geeko@mi:~/rook-3/rook/cluster/examples/kubernetes/ceph> kubectl create -f csi/cephfs/pvc.yaml
persistentvolumeclaim/cephfs-pvc created
geeko@mi:~/rook-3/rook/cluster/examples/kubernetes/ceph> kubectl get pvc -n rook-ceph
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-00245dd0-5550-4ab4-bc19-41afa25ebf50   1Gi        RWO            rook-cephfs    6s
geeko@mi:~/rook-3/rook/cluster/examples/kubernetes/ceph>


travisn commented Dec 1, 2021

@manjo-git Great to see it's working for you now. Would you mind commenting on #8452 that those updates fixed this scenario for you?

BlaineEXE (Member) commented:

Closing, since this seems to have been resolved by #8452.


skrobul commented May 18, 2023

For anyone getting here through a search engine: the "fix" mentioned above worked for me, but only when I used a new CephFilesystem name.
