
Unable to create erasure-coded cephfs, presumably due to ceph-bug #1785465 #7319

Closed
davralin opened this issue Feb 26, 2021 · 7 comments · Fixed by #7320

davralin commented Feb 26, 2021

Is this a bug report or feature request?

  • Bug Report

Background:
I originally created my bulk-storage cephfs pool with replication, then decided to recreate it with erasure-coded pools instead to increase storage capacity.
Deleted all PVCs from cephfs, then kubectl delete -f filesystem.yaml; kubectl delete -f csi/cephfs/storageclass.yaml.
kubectl apply -f csi/cephfs/storageclass.yaml; kubectl apply -f filesystem-ec.yaml (with minor tweaks to deviceClass and chunks).
PVCs didn't work. Further investigation last night led me to the Ceph bug mentioned in the title, so I decided to call it quits and return today to create an issue.
Unfortunately, due to a power issue, I have no idea where I got that log output from, and I can't seem to find any trace of it either. PVC creation still doesn't work, though.

Deleting and recreating everything with replicated pools makes PVC creation work again.

Deviation from expected behavior:
Unable to create an erasure-coded cephfs.
Perhaps the creation is missing a --force, as required by https://bugzilla.redhat.com/show_bug.cgi?id=1785465
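
For reference, if the missing --force is indeed the problem, the manual equivalent would look roughly like the commands below (a sketch only, using the pool and filesystem names from this report; an EC data pool also needs overwrites enabled before it can back a cephfs):

ceph osd pool set myfs-ec-data0 allow_ec_overwrites true
ceph fs new myfs-ec myfs-ec-metadata myfs-ec-data0 --force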

Expected behavior:
kubectl apply -f csi/cephfs/storageclass.yaml; kubectl apply -f filesystem-ec.yaml
Then be able to provision PVCs.

How to reproduce it (minimal and precise):

kubectl apply -f csi/cephfs/storageclass.yaml; kubectl apply -f filesystem-ec.yaml
Then create a PVC; kubectl get events in that namespace reveals failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
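
For example, to list the relevant events in that namespace (sorted by time for readability; the --sort-by flag is optional):

kubectl -n <namespace> get events --sort-by=.metadata.creationTimestamp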

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary
    Diff with upstream filesystem-ec.yaml:
diff --git a/filesystem-ec.yaml b/filesystem-ec.yaml
index c5d4b96..ca436ee 100644
--- a/filesystem-ec.yaml
+++ b/filesystem-ec.yaml
@@ -15,17 +15,15 @@ spec:
     replicated:
       # You need at least three OSDs on different nodes for this config to work
       size: 3
-      deviceClass: ssd
   # The list of data pool specs
   dataPools:
     # You need at least three `bluestore` OSDs on different nodes for this config to work
     - erasureCoded:
         dataChunks: 2
-        codingChunks: 2
-        deviceClass: hdd
+        codingChunks: 1
       # Inline compression mode for the data pool
       parameters:
-        compression_mode: aggressive
+        compression_mode: none
   # Whether to preserve filesystem after CephFilesystem CRD deletion
   preserveFilesystemOnDelete: true
   # The metadata service (mds) configuration
  • Operator's logs, if necessary
2021-02-26 10:04:35.066466 I | ceph-spec: adding finalizer "cephfilesystem.ceph.rook.io" on "myfs-ec"
2021-02-26 10:04:35.092474 E | ceph-file-controller: failed to set filesystem "myfs-ec" status to "Created". failed to update object "myfs-ec" status: Operation cannot be fulfilled on cephfilesystems.ceph.rook.io "myfs-ec": the object has been modified; please apply your changes to the latest version and try again
2021-02-26 10:04:35.101111 I | op-mon: parsing mon endpoints: ce=10.110.44.132:6789,ci=10.110.247.34:6789,ch=10.102.71.124:6789
2021-02-26 10:04:36.268529 I | ceph-file-controller: Creating filesystem myfs-ec
2021-02-26 10:04:36.268560 I | cephclient: creating filesystem "myfs-ec" with metadata pool "myfs-ec-metadata" and data pools [myfs-ec-data0]
2021-02-26 10:04:36.268566 I | cephclient: Filesystem "myfs-ec" will reuse pre-existing pools
2021-02-26 10:04:36.618207 I | ceph-file-controller: created filesystem myfs-ec on 1 data pool(s) and metadata pool myfs-ec-metadata
2021-02-26 10:04:36.917449 I | cephclient: setting allow_standby_replay for filesystem "myfs-ec"
2021-02-26 10:04:37.946077 I | ceph-file-controller: start running mdses for filesystem "myfs-ec"
2021-02-26 10:04:37.946104 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-a"
2021-02-26 10:04:38.305629 I | op-mds: setting mds config flags
2021-02-26 10:04:38.617182 I | cephclient: getting or creating ceph auth key "mds.myfs-ec-b"
2021-02-26 10:04:38.815772 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-nynaeve": the object has been modified; please apply your changes to the latest version and try again
2021-02-26 10:04:39.920985 I | op-mds: setting mds config flags
2021-02-26 10:04:40.670483 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-min": the object has been modified; please apply your changes to the latest version and try again
  • Crashing pod(s) logs, if necessary
kubectl -n rook-ceph logs csi-cephfsplugin-provisioner-8658f67749-mnwkr -c csi-provisioner
I0226 10:52:10.748658       1 controller.go:1317] provision "syncthing/pioneer" class "rook-cephfs": started
I0226 10:52:10.748787       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"syncthing", Name:"pioneer", UID:"bb8b5f5a-0b3c-4d10-a08f-d48f3b755841", APIVersion:"v1", ResourceVersion:"49576303", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "syncthing/pioneer"
I0226 10:52:10.759469       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"syncthing", Name:"pioneer", UID:"bb8b5f5a-0b3c-4d10-a08f-d48f3b755841", APIVersion:"v1", ResourceVersion:"49576303", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I0226 10:54:18.759511       1 controller.go:1317] provision "syncthing/pioneer" class "rook-cephfs": started
I0226 10:54:18.759877       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"syncthing", Name:"pioneer", UID:"bb8b5f5a-0b3c-4d10-a08f-d48f3b755841", APIVersion:"v1", ResourceVersion:"49576303", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "syncthing/pioneer"
I0226 10:54:18.772535       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"syncthing", Name:"pioneer", UID:"bb8b5f5a-0b3c-4d10-a08f-d48f3b755841", APIVersion:"v1", ResourceVersion:"49576303", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I0226 10:58:34.772643       1 controller.go:1317] provision "syncthing/pioneer" class "rook-cephfs": started
I0226 10:58:34.772891       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"syncthing", Name:"pioneer", UID:"bb8b5f5a-0b3c-4d10-a08f-d48f3b755841", APIVersion:"v1", ResourceVersion:"49576303", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "syncthing/pioneer"
I0226 10:58:34.884502       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"syncthing", Name:"pioneer", UID:"bb8b5f5a-0b3c-4d10-a08f-d48f3b755841", APIVersion:"v1", ResourceVersion:"49576303", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
  • Events from namespace with PVC:
0s          Normal    Provisioning             persistentvolumeclaim/pioneer   External provisioner is provisioning volume for claim "syncthing/pioneer"
0s          Warning   ProvisioningFailed       persistentvolumeclaim/pioneer   failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found


Environment:

  • OS (e.g. from /etc/os-release):
    PRETTY_NAME="Debian GNU/Linux 10 (buster)"
  • Kernel (e.g. uname -a):
    Linux Node1 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
  • Cloud provider or hardware configuration:
    Bunch of whitebox homelab-on-prem.
  • Rook version (use rook version inside of a Rook Pod):
    rook: v1.5.7
    go: go1.13.8
  • Storage backend version (e.g. for ceph do ceph -v):
    ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
    kubernetes
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):

pre-apply:

  cluster:
    id:     3393bacd-249f-40e9-afc2-80a63b0515cf
    health: HEALTH_WARN
            2 daemons have recently crashed

  services:
    mon: 3 daemons, quorum ce,ch,ci (age 4h)
    mgr: a(active, since 12h)
    osd: 15 osds: 15 up (since 5h), 13 in (since 10h)

  data:
    pools:   4 pools, 97 pgs
    objects: 131k objects, 505 GiB
    usage:   1.5 TiB used, 53 TiB / 54 TiB avail
    pgs:     97 active+clean

  io:
    client:   156 KiB/s wr, 0 op/s rd, 16 op/s wr

post-apply:

  cluster:
    id:     3393bacd-249f-40e9-afc2-80a63b0515cf
    health: HEALTH_WARN
            2 daemons have recently crashed
 
  services:
    mon: 3 daemons, quorum ce,ch,ci (age 5h)
    mgr: a(active, since 12h)
    mds: myfs-ec:1 {0=myfs-ec-a=up:active} 1 up:standby-replay
    osd: 15 osds: 15 up (since 6h), 13 in (since 10h)
 
  task status:
 
  data:
    pools:   4 pools, 97 pgs
    objects: 131.00k objects, 505 GiB
    usage:   1.5 TiB used, 53 TiB / 54 TiB avail
    pgs:     97 active+clean
 
  io:
    client:   2.8 KiB/s rd, 154 KiB/s wr, 1 op/s rd, 18 op/s wr

davralin added the bug label Feb 26, 2021
davralin (Author)

I think I found the culprit here; it might not be a bug after all - working on confirming. :-)


davralin commented Feb 26, 2021

So, this was almost all me.

index 6e4e2cb..8af07c8 100644
--- a/rook/storageclass.yaml
+++ b/rook/storageclass.yaml
@@ -8,11 +8,11 @@ parameters:
   clusterID: rook-ceph # namespace:cluster
 
   # CephFS filesystem name into which the volume shall be created
-  fsName: myfs
+  fsName: myfs-ec
 
   # Ceph pool into which the volume shall be created
   # Required for provisionVolume: "true"
-  pool: myfs-data0
+  pool: myfs-ec-data0
 
   # Root path of an existing CephFS volume
   # Required for provisionVolume: "false"

Wrong fsName/pool in the cephfs-storageclass.

I also encountered #4006, but that was easily fixed by creating the subvolumes (remember to replace cephfs with myfs-ec).
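
If, as in #4006, the missing piece is the CSI subvolume group, creating it from the toolbox looks roughly like this (a sketch; csi is the group name the CSI driver uses by default):

ceph fs subvolumegroup create myfs-ec csi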

That being said, would it make sense to have a separate storageclass-ec.yaml in the repo?

Since it's basically just this diff, I can create a PR for it if wanted.
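
A minimal sketch of such a storageclass-ec.yaml, assuming the same CSI secret parameters as the existing csi/cephfs/storageclass.yaml and only changing fsName and pool:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph # namespace:cluster
  fsName: myfs-ec
  pool: myfs-ec-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true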

Consider the "bug" rejected and a question about that PR being wanted or not as the only thing needed here.


travisn (Member) commented Feb 27, 2021

@davralin Good to hear it's working now. Yes, sounds good if you want to open a PR for the cephfs storageclass-ec.yaml. We have one already for rbd, so an example would help for cephfs as well. Thanks!

subhamkrai pushed a commit to subhamkrai/rook that referenced this issue Oct 1, 2021
**Description of your changes:**
Add dedicated documentation for cephfs-storageclass using erasure-coding.

**Which issue is resolved by this Pull Request:**
Resolves rook#7319

**Checklist:**

- [x] **Commit Message Formatting**: Commit titles and messages follow guidelines in the [developer guide](https://rook.io/docs/rook/master/development-flow.html#commit-structure).
- [x] **Skip Tests for Docs**: Add the flag for skipping the build if this is only a documentation change. See [here](https://github.com/rook/rook/blob/master/INSTALL.md#skip-ci) for the flag.
- [ ] **Skip Unrelated Tests**: Add a flag to run tests for a specific storage provider. See [test options](https://github.com/rook/rook/blob/master/INSTALL.md#test-storage-provider).
- [x] Reviewed the developer guide on [Submitting a Pull Request](https://rook.io/docs/rook/master/development-flow.html#submitting-a-pull-request)
- [x] Documentation has been updated, if necessary.
- [ ] Unit tests have been added, if necessary.
- [ ] Integration tests have been added, if necessary.
- [ ] Pending release notes updated with breaking and/or notable changes, if necessary.
- [ ] Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
- [ ] Code generation (`make codegen`) has been run to update object specifications, if necessary.

Signed-off-by: davralin <git@davralin.work>
@manjo-git

I am using the master branch and I am running into the same issue where the PVC is always Pending. Here are some logs based on the common-issues document. All my OSDs are bluestore:

>ceph osd metadata $ID | grep -e id -e hostname -e osd_objectstore
        "id": 0,
        "container_hostname": "rook-ceph-osd-0-9d8d6b89-jt7br",
        "device_ids": "nvme8n1=Micron_7300_MTFDHBE3T8TDF_2017288AEC0E",
        "hostname": "node6",
        "osd_objectstore": "bluestore",
        "id": 1,

Here is how I deployed my cluster with cephfs-ec.

kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f toolbox.yaml
kubectl create -f csi/cephfs/storageclass-ec.yaml
kubectl create -f filesystem-ec.yaml 

I created a PVC, but it is always Pending:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: rook-ceph
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Ti
  storageClassName: rook-cephfs
$ kubectl create -f mypvc.yaml 

$ kubectl get pvc -n rook-ceph
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Pending                                      rook-cephfs    6s
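
To see why it is stuck, describing the claim surfaces the same provisioning events, e.g.:

$ kubectl -n rook-ceph describe pvc my-pvc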

Logs based on the common-issues document:

$ kubectl get cephcluster -A
NAMESPACE   NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE   MESSAGE                        HEALTH      EXTERNAL
rook-ceph   rook-ceph   /var/lib/rook     3          40m   Ready   Cluster created successfully   HEALTH_OK

$ kubectl get sc -A
NAME                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io              Delete          Immediate           true                   86d
rook-cephfs          rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   2m27s

$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-ec-a-64d488bb8f-wd9rl   1/1     Running   0          3m1s
rook-ceph-mds-myfs-ec-b-5bbd7d7875-gchb9   1/1     Running   0          3m

$ kubectl -n rook-ceph get pod -l app=csi-cephfsplugin-provisioner
NAME                                            READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-provisioner-54fdf994d6-8kb2g   6/6     Running   0          44m
csi-cephfsplugin-provisioner-54fdf994d6-mnbk7   6/6     Running   0          44m

$ kubectl -n rook-ceph logs csi-cephfsplugin-provisioner-54fdf994d6-8kb2g csi-provisioner
I1124 15:52:19.671409       1 csi-provisioner.go:138] Version: v3.0.0
I1124 15:52:19.671461       1 csi-provisioner.go:161] Building kube configs for running in cluster...
I1124 15:52:20.674332       1 common.go:111] Probing CSI driver for readiness
I1124 15:52:20.677640       1 csi-provisioner.go:277] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1124 15:52:20.678851       1 leaderelection.go:248] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
E1124 15:52:20.852911       1 leaderelection.go:334] error initially creating leader election record: leases.coordination.k8s.io "rook-ceph-cephfs-csi-ceph-com" already exists


Madhu-1 (Member) commented Nov 25, 2021

Increase the log level to 5

# Set logging level for csi containers.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
# CSI_LOG_LEVEL: "0"
and paste the csi-cephfsplugin container logs from the csi-cephfsplugin-provisioner pod.
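
One way to set that, assuming the operator settings are kept in the rook-ceph-operator-config ConfigMap (a sketch; editing operator.yaml and re-applying it works as well):

kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
  --type merge -p '{"data":{"CSI_LOG_LEVEL":"5"}}'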

@manjo-git

$ kubectl get pvc -n rook-ceph
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
arm-pvc   Pending                                      rook-cephfs    4m17s

Here are the logs for csi-cephfsplugin-provisioner with CSI_LOG_LEVEL: "5" set.

$ kubectl -n rook-ceph logs csi-cephfsplugin-provisioner-6847ff9dc-gptmd csi-provisioner

I1129 15:10:05.112192       1 feature_gate.go:245] feature gates: &{map[]}
I1129 15:10:05.112242       1 csi-provisioner.go:138] Version: v3.0.0
I1129 15:10:05.112260       1 csi-provisioner.go:161] Building kube configs for running in cluster...
I1129 15:10:05.114178       1 connection.go:154] Connecting to unix:///csi/csi-provisioner.sock
I1129 15:10:06.115832       1 common.go:111] Probing CSI driver for readiness
I1129 15:10:06.115853       1 connection.go:183] GRPC call: /csi.v1.Identity/Probe
I1129 15:10:06.115860       1 connection.go:184] GRPC request: {}
I1129 15:10:06.118548       1 connection.go:186] GRPC response: {}
I1129 15:10:06.118611       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.118620       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I1129 15:10:06.118624       1 connection.go:184] GRPC request: {}
I1129 15:10:06.118961       1 connection.go:186] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"v3.4.0"}
I1129 15:10:06.119021       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.119033       1 csi-provisioner.go:205] Detected CSI driver rook-ceph.cephfs.csi.ceph.com
I1129 15:10:06.119043       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I1129 15:10:06.119048       1 connection.go:184] GRPC request: {}
I1129 15:10:06.119517       1 connection.go:186] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]}
I1129 15:10:06.119657       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.119666       1 connection.go:183] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I1129 15:10:06.119671       1 connection.go:184] GRPC request: {}
I1129 15:10:06.120052       1 connection.go:186] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":7}}}]}
I1129 15:10:06.120160       1 connection.go:187] GRPC error: <nil>
I1129 15:10:06.120475       1 csi-provisioner.go:277] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1129 15:10:06.120836       1 controller.go:731] Using saving PVs to API server in background
I1129 15:10:06.122687       1 leaderelection.go:248] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
I1129 15:10:06.205935       1 leaderelection.go:258] successfully acquired lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:06.205972       1 leader_election.go:212] new leader detected, current leader: csi-cephfsplugin-provisioner-6847ff9dc-gptmd
I1129 15:10:06.205997       1 leader_election.go:205] became leader, starting
I1129 15:10:06.206217       1 reflector.go:219] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.206232       1 reflector.go:255] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.206271       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.206281       1 reflector.go:255] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1129 15:10:06.212015       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:06.307001       1 shared_informer.go:270] caches populated
I1129 15:10:06.307019       1 shared_informer.go:270] caches populated
I1129 15:10:06.307033       1 controller.go:810] Starting provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-6847ff9dc-gptmd_f58041f5-15b4-44ac-8ab8-ddc508636297!
I1129 15:10:06.307045       1 clone_controller.go:66] Starting CloningProtection controller
I1129 15:10:06.307059       1 volume_store.go:97] Starting save volume queue
I1129 15:10:06.307086       1 clone_controller.go:82] Started CloningProtection controller
I1129 15:10:06.307200       1 reflector.go:219] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:844
I1129 15:10:06.307210       1 reflector.go:255] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:844
I1129 15:10:06.307284       1 reflector.go:219] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:847
I1129 15:10:06.307295       1 reflector.go:255] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:847
I1129 15:10:06.407700       1 shared_informer.go:270] caches populated
I1129 15:10:06.407884       1 controller.go:859] Started provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-6847ff9dc-gptmd_f58041f5-15b4-44ac-8ab8-ddc508636297!
I1129 15:10:11.220177       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:16.228032       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:21.236364       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:26.244002       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:31.253112       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:36.260629       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:41.269620       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:46.278979       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:51.288127       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:10:56.296625       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:01.304077       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:06.313528       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:11.323184       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:16.331819       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:21.340674       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:26.349500       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:31.359543       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:36.366595       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:41.377380       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:46.385444       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:51.395278       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:11:56.402886       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:01.410429       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:06.418248       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:11.425209       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:16.431710       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:21.438808       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:27.029099       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:32.035140       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:37.074098       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:42.122497       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:47.127897       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:52.134509       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:12:57.140792       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:02.146455       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:07.151829       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:12.158472       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:17.165428       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:22.172716       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:27.183504       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:32.190275       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:37.203233       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:42.213463       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:47.225425       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:52.232414       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:13:57.241947       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:02.248688       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:07.258989       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:12.273209       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:17.282176       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:22.290665       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:27.296811       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:32.304717       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:37.314932       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:42.323891       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:47.332781       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:52.341015       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:14:57.350658       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:02.358380       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:07.365093       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:12.372594       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:17.380015       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:22.386613       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:27.393358       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:32.400349       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:37.408549       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:42.416563       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:47.424464       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:52.434904       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:15:57.443991       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:02.450697       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:07.458156       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:12.466824       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:17.478043       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:22.490662       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:27.501008       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:32.512715       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:37.519697       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:42.531995       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:47.538689       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:52.549634       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:16:57.560041       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:02.571539       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:07.580811       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:12.590805       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:13.310324       1 reflector.go:535] sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:847: Watch close - *v1.StorageClass total 0 items received
I1129 15:17:17.210638       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received
I1129 15:17:17.597124       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:22.608514       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:27.616011       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:32.624731       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:37.631723       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:42.642401       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:47.651943       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:52.663645       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:17:57.670373       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:02.679026       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:07.685976       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:12.695909       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:17.702979       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:22.713369       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:25.209254       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 0 items received
I1129 15:18:27.719489       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:32.320232       1 reflector.go:535] sigs.k8s.io/sig-storage-lib-external-provisioner/v7/controller/controller.go:844: Watch close - *v1.PersistentVolume total 0 items received
I1129 15:18:32.729821       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:37.737487       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:42.745610       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:47.755636       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:52.764050       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:18:57.770842       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:02.778736       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:07.786408       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:12.794784       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:17.801650       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:22.809195       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:27.815807       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:32.823911       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:37.831505       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:42.838828       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:47.845778       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:52.855403       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:19:57.863449       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:02.915588       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:07.921954       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:12.979763       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:17.986244       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:23.055052       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:28.061861       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:33.118410       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:38.124666       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:43.173728       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:48.181393       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:53.247677       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:20:58.254766       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:03.311160       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:08.319394       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:13.375283       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:18.385423       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:23.437170       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:28.445826       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:33.509920       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:34.810734       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:34.810922       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:34.814259       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:34.814275       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:34.830267       1 connection.go:186] GRPC response: {}
I1129 15:21:34.830347       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:34.830375       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:34.830409       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:34.830419       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 0
E1129 15:21:34.830438       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:34.830470       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.331489       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:35.331645       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:35.338861       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:35.338873       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:35.340830       1 connection.go:186] GRPC response: {}
I1129 15:21:35.340857       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.340877       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.340904       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:35.340912       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 1
E1129 15:21:35.340928       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:35.340974       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.341033       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:36.341184       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:36.346746       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:36.346758       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:36.348416       1 connection.go:186] GRPC response: {}
I1129 15:21:36.348440       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.348468       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.348502       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:36.348513       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 2
E1129 15:21:36.348532       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:36.348553       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.348969       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:38.349125       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:38.353320       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:38.353343       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:38.355092       1 connection.go:186] GRPC response: {}
I1129 15:21:38.355121       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.355142       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.355171       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:38.355181       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 3
E1129 15:21:38.355202       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.355220       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:38.517108       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:42.355882       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:42.356045       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:42.359627       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:42.359639       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:42.361467       1 connection.go:186] GRPC response: {}
I1129 15:21:42.361501       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:42.361534       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:42.361576       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:42.361590       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 4
E1129 15:21:42.361616       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:42.361603       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:43.572854       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:48.579731       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:50.362041       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:21:50.362218       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:21:50.366506       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:21:50.366519       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:21:50.368079       1 connection.go:186] GRPC response: {}
I1129 15:21:50.368108       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:50.368135       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:50.368178       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:21:50.368188       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 5
E1129 15:21:50.368210       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:50.368230       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:21:53.638259       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:21:58.647642       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:03.705820       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:06.369006       1 controller.go:1279] provision "rook-ceph/arm-pvc" class "rook-cephfs": started
I1129 15:22:06.369169       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/arm-pvc"
I1129 15:22:06.373185       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I1129 15:22:06.373196       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":8796093022208},"name":"pvc-c4da391c-90a5-4c47-afa6-82553a07f6f7","parameters":{"clusterID":"rook-ceph","fsName":"myfs-ec","pool":"myfs-ec-data0"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1129 15:22:06.374945       1 connection.go:186] GRPC response: {}
I1129 15:22:06.374970       1 connection.go:187] GRPC error: rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:06.374990       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:06.375013       1 controller.go:1074] Final error received, removing PVC c4da391c-90a5-4c47-afa6-82553a07f6f7 from claims in progress
W1129 15:22:06.375022       1 controller.go:933] Retrying syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7", failure 6
E1129 15:22:06.375037       1 controller.go:956] error syncing claim "c4da391c-90a5-4c47-afa6-82553a07f6f7": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:06.375055       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"arm-pvc", UID:"c4da391c-90a5-4c47-afa6-82553a07f6f7", APIVersion:"v1", ResourceVersion:"35092380", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
I1129 15:22:08.715562       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:13.763563       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:18.772260       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:23.835831       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1129 15:22:28.845865       1 leaderelection.go:278] successfully renewed lease rook-ceph/rook-ceph-cephfs-csi-ceph-com


manjo-git commented Nov 29, 2021

@Madhu-1 could we use #9244 to track this issue? I had opened that bug report after I posted my issue here without realizing that this one (#7319) had been closed a while back.
