
about metrics #9

Open
tcoupin opened this issue Jul 15, 2020 · 5 comments

Comments


tcoupin commented Jul 15, 2020

Hi,

For some PVs the results are strange: I see the whole storage capacity for the last three PVs instead of the PV capacity.
The first one uses the rook-ceph-rbd StorageClass, and the last three use rook-cephfs. Do you think this is related to df-pv or to CSI?

 NAMESPACE  PVC NAME                                   PV NAME                                   POD NAME                                                         VOLUME MOUNT NAME  SIZE    USED   AVAILABLE  %USED  IUSED  IFREE                 %IUSED
 sandbox    data-drive-preprod-mariadb-galera-0        pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896  drive-preprod-mariadb-galera-0                                   data               1014Mi  695Mi  318Mi      68.63  304    523984                0.06
 sandbox    data-drive-preprod-mariadb-tooling-backup  pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b  drive-preprod-mariadb-tooling-restore-shell-c5c585478-wxk5m      mariadb            21Gi    500Mi  21Gi       2.27   33290  18446744073709551615  100.00
 sandbox    drive-preprod-xxx-nextcloud-ncdata         pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0  bckp-nextcloud-preprod-xxx-basic-volume-bckp-restore-shellcfgjx  source             21Gi    500Mi  21Gi       2.27   33290  18446744073709551615  100.00
 sandbox    drive-preprod-xxx-nextcloud-ncdata         pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0  drive-preprod-xxx-nextcloud-7b877b4d78-w6cg8                     data               21Gi    500Mi  21Gi       2.27   33290  18446744073709551615  100.00
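
For context on where these columns come from (my understanding of the mechanism, not verified against this cluster): kubelet gathers per-volume filesystem metrics with a statfs-style call on the volume's mount path, and df-pv displays those numbers. For a CephFS mount without an enforced quota, statfs returns filesystem-wide totals, which would explain a 21Gi SIZE/AVAILABLE on a 500Mi or 1Gi PVC. A minimal sketch of that calculation, with a hypothetical mount path:

// statfs_sketch.go: reproduce the per-volume numbers kubelet reports
// (capacity, used, available, inodes) by calling statfs on a mount path.
// The path below is a placeholder; point it at a real volume mount on the node.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical path; the real one lives under /var/lib/kubelet on the node.
	path := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount"

	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		panic(err)
	}

	capacity := st.Blocks * uint64(st.Bsize)          // SIZE
	used := (st.Blocks - st.Bfree) * uint64(st.Bsize) // USED
	available := st.Bavail * uint64(st.Bsize)         // AVAILABLE

	fmt.Printf("size=%d used=%d available=%d inodes=%d inodesFree=%d inodesUsed=%d\n",
		capacity, used, available, st.Files, st.Ffree, st.Files-st.Ffree)
}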
tcoupin (Author) commented Jul 15, 2020

$ k get pvc
NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-drive-preprod-mariadb-galera-0         Bound    pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896   1Gi        RWO            rook-ceph-block   35h
data-drive-preprod-mariadb-tooling-backup   Bound    pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b   500Mi      RWX            rook-cephfs       57d
drive-preprod-ird-nextcloud-ncdata          Bound    pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0   1Gi        RWX            rook-cephfs       57d

$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                               STORAGECLASS      REASON   AGE
pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0   1Gi        RWX            Delete           Bound    sandbox/drive-preprod-xxx-nextcloud-ncdata          rook-cephfs                57d
pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896   1Gi        RWO            Retain           Bound    sandbox/data-drive-preprod-mariadb-galera-0         rook-ceph-block            5d22h
pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b   500Mi      RWX            Delete           Bound    sandbox/data-drive-preprod-mariadb-tooling-backup   rook-cephfs                57d

yashbhutwala (Owner) commented Jul 15, 2020

@tcoupin thanks for reporting the ticket 👏! Can you give more information by running kubectl df-pv -v trace and then searching for the specific PVs? I'm interested in inspecting the JSON returned from the node that has those pods (obviously, strip any PII first if you need to).
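
For anyone who wants to look at the same JSON by hand: as far as I understand it, df-pv reads each node's kubelet stats summary through the API server proxy, and every pod volume in that summary carries capacityBytes, usedBytes, availableBytes, inodes, inodesFree and inodesUsed plus a pvcRef. A rough client-go sketch, where the kubeconfig path and node name are placeholders:

// fetch_summary.go: dump the kubelet stats summary for one node via the
// API server proxy; the per-volume entries in it are what df-pv displays.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholders: adjust the kubeconfig path and node name for your cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	raw, err := clientset.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("worker-node-1").
		SubResource("proxy").
		Suffix("stats/summary").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}

	// The JSON contains pods[].volume[] entries with capacityBytes, usedBytes,
	// availableBytes, inodes, inodesFree, inodesUsed and a pvcRef.
	fmt.Println(string(raw))
}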

tcoupin (Author) commented Jul 17, 2020

With trace log level enabled: df.log

yashbhutwala (Owner) commented

@tcoupin it seems like you hit an auth error, unrelated to this, so these logs are not helpful. Can you try to reproduce the exact output mentioned above and send the trace logs from that run?

lee-harmonic commented

It looks to me like an issue with the way CephFS reports inodes. See https://tracker.ceph.com/issues/24849

It might now be solved with a newer Ceph version.
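
One detail that supports this: 18446744073709551615 is 2^64 - 1, i.e. what a -1 sentinel looks like when read into an unsigned 64-bit inodes-free field. If the used-inode percentage is then computed as iused / (iused + ifree) in uint64 arithmetic, the sum wraps around and the result lands at roughly 100%, which matches the table in the original report. A small illustration (not df-pv's actual code):

// inode_sentinel.go: show how a -1 "free inodes" value from CephFS, read as
// uint64, wraps a naive percentage calculation around to ~100%.
package main

import (
	"fmt"
	"math"
)

func main() {
	var inodesUsed uint64 = 33290          // IUSED value from the report
	var inodesFree uint64 = math.MaxUint64 // 18446744073709551615, i.e. -1 as uint64

	total := inodesUsed + inodesFree // unsigned overflow: wraps around to 33289
	percent := float64(inodesUsed) / float64(total) * 100

	fmt.Printf("total=%d percentUsed=%.2f\n", total, percent) // total=33289 percentUsed=100.00
}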
