CephPoolQuotaBytesNearExhaustion and CephPoolQuotaBytesCriticallyExhausted rules should use ceph_pool_stored #8735
Comments
@anmolsachan PTAL

@leseb Can you please assign this to @aruniiird? I will coordinate with him to fix it.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

Uhm, can somebody please remove the stale label?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

Not stale.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

Not stale.
I believe this is fixed/made irrelevant with #9837. |
It indeed made it irrelevant, sorry. |
Is this a bug report or feature request?

Bug report.

Deviation from expected behavior:

Currently the alert expression is based on `ceph_pool_stored_raw`, which includes the replication factor, while Ceph pool quotas are set on the logical data size (non-replicated).

Expected behavior:

The rules should use `ceph_pool_stored`, because that is what Ceph takes into account when enforcing the quota: only the logical size, excluding the replication factor. A sketch of a corrected expression follows below.
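Since the original expression is not quoted above, here is a minimal sketch of what a corrected rule could look like. The metric names `ceph_pool_stored` and `ceph_pool_quota_bytes` are real Ceph exporter metrics, but the 70% threshold, the `for` duration, and the plain one-to-one label matching are assumptions for illustration, not the shipped rule:

```yaml
# Hypothetical sketch, not the shipped rule. The one-to-one division assumes
# both metrics carry identical label sets; a real rule may need explicit
# matching such as `on (pool_id)`.
- alert: CephPoolQuotaBytesNearExhaustion
  expr: (ceph_pool_stored / (ceph_pool_quota_bytes > 0)) > 0.70
  for: 1m
  labels:
    severity: warning
```

The critically-exhausted rule would be analogous, with a higher threshold.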
How to reproduce it (minimal and precise):

1. Create a pool with a 100MB quota.
2. Set replication factor 3.
3. Fill it with 50MB of data.
4. See that the rule is triggered, but Ceph is healthy.
5. Now change the quota to 60MB. See that Ceph is still healthy.
6. Now change it to 40MB and see that only now does Ceph report the pool as being over quota.
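As a concrete illustration of the steps above, a hedged command sequence that could reproduce this from the Rook Ceph toolbox; the pool name `test-pool` and the object name are placeholders:

```sh
# Run inside the Rook Ceph toolbox pod. "test-pool" is a placeholder name.
ceph osd pool create test-pool 32
ceph osd pool application enable test-pool rbd          # silence the application warning
ceph osd pool set test-pool size 3                      # replication factor 3
ceph osd pool set-quota test-pool max_bytes 104857600   # 100MB logical quota

# Write ~50MB of logical data; raw usage becomes ~150MB with size=3.
dd if=/dev/urandom of=/tmp/obj bs=1M count=50
rados -p test-pool put obj1 /tmp/obj

ceph health detail   # HEALTH_OK: 50MB logical < 100MB quota, yet the alert
                     # fires because ~150MB raw exceeds the threshold

ceph osd pool set-quota test-pool max_bytes 62914560    # 60MB: still healthy
ceph osd pool set-quota test-pool max_bytes 41943040    # 40MB: Ceph now reports
                                                        # the pool over quota
```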
File(s) to submit:

- `cluster.yaml`, if necessary

To get logs, use `kubectl -n <namespace> logs <pod name>`. When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read the GitHub documentation if you need help.
Environment:

- Kernel (e.g. `uname -a`):
- Rook version (use `rook version` inside of a Rook Pod):
- Ceph version (e.g. `ceph -v`):
- Kubernetes version (use `kubectl version`):
- Storage backend status (e.g. `ceph health` in the Rook Ceph toolbox):