ceph: increasing the auto-resolvable alerts' delay to 15m #8896

Merged · 1 commit · Oct 1, 2021
@@ -91,7 +91,7 @@ spec:
             storage_type: ceph
           expr: |
             (ceph_mon_metadata{job="rook-ceph-mgr"} * on (ceph_daemon) group_left() (rate(ceph_mon_num_elections{job="rook-ceph-mgr"}[5m]) * 60)) > 0.95
-          for: 5m
+          for: 15m
           labels:
             severity: warning
     - name: ceph-node-alert.rules
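For context, the expression in this hunk can be read as follows. This is a sketch based on standard Prometheus semantics, not text from the PR; the alert is presumably CephMonHighNumberOfLeaderChanges, per the thread below:

          # rate(...[5m]) is elections per second averaged over the last
          # 5 minutes; multiplying by 60 converts that to elections per minute.
          # Multiplying by the ceph_mon_metadata info metric (value 1) restricts
          # the result to known mons, so the condition reads "more than ~0.95
          # leader elections per minute". With for: 15m, that rate must now hold
          # continuously for 15 minutes before the alert fires.
          expr: |
            (ceph_mon_metadata{job="rook-ceph-mgr"} * on (ceph_daemon) group_left()
              (rate(ceph_mon_num_elections{job="rook-ceph-mgr"}[5m]) * 60)) > 0.95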
@@ -150,7 +150,7 @@ spec:
             storage_type: ceph
           expr: |
             label_replace((ceph_osd_in == 1 and ceph_osd_up == 0),"disk","$1","ceph_daemon","osd.(.*)") + on(ceph_daemon) group_left(host, device) label_replace(ceph_disk_occupation,"host","$1","exported_instance","(.*)")
-          for: 1m
+          for: 15m
Contributor:
Changing the interval for a critical-severity error like CephOSDDiskNotResponding from 1 minute to 15 minutes seems a bit ... risky!

Member:

@aruniiird what's the rationale behind that change?

Contributor Author:

This change is mainly for the ODF Managed Service product. Its SREs (the customer support team) are getting action-required alerts that are then resolved automatically (either by Ceph itself or through OCS Operator reconciliation). So they want to increase the alert delay for the following alerts:

CephMonHighNumberOfLeaderChanges
CephOSDDiskNotResponding
CephClusterWarningState

Since we don't have a separate alerting mechanism for Managed Services, we are making the changes here.
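For illustration, this is the Prometheus mechanism the change relies on: the for: clause keeps an alert pending until its expression has been continuously true for the whole duration, so a condition that resolves itself within 15 minutes never fires. A minimal sketch with a hypothetical alert name:

        - alert: ExampleAutoResolvableAlert   # hypothetical name, illustration only
          expr: ceph_health_status{job="rook-ceph-mgr"} == 1
          # The alert stays pending until the expression has been true at every
          # evaluation for the full 15 minutes; if the condition clears earlier
          # (e.g. through OCS Operator reconciliation), the pending state resets
          # and no notification is ever sent.
          for: 15m
          labels:
            severity: warning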

           labels:
             severity: critical
         - alert: CephOSDDiskUnavailable
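Similarly, a hedged reading of the CephOSDDiskNotResponding expression in the hunk above, assuming standard label_replace and vector-matching semantics:

          # (ceph_osd_in == 1 and ceph_osd_up == 0) selects OSDs that are still
          # "in" the cluster but not "up", i.e. not responding. The first
          # label_replace copies the OSD id out of ceph_daemon ("osd.3" yields
          # disk="3"); the join via on(ceph_daemon) group_left(host, device)
          # then attaches host and device labels from ceph_disk_occupation,
          # whose exported_instance label is rewritten into host beforehand.
          expr: |
            label_replace((ceph_osd_in == 1 and ceph_osd_up == 0),"disk","$1","ceph_daemon","osd.(.*)")
              + on(ceph_daemon) group_left(host, device)
              label_replace(ceph_disk_occupation,"host","$1","exported_instance","(.*)")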
@@ -242,7 +242,7 @@ spec:
             storage_type: ceph
           expr: |
             ceph_health_status{job="rook-ceph-mgr"} == 1
-          for: 10m
+          for: 15m
           labels:
             severity: warning
         - alert: CephOSDVersionMismatch
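One way to sanity-check the new delays would be promtool's rule unit tests. The sketch below targets the CephClusterWarningState change (10m to 15m); the file names and the exact expected label set are assumptions, not part of this PR:

# alert_delay_test.yaml -- run with: promtool test rules alert_delay_test.yaml
rule_files:
  - prometheus-ocs-rules.yaml   # hypothetical path to the rules changed above
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      # HEALTH_WARN (ceph_health_status == 1) held for 20 minutes straight.
      - series: 'ceph_health_status{job="rook-ceph-mgr"}'
        values: '1x20'
    alert_rule_test:
      # At 12m the old rule (for: 10m) would already be firing, but the
      # new rule is still pending, so no firing alerts are expected.
      - eval_time: 12m
        alertname: CephClusterWarningState
        exp_alerts: []
      # Past the 15-minute mark the alert fires.
      - eval_time: 16m
        alertname: CephClusterWarningState
        exp_alerts:
          - exp_labels:             # exact labels depend on the rule and series
              severity: warning
              job: rook-ceph-mgr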