Add rule_files configuration to the Prometheus CRD so that Prometheus watches directories other than those created by the prometheus-operator. #6561
Comments
It shouldn't be the case unless it takes a long time for Prometheus to restart. Is your request related to #5085? I'd rather fix the annoyance of restarting the pods when a new rule ConfigMap is mounted than tweak the pod volumes manually.
@simonpasquier I tried an optional ConfigMap. I set my Prometheus volumes like below:

```yaml
volumes:
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-1
    optional: true
  name: prometheus-app-prometheus-rulefiles-1-prevision
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-2
    optional: true
  name: prometheus-app-prometheus-rulefiles-2-prevision
```

Then I added PrometheusRules, and the operator edited the StatefulSet volumes like below:

```yaml
volumes:
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-0
  name: prometheus-app-prometheus-rulefiles-0
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-1
  name: prometheus-app-prometheus-rulefiles-1
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-1
    optional: true
  name: prometheus-app-prometheus-rulefiles-1-prevision
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-2
    optional: true
  name: prometheus-app-prometheus-rulefiles-2-prevision
```

The Prometheus pod restarted since the volumes were changed. If I instead set the Prometheus volumes like below, so that the volumes would not change:

```yaml
volumes:
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-1
    optional: true
  name: prometheus-app-prometheus-rulefiles-1
- configMap:
    defaultMode: 420
    name: prometheus-app-prometheus-rulefiles-2
    optional: true
  name: prometheus-app-prometheus-rulefiles-2
```

then the k8s events say that
I think the operator should check the volumes.items[*].
#5085 is about not deleting the ConfigMaps that are provisioned by the operator. But if you fix #5085, it would solve my issue too.
Component(s)
Prometheus, PrometheusRule
What is missing? Please describe.
If the total size of the PrometheusRules exceeds the Kubernetes ConfigMap limit (1 MiB), the operator creates one more ConfigMap and adds additional volumes and volumeMounts to the Prometheus pod.
This causes the Prometheus pod to restart (any change under the pod's .spec restarts the pod).
As you know, when the Prometheus pod restarts, the alerts are reset.
So adding just a few Prometheus rules can cause all currently firing alerts to be reset.
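To make the restart trigger concrete, here is a minimal sketch of the volumeMounts the operator manages, assuming illustrative names (this is not the operator's exact output): each rule-file ConfigMap shard gets its own volume and mount under /etc/prometheus/rules/, so a new shard is a pod-spec change.

```yaml
# Illustrative sketch; names are examples, not exact operator output.
volumeMounts:
- name: prometheus-app-prometheus-rulefiles-0
  mountPath: /etc/prometheus/rules/prometheus-app-prometheus-rulefiles-0
# Added when the rules no longer fit in one ConfigMap;
# this spec change is what restarts the pod.
- name: prometheus-app-prometheus-rulefiles-1
  mountPath: /etc/prometheus/rules/prometheus-app-prometheus-rulefiles-1
```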
My first work-around was mounting an additional volume into the Prometheus pod, containing rule files that Prometheus can read (e.g. /etc/prometheus/rules/custom-additional-rules).
But Prometheus only watches the directories that the prometheus-operator creates (/etc/prometheus/rules/-rulefiles-).
The configuration of Prometheus is like below.
According to the above configuration, I would have to mount the rule files at /etc/prometheus/rules/-rulefiles-0, but that is not allowed in Kubernetes (the mountPath must be unique).
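The rejected workaround can be sketched as below, with hypothetical names (`custom-additional-rules` is my own volume, not an operator-managed one); Kubernetes validation rejects two volumeMounts in one container sharing the same mountPath.

```yaml
# Hypothetical and invalid: two mounts at the same path.
volumeMounts:
- name: prometheus-app-prometheus-rulefiles-0    # operator-managed shard
  mountPath: /etc/prometheus/rules/prometheus-app-prometheus-rulefiles-0
- name: custom-additional-rules                  # my extra rules (hypothetical volume)
  mountPath: /etc/prometheus/rules/prometheus-app-prometheus-rulefiles-0  # duplicate mountPath: rejected
```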
My 2nd work-around was copying the rule files into /etc/prometheus/rules/-rulefiles-0.
But the filesystem "/etc/prometheus/rules/-rulefiles-0" is mounted read-only, so I can't copy anything there.
Describe alternatives you've considered.
My suggestion: how about adding a rule_files configuration to set a custom rule-files directory?
prometheus.yaml
Then the prometheus.env.yaml would be rendered like below.
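The proposal could be sketched as below; the field name `ruleFiles` and all paths are hypothetical illustrations of the suggestion, not an existing API of the Prometheus CRD.

```yaml
# Hypothetical CRD addition (not an existing field):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: app-prometheus
spec:
  # Proposed: extra directories Prometheus should watch for rule files,
  # in addition to the operator-managed -rulefiles- directories.
  ruleFiles:
  - /etc/prometheus/rules/custom-additional-rules/*.yaml
```

The rendered Prometheus configuration would then list both the operator-managed directory globs and the custom one under rule_files:

```yaml
rule_files:
- /etc/prometheus/rules/prometheus-app-prometheus-rulefiles-0/*.yaml
- /etc/prometheus/rules/custom-additional-rules/*.yaml
```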
Environment Information.
Environment
Kubernetes Version: v1.20.7
Prometheus-Operator Version: v0.53.1