Enable customization of prometheus metrics labels created by rook operator #9618
@anmolsachan Do you recall why only the `rook.io/managedBy` label was supported?
Applying any label would solve the issue and only require small code changes. Leveraging the `targetLabels` feature of the ServiceMonitor would also work. Care must be taken with the labels specified in the cluster spec.

Configuration:

```yaml
spec:
  labels:
    monitoring:
      my-monitoring-label: my-value
```

Generated mgr service:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: my-cluster
    my-monitoring-label: my-value
  name: rook-ceph-mgr
[...]
```

Generated ServiceMonitor:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-ceph-mgr
spec:
  [...]
  targetLabels:
    - my-monitoring-label
```
How will it help the ServiceMonitor if the `targetLabels` support a custom label? Don't the labels already on the rook-ceph-mgr service uniquely identify the service that prometheus needs to scrape? Or what is different about the custom `targetLabels`?
Ah, it must be as you said in the initial post that the `targetLabels` will help in this case.
Hey Travis! Apologies for the late reply on this. There was no strictness for using rook.io/managedBy, but there was only one use case at that moment. Use case: "As a Kubernetes Admin I should be able to identify the metrics coming from the cephclusters uniquely, when on the same K8s cluster I create multiple cephclusters at the same time". Adding an extra managedBy label in the ServiceMonitor adds a label to every metric exported by the Prometheus exporter. Also, it was optional because the above-stated use case might not be true.
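The relabeling the operator injects into the generated ServiceMonitor could look roughly like this. This is a hypothetical sketch: the field names are the Prometheus Operator's RelabelConfig schema, but the port name and the concrete `replacement` value are assumptions, not taken from the operator's code.

```yaml
# Hypothetical sketch: a static relabeling that stamps every scraped
# metric with a managedBy label. A RelabelConfig with no sourceLabels
# and a fixed replacement simply adds/overwrites the target label.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-ceph-mgr
spec:
  endpoints:
    - port: http-metrics          # assumed port name
      relabelings:
        - targetLabel: managedBy
          replacement: my-cluster # value taken from the rook.io/managedBy label
```

Because this relabeling is applied at scrape time, every time series collected through this ServiceMonitor carries the `managedBy` label, which is what makes metrics from different cephclusters distinguishable.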
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
I believe this is made irrelevant by #9837. Closing this for now. We have the option to reopen if it is still relevant with the changed method of deploying prometheus metrics rules.
This issue is still relevant @BlaineEXE; in pretty much every widely used helm chart it is possible to define serviceMonitor `relabelings` and `metricsRelabelings`. These need to be added under [...]. Another use case: without relabeling, the [...]
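For comparison, the charts referred to above typically expose these hooks in values.yaml roughly as follows. The keys and values here are hypothetical, modeled on common community charts, not on Rook's current chart.

```yaml
# Hypothetical values.yaml fragment, as found in many helm charts:
serviceMonitor:
  # relabelings run at scrape time, before ingestion
  relabelings:
    - targetLabel: cluster
      replacement: prod-east
  # metricRelabelings run after scraping, e.g. to drop noisy series
  metricRelabelings:
    - action: drop
      sourceLabels: [__name__]
      regex: 'ceph_debug_.*'
```

The chart would then copy these lists verbatim into the generated ServiceMonitor's endpoint configuration.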
@holmesb I would consider this a new feature request to the current helm chart.
Is this a bug report or feature request?
What should the feature do:
The operator actually supports an undocumented (?) monitoring label `rook.io/managedBy` that is used to add a `managedBy` label to the prometheus metrics. The operator adds a relabeling to the ServiceMonitor for this specific label; this is implemented in the function `applyMonitoringLabels` in mgr.go. The new feature shall make it possible to define a list of fully customizable labels to add to the metrics (instead of a single fixed-named one).

This could also be implemented by leveraging the `targetLabels` feature of the ServiceMonitor. In this case the labels already specified in the cluster spec for the mgr could also be applied to the Service spec of the mgr daemon (they are currently only applied to the mgr deployment/pods). One could then pick these labels by configuring the list of `targetLabels`.

What is the use case behind this feature:
The use case is the need to add extra labels to prometheus metrics. This is particularly useful when the operator is managing more than one cluster, to distinguish the metrics by using labels with more functional meaning than the currently available labels that might serve this purpose (e.g. namespace, instance or pod).
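As a concrete illustration of that multi-cluster use case, two CephClusters on the same Kubernetes cluster could each declare a distinguishing label under `spec.labels.monitoring` (the resource names, namespaces, and label values below are hypothetical):

```yaml
# Hypothetical: two CephClusters distinguished by a custom monitoring label
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: cluster-a
  namespace: rook-a
spec:
  labels:
    monitoring:
      ceph_cluster: cluster-a
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: cluster-b
  namespace: rook-b
spec:
  labels:
    monitoring:
      ceph_cluster: cluster-b
```

With such a label propagated to the mgr Service and listed in the ServiceMonitor's `targetLabels`, a query like `ceph_health_status{ceph_cluster="cluster-a"}` would select a single cluster's metrics regardless of which namespace or pod exported them.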