
Auto-generate metric emitter mappings #16346

Open
ztzxt opened this issue Apr 29, 2024 · 2 comments
Comments

ztzxt (Contributor) commented Apr 29, 2024

Description

Whenever a new metric is added or an existing one is changed, the metric mapping files in the emitters should be updated automatically.

Example files:

https://github.com/apache/druid/blob/master/extensions-contrib/prometheus-emitter/src/main/resources/defaultMetrics.json
https://github.com/apache/druid/blob/master/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
https://github.com/apache/druid/blob/master/extensions-contrib/graphite-emitter/src/main/resources/defaultWhiteListMap.json
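
For context, these files are JSON objects keyed by metric name. Entries in the statsd-emitter file, for example, look roughly like this (abridged and illustrative rather than verbatim):

```json
{
  "query/time": { "dimensions": ["dataSource", "type"], "type": "timer" },
  "jvm/gc/count": { "dimensions": ["gcName", "gcGen"], "type": "count" }
}
```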

Motivation

As soon as a new metric is introduced, it has to be defined manually in each emitter before it becomes available there. This creates a maintenance burden and lets the emitters drift out of sync with one another. Since all of the mapping files follow a pre-defined schema, updating them should be automated.

cryptoe (Contributor) commented Apr 29, 2024

@ztzxt Do you want to take this up? We can always guide you. The logic would be a little brittle since we would have to parse the metrics out of the metrics.md file. Instead, as part of the release process, we could add a Python script that parses the metrics file and generates the relevant mapping files. The release manager could run it once before each release.
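
A minimal sketch of what such a script could look like, assuming metrics.md lists metrics in markdown tables with `Metric` and `Dimensions` columns and that we target the statsd-style output shape shown above (the column names, paths, and type handling here are placeholders, not a settled design):

```python
#!/usr/bin/env python3
"""Sketch of the release-time generator discussed above.

Assumptions (placeholders, not settled design): metrics.md lists metrics
in markdown tables with `Metric` and `Dimensions` columns, and the output
follows the statsd-emitter shape {"metric": {"dimensions": [...], "type": ...}}.
"""
import json
import sys


def parse_metric_tables(md_text):
    """Yield (metric, dimensions) pairs from every markdown table in md_text."""
    header = None
    for raw in md_text.splitlines():
        line = raw.strip()
        if not (line.startswith("|") and line.endswith("|")):
            header = None  # we have left the current table
            continue
        cells = [c.strip().strip("`") for c in line.strip("|").split("|")]
        if all(set(c) <= set("-: ") for c in cells):
            continue  # the |---|---| separator row
        if header is None:
            header = [c.lower() for c in cells]
            continue
        row = dict(zip(header, cells))
        if row.get("metric"):
            dims = [d.strip() for d in row.get("dimensions", "").split(",") if d.strip()]
            yield row["metric"], dims


def main(md_path, out_path):
    with open(md_path) as f:
        text = f.read()
    # "timer" is a stand-in: a real script would need per-metric type
    # info, either from a new docs column or a small override file.
    mapping = {metric: {"dimensions": dims, "type": "timer"}
               for metric, dims in parse_metric_tables(text)}
    with open(out_path, "w") as f:
        json.dump(mapping, f, indent=2, sort_keys=True)


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Run as e.g. `python generate_metric_mappings.py docs/operations/metrics.md defaultMetricDimensions.json`. The main open question is the metric type: as far as I can tell, the docs tables do not carry it in a machine-readable column, so a real version would need a heuristic or an explicit source for it.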

ztzxt (Contributor, Author) commented Apr 29, 2024

Is metrics.md updated manually or is it generated from code? If it is generated from code, we could hook this in where the doc is generated. Otherwise, treating metrics.md as the source of truth makes sense. Also, if there is already a process for running scripts pre-release, could you point me to it?
