Out of memory after using Schedulers.enableMetrics()
#2899
Comments
we'd need to somehow detect schedulers with potential for high/unbounded cardinality and figure out how to cater to that use case... but can we really figure out a generic satisfying solution? the only thing that gets instrumented is ...; on the other hand we get ...
@Novery what kind of metrics are you interested in, in Schedulers? (as a minimum)
I mainly focus on executor_queued_tasks, executor_seconds_max, executor_queued_tasks, executor_completed_tasks_total to understand the load of the parallel scheduler.
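(For context, a minimal sketch, not taken from the issue, of how these executor meters typically get registered: it assumes Micrometer is on the classpath, adds a SimpleMeterRegistry to the global registry, and uses hypothetical class and variable names.)

```java
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SchedulerMetricsSetup {
    public static void main(String[] args) {
        // Micrometer must be on the classpath; Reactor publishes to the global registry.
        Metrics.addRegistry(new SimpleMeterRegistry());

        // Decorates every ExecutorService created by Reactor schedulers
        // with Micrometer's executor instrumentation.
        Schedulers.enableMetrics();

        Flux.range(1, 1_000)
            .publishOn(Schedulers.boundedElastic())
            .blockLast();

        // Executor meters (e.g. executor.queued, executor.completed) are now
        // registered, one set per instrumented executor.
        Metrics.globalRegistry.getMeters()
               .forEach(meter -> System.out.println(meter.getId()));
    }
}
```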
there's ongoing work to add ...
OK, thank you. I will keep watching.
The tags now seem to be clean and controllable. The "executorId" is no longer used as the "executorServiceName". I think this can fix the current problems.
Out of memory after using Schedulers.enableMetrics()
Actual Behavior
When metrics are enabled, the set of registered metrics does not converge.
The data returned from /actuator/prometheus looks like this:
name="boundedElastic(\"boundedElastic\",maxThreads=40,maxTaskQueuedPerThread=100000,ttl=60s)-{$number}
The number does not converge.
Steps to Reproduce
reactor.core.scheduler.SchedulerMetricDecorator generates an executorId that is incremented for each executor created by the same Scheduler, and at the same time uses that executorId as the metric name. BoundedElasticScheduler periodically recycles idle resources and recreates them later when needed, and the executorId is incremented each time a new resource is created.
This leads to an uncontrollable number of metrics and eventually to a memory overflow.
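(A simplified illustration, not the actual SchedulerMetricDecorator code, of why an incrementing executorId in the meter name blows up cardinality: it assumes each recycled executor gets re-registered under a new id, and uses hypothetical class and variable names.)

```java
import io.micrometer.core.instrument.binder.jvm.ExecutorServiceMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorIdCardinalityDemo {
    public static void main(String[] args) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        AtomicInteger executorId = new AtomicInteger();

        // Simulate a scheduler that keeps disposing idle executors and
        // creating new ones; each new executor gets a fresh incrementing id
        // that ends up in the meter name, so old meters are never reused.
        for (int i = 0; i < 5; i++) {
            String name = "boundedElastic(...)-" + executorId.incrementAndGet();
            ExecutorService executor = ExecutorServiceMetrics.monitor(
                    registry, Executors.newSingleThreadExecutor(), name);
            executor.submit(() -> { });
            executor.shutdown();
        }

        // Every iteration registered a brand-new set of executor.* meters;
        // the registry grows without bound as executors are recycled.
        System.out.println("meters registered: " + registry.getMeters().size());
    }
}
```

Each loop iteration leaves behind a full set of executor.* meters under a name that will never be reused, which mirrors what the report describes happening when boundedElastic evicts and recreates its workers.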
Your Environment