Triggers for same type not working #5755
Comments
Hello, you can read about the scaling differences between metric types here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Hi, thank you for the reply. Before, I had only one Prometheus trigger and the scaler was respecting the threshold.
It makes all the difference, as it's the HPA controller that applies the scaling decision 😄 You are returning
You can read more about the HPA algorithm here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details
Was your original single trigger based on the response time? Your other triggers are probably okay; the only one that needs the value 100% is the time-based one.
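The rule from the linked HPA docs can be sketched roughly as follows. This is a simplified illustration of the algorithm, not KEDA or HPA source code, and the function name and inputs are hypothetical: for each metric the controller proposes `ceil(currentReplicas * currentValue / targetValue)`, then takes the largest proposal across all metrics/triggers.

```python
import math

def desired_replicas(current_replicas, metrics):
    """Simplified sketch of the HPA scaling rule.

    metrics: list of (currentValue, targetValue) pairs, one per trigger.
    Each metric proposes ceil(currentReplicas * current / target);
    the controller picks the maximum proposal.
    """
    proposals = [
        math.ceil(current_replicas * current / target)
        for current, target in metrics
    ]
    return max(proposals)

# Three triggers, as in this issue: response time over its 20ms target
# should force a scale-up even if CPU and memory are under their targets.
print(desired_replicas(2, [(35, 20), (50, 100), (40, 100)]))  # 4
```

Because the maximum wins, a single trigger over its threshold is enough to scale up; a trigger whose target value is mis-specified can therefore mask or distort the others' behavior.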
Thank you once more :D, I will try to change it to
Let me know how it goes :)
That's truly weird xD
Report
I have a trigger configuration that uses 3 triggers of the same type, `prometheus`, and scaling is not working for the first trigger, whose metadata relates to response time:

```yaml
triggers:
  - metadata:
      [metadata1 - related to response time]
  - metadata:
      [metadata2 - related to CPU metrics]
  - metadata:
      [metadata3 - related to Memory metrics]
```
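For reference, a multi-trigger ScaledObject of this shape might look like the following. This is a hedged sketch only: the resource names, Prometheus address, queries, and thresholds are made-up placeholders, not values taken from this issue.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: my-app                 # hypothetical Deployment
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: my_app_response_time_ms   # hypothetical query
        threshold: "20"
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: my_app_cpu_usage          # hypothetical query
        threshold: "80"
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: my_app_memory_usage       # hypothetical query
        threshold: "80"
```

Each trigger is translated into its own external metric on the generated HPA, so every threshold is evaluated independently by the HPA controller.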
Expected Behavior
The scaler would scale up and down if the threshold of each Prometheus metric is reached.
Actual Behavior
The scaler is ignoring the response-time metric. My threshold is 20 ms, my application is above that value, and the number of pods is not increasing.
Steps to Reproduce the Problem
Logs from KEDA operator
KEDA Version
2.11.2
Kubernetes Version
1.28
Platform
Amazon Web Services
Scaler Details
ScaledObject
Anything else?
No response