Metric Adapter is taking metrics from cache #553

Open
aknam opened this issue Apr 28, 2023 · 1 comment

Comments


aknam commented Apr 28, 2023

## Expected Behavior

We want the average metric value to be reflected more quickly, so that stale values do not cause downtime in the application.

## Actual Behavior

We are using a custom metric for our application with the json-path collection interval set to 1 second (annotation below). However, when we restart all the pods, it takes more than 5 minutes for the average value to drop to zero, even though the adapter logs show that all per-pod metrics are already zero after the restart. Is there any way to make the average value update faster so that it does not impact the application?

HPA configuration annotation:
metric-config.pods.used-pool-length.json-path/interval: "1s"
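
For context, a minimal sketch of what such an HPA might look like with the kube-metrics-adapter json-path collector. The metric name `used-pool-length` and the `interval` annotation come from this issue; the deployment name, json-key, path, port, and target value are hypothetical placeholders, not taken from the issue:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: browser-manager          # hypothetical name, matching the pods in the logs below
  namespace: qa
  annotations:
    # json-path collector settings; json-key, path and port are placeholders
    metric-config.pods.used-pool-length.json-path/json-key: "$.usedPoolLength"
    metric-config.pods.used-pool-length.json-path/path: /metrics
    metric-config.pods.used-pool-length.json-path/port: "8080"
    # polling interval from this issue
    metric-config.pods.used-pool-length.json-path/interval: "1s"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: browser-manager
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: used-pool-length
        target:
          type: AverageValue
          averageValue: "10"     # placeholder target
```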

kube-metrics-adapter logs:

```
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-c48pm" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-hz9fx" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-k9c74" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-jn9g9" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-ftzzx" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-qwfnf" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-jbbmj" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-c65vf" provider=hpa
time="2023-04-27T12:37:09Z" level=info msg="Collected new custom metric 'used-pool-length' (0) for Pod qa/browser-manager-7b7cbc9b75-cjfzl" provider=hpa
```

## Steps to Reproduce the Problem

 1.
 1.
 1.

## Specifications

 - Version: registry.opensource.zalan.do/teapot/kube-metrics-adapter:v0.1.19
 - Platform: eks
 - Subsystem: amazon linux 2
mikkeloscar added the enhancement label on Apr 28, 2023
mikkeloscar added the bug label on May 17, 2023
mikkeloscar removed the bug and enhancement labels on Sep 16, 2023
mikkeloscar (Contributor) commented:

I'm not exactly sure what the described problem is, but I think it's a problem of how often the HPA fetches metrics. You can see that kube-metrics-adapter is up to date by querying the metric like this:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/queue-length?labelSelector=foo%3Dbar" | jq '.'
# (where `foo%3Dbar` is the label selector of the targeted deployment)
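
For reference, the response from that endpoint is roughly a `MetricValueList`; if the adapter is up to date, the freshly collected per-pod values should appear there. The sketch below is illustrative only, using the metric name, pod name, and value from the logs in this issue rather than real query output:

```json
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "qa",
        "name": "browser-manager-7b7cbc9b75-c48pm",
        "apiVersion": "/v1"
      },
      "metricName": "used-pool-length",
      "timestamp": "2023-04-27T12:37:09Z",
      "value": "0"
    }
  ]
}
```

If the values here are already current but the HPA average still lags, the delay is on the HPA controller's side (how often it fetches metrics) rather than in kube-metrics-adapter, as noted above.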
