
The revision_request_count metric does not appear to behave like a counter metric. #14925

Open
yabea opened this issue Feb 21, 2024 · 6 comments
Labels
kind/question Further information is requested

Comments

yabea commented Feb 21, 2024

Ask your question here:

Hi, I'm using the Knative Serving metrics, and I see that the type of the revision_request_count metric is counter.
I have also confirmed in the code that it is a counter type,
but the collected data is not continuously increasing.
[image]

What could be the reason for this?

Thanks.

yabea added the kind/question Further information is requested label on Feb 21, 2024

skonto commented Mar 5, 2024

Hi @yabea, here are some more details and an example similar to the above:

[image]

  • You have multiple series, one per pod, hence the different counts. For the cardinality issue see: Metrics cardinality is too high #11248
  • When Prometheus scrapes a pod, it gets the value of the counter as exported at scrape time (a query sketch covering both points follows below).
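
A minimal PromQL sketch of how those two points are usually handled (the configuration_name="hello" selector and the 5m window are illustrative, borrowed from the query quoted later in this thread):

```promql
# Each pod exports its own revision_request_count series, so raw values
# differ per pod and reset whenever a pod restarts or a new pod starts.
# Summing the per-second rate across pods yields a single series per
# configuration and tolerates those counter resets.
sum by (configuration_name) (
  rate(revision_request_count{configuration_name="hello"}[5m])
)
```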

@shauryagoel

Hi, I have also encountered the same issue. If no requests come in for ~10 minutes, the graph stops showing any value in my case as well. Whenever a request comes in, the count starts again from 1.
Digging further, I found that other metrics are also null in the same period in which there is no data in the graph. Metrics such as revision_app_request_count, revision_app_request_latencies_bucket, revision_request_latencies_bucket, revision_request_latencies_sum, etc. are all null. The only metrics still available are the Golang ones, revision_go_*.
Whenever a request comes in, I can see all of the metrics start from scratch.
I checked this by entering the container and sending a curl request to http://localhost:9091/metrics.
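
A hedged sketch of that check done via a port-forward instead of exec'ing into the container (the namespace and pod name are placeholders; the port and path are the ones mentioned above):

```shell
# Forward the metrics port of one of the revision's pods (replace the
# namespace and <hello-pod> with real values from `kubectl get pods`).
kubectl -n <namespace> port-forward pod/<hello-pod> 9091:9091 &

# Dump the request metrics; per the report above, after ~10 minutes of
# no traffic only the revision_go_* metrics remain in this output.
curl -s http://localhost:9091/metrics | grep -E 'revision_(app_)?request'
```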


skonto commented Mar 11, 2024

@shauryagoel hi, could you add more details, e.g. a screenshot of your Prometheus query and the pod status (does it scale down or not)?

@shauryagoel

I am using Knative v1.7, and here is a sample YAML which reproduces the issue:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: testing
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9091'
    prometheus.io/path: /metrics
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: '10'
        autoscaling.knative.dev/min-scale: '1'
        autoscaling.knative.dev/max-scale: '100'
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: World

The pod does not scale down, as min-scale is set to 1 above. Still, I am getting the same error.
Prometheus query: revision_app_request_count{configuration_name="hello"}
The output is similar to what is attached in the issue; data is missing.
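
One hedged way to double-check the scale-down and restart question (assuming the serving.knative.dev/configuration label that Knative Serving sets on revision pods): watch the pods and their RESTARTS column, since a container restart would also reset the counters.

```shell
# Watch the revision's pods; a pod replacement or a non-zero RESTARTS
# count would explain counters starting again from 1.
kubectl get pods -n testing -l serving.knative.dev/configuration=hello --watch
```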


skonto commented Mar 11, 2024

Could you try with a community-supported version? 1.7 is too old.

@shauryagoel

Hi, is there some parameter which adjusts the duration of the metrics export? This parameter is most likely set to around 10 minutes, as discussed previously. I am currently unable to deploy a newer version of Knative, as we have a Knative dependency of <=1.9.
