
tlsConfig from scrape class not added to PodMonitor jobs #6556

Closed
1 task done
taisph opened this issue Apr 30, 2024 · 0 comments · Fixed by #6573
taisph commented Apr 30, 2024

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

Description

Given a scrape class config such as the one below, when the same scrape class is referenced from a PodMonitor, only the relabelings are picked up in the resulting Prometheus job configuration; the tlsConfig is ignored. Using the same scrape class in a ServiceMonitor works as expected. (A minimal PodMonitor sketch follows the config.)

  scrapeClasses:
    - name: istio-mtls
      tlsConfig:
        caFile: /etc/prom-certs/root-cert.pem
        certFile: /etc/prom-certs/cert-chain.pem
        keyFile: /etc/prom-certs/key.pem
        insecureSkipVerify: true
      relabelings:
        - sourceLabels:
            - __meta_kubernetes_pod_controller_name
          action: replace
          targetLabel: pod_controller
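
For reference, a PodMonitor opting into this scrape class would look roughly like the sketch below. The namespace, names, labels and port are placeholders for illustration, not the reporter's actual objects; only the scrapeClass field matters for reproducing the issue.

  apiVersion: monitoring.coreos.com/v1
  kind: PodMonitor
  metadata:
    name: example-app        # placeholder name
    namespace: example       # placeholder namespace
  spec:
    # Reference the scrape class defined on the Prometheus resource above.
    scrapeClass: istio-mtls
    selector:
      matchLabels:
        app: example-app     # placeholder label
    podMetricsEndpoints:
      - port: http-metrics   # placeholder port name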

Prometheus Operator Version

v0.73.2

Kubernetes Version

v1.26.14-gke.1044000

Kubernetes Cluster Type

GKE

How did you deploy Prometheus-Operator?

yaml manifests

Manifests

No response

prometheus-operator log output

level=info ts=2024-04-23T13:19:03.172556975Z caller=main.go:186 msg="Starting Prometheus Operator" version="(version=0.73.2, branch=refs/tags/v0.73.2, revision=20beb0b00b0bb61840ec2461ace4f22e4037b474)"
level=info ts=2024-04-23T13:19:03.172689787Z caller=main.go:187 build_context="(go=go1.22.2, platform=linux/amd64, user=Action-Run-ID-8756162704, date=20240419-15:41:23, tags=unknown)"
level=info ts=2024-04-23T13:19:03.172761034Z caller=main.go:198 msg="namespaces filtering configuration " config="{allow_list=\"\",deny_list=\"\",prometheus_allow_list=\"\",alertmanager_allow_list=\"\",alertmanagerconfig_allow_list=\"\",thanosruler_allow_list=\"\"}"
level=info ts=2024-04-23T13:19:03.183289537Z caller=main.go:227 msg="connection established" cluster-version=v1.26.14-gke.1006000
level=info ts=2024-04-23T13:19:03.470535119Z caller=operator.go:335 component=prometheus-controller msg="Kubernetes API capabilities" endpointslices=true
level=info ts=2024-04-23T13:19:03.496306112Z caller=operator.go:320 component=prometheusagent-controller msg="Kubernetes API capabilities" endpointslices=true
level=info ts=2024-04-23T13:19:03.76785328Z caller=server.go:298 msg="starting insecure server" address=[::]:8080
level=info ts=2024-04-23T13:19:15.163979134Z caller=operator.go:283 component=thanos-controller msg="successfully synced all caches"
level=info ts=2024-04-23T13:19:15.164918213Z caller=operator.go:471 component=thanos-controller key=ops-prometheus/ops msg="sync thanos-ruler"
level=info ts=2024-04-23T13:19:15.764754538Z caller=operator.go:429 component=prometheusagent-controller msg="successfully synced all caches"
level=info ts=2024-04-23T13:19:16.364611806Z caller=operator.go:313 component=alertmanager-controller msg="successfully synced all caches"
level=info ts=2024-04-23T13:19:16.664420225Z caller=operator.go:572 component=alertmanager-controller key=ops-prometheus/ops msg="sync alertmanager"
level=info ts=2024-04-23T13:19:17.064015143Z caller=operator.go:392 component=prometheus-controller msg="successfully synced all caches"
level=info ts=2024-04-23T13:19:17.164542347Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-23T13:19:18.072601122Z caller=operator.go:471 component=thanos-controller key=ops-prometheus/ops msg="sync thanos-ruler"
level=info ts=2024-04-23T13:19:18.364018584Z caller=operator.go:572 component=alertmanager-controller key=ops-prometheus/ops msg="sync alertmanager"
level=info ts=2024-04-23T13:19:20.168259359Z caller=operator.go:572 component=alertmanager-controller key=ops-prometheus/ops msg="sync alertmanager"
level=info ts=2024-04-23T13:19:20.469952038Z caller=operator.go:471 component=thanos-controller key=ops-prometheus/ops msg="sync thanos-ruler"
level=info ts=2024-04-23T13:19:21.763307314Z caller=operator.go:572 component=alertmanager-controller key=ops-prometheus/ops msg="sync alertmanager"
level=info ts=2024-04-23T13:19:23.566719097Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-23T13:19:26.374348431Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-23T13:19:29.172491491Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-30T14:01:53.209344412Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-30T14:01:54.781074571Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-30T14:01:56.271176288Z caller=operator.go:572 component=alertmanager-controller key=ops-prometheus/ops msg="sync alertmanager"
level=info ts=2024-04-30T14:01:56.38629942Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-30T14:13:25.576596704Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"
level=info ts=2024-04-30T14:13:26.86308537Z caller=operator.go:572 component=alertmanager-controller key=ops-prometheus/ops msg="sync alertmanager"
level=info ts=2024-04-30T14:13:26.975031847Z caller=operator.go:766 component=prometheus-controller key=ops-prometheus/ops msg="sync prometheus"

Anything else?

No response

taisph added the kind/bug and needs-triage labels Apr 30, 2024
simonpasquier removed the needs-triage label May 6, 2024
simonpasquier self-assigned this May 6, 2024
simonpasquier added a commit to simonpasquier/prometheus-operator that referenced this issue May 7, 2024
Before this change, the TLS configuration from the scrape class wasn't
applied to the generated configuration for PodMonitor, ScrapeConfig and
Probe objects.

Closes prometheus-operator#6556

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
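
For context, with that change the generated Prometheus scrape configuration for a PodMonitor referencing the istio-mtls scrape class should carry the TLS settings from the class. The snippet below is a rough sketch of the expected output for the hypothetical PodMonitor above, not the operator's exact rendering; the job name format and omitted fields are assumptions.

  - job_name: podMonitor/example/example-app/0
    # tls_config inherited from the istio-mtls scrape class
    tls_config:
      insecure_skip_verify: true
      ca_file: /etc/prom-certs/root-cert.pem
      cert_file: /etc/prom-certs/cert-chain.pem
      key_file: /etc/prom-certs/key.pem
    # kubernetes_sd_configs, relabel_configs, etc. omitted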