
Using Traefik for Ingress #2385

Open
tobiasbartel opened this issue Mar 11, 2024 · 2 comments

Comments

tobiasbartel commented Mar 11, 2024

What happened?

Description

I am trying to get Prometheus Operator deployed, but I am hitting a wall at the Ingress for Alertmanager, Prometheus, and Grafana.

I deployed using the manifest files; I did not use Helm.
I am also not using nginx but Traefik as the Ingress controller, with cert-manager generating the certificates.
From my understanding of the documentation, I am supposed to provide the necessary information (class, hostname, cert, ....) to a Prometheus object? Is that correct?

I tried to configure a custom Ingress for the three services; that results in a "Bad Gateway" error for Grafana (i.e. the service is not responding) and a 404 for both Alertmanager & Prometheus.

I then tried to configure my own service, connected directly to the pods, which resulted in a 404 for all three applications.

My question: what do I need to add, and where? It seems all documentation and most solved issues on this topic relate to deployments via Helm.

The cluster is based on TKG (VMware Tanzu Kubernetes Grid)
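
Regarding the question above about what to provide to a Prometheus object: for reference, the Prometheus custom resource does expose `externalUrl` and `routePrefix` fields that matter when serving the UI behind an ingress. A minimal sketch (the hostname is a placeholder, not from this issue):

```yaml
# Hedged sketch: Prometheus CR fields relevant when serving behind an ingress.
# "prometheus.example.com" is a placeholder hostname.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  # URL under which Prometheus is externally reachable; used for generated links
  externalUrl: https://prometheus.example.com/
  # Only needed if Prometheus is served under a sub-path such as /prometheus
  # routePrefix: /prometheus
```

The Ingress object itself (class, hostname, TLS) is still configured separately; the Prometheus CR only needs to know how it is reached externally.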

Prometheus Operator Version

Name:                   prometheus-operator
Namespace:              monitoring
CreationTimestamp:      Sat, 09 Mar 2024 15:43:19 +0100
Labels:                 app.kubernetes.io/component=controller
                        app.kubernetes.io/instance=cluster01-prometheus
                        app.kubernetes.io/name=prometheus-operator
                        app.kubernetes.io/part-of=kube-prometheus
                        app.kubernetes.io/version=0.71.2
Annotations:            argocd.argoproj.io/tracking-id: cluster01-prometheus:apps/Deployment:monitoring/prometheus-operator
                        deployment.kubernetes.io/revision: 1
Selector:               app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/component=controller
                    app.kubernetes.io/name=prometheus-operator
                    app.kubernetes.io/part-of=kube-prometheus
                    app.kubernetes.io/version=0.71.2
  Annotations:      kubectl.kubernetes.io/default-container: prometheus-operator
  Service Account:  prometheus-operator
  Containers:
   prometheus-operator:
    Image:      quay.io/prometheus-operator/prometheus-operator:v0.71.2
    Port:       8080/TCP
    Host Port:  0/TCP
    Args:
      --kubelet-service=kube-system/kubelet
      --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.71.2
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      GOGC:  30
    Mounts:  <none>
   kube-rbac-proxy:
    Image:           quay.io/brancz/kube-rbac-proxy:v0.16.0
    Port:            8443/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      --secure-listen-address=:8443
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --upstream=http://127.0.0.1:8080/
    Limits:
      cpu:     20m
      memory:  40Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   prometheus-operator-5575b484df (1/1 replicas created)
Events:          <none>

Kubernetes Version

clientVersion:
  buildDate: "2024-02-14T10:40:49Z"
  compiler: gc
  gitCommit: 4b8e819355d791d96b7e9d9efe4cbafae2311c88
  gitTreeState: clean
  gitVersion: v1.29.2
  goVersion: go1.21.7
  major: "1"
  minor: "29"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
serverVersion:
  buildDate: "2024-02-29T20:36:21Z"
  compiler: gc
  gitCommit: 051b14b248655896fdfd7ba6c93db6182cde7431
  gitTreeState: clean
  gitVersion: v1.28.7+k3s1
  goVersion: go1.21.7
  major: "1"
  minor: "28"
  platform: linux/amd64

Kubernetes Cluster Type

Other (please comment)

How did you deploy Prometheus-Operator?

yaml manifests

Manifests

unknown

prometheus-operator log output

none

Anything else?

No response

tobiasbartel commented Mar 11, 2024

Some more info

cat prometheus-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-http-live
spec:
  tls:
  - secretName: prometheus-ingress-cert
    hosts:
    - prometheus....
  rules:
  - host: prometheus....
    http:
      paths:
      - path: /
      #- path: /prometheus
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              name: http
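
Since Traefik is the controller here, the routing can also be expressed with Traefik's own CRD instead of a standard Ingress. A hedged sketch (the `apiVersion` depends on the Traefik release, the hostname is a placeholder, and `websecure` is Traefik's default HTTPS entry point name, which may differ in this installation):

```yaml
# Hedged sketch of a Traefik IngressRoute for the same service.
# traefik.io/v1alpha1 applies to recent Traefik releases; older ones
# use traefik.containo.us/v1alpha1. Hostname is a placeholder.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: prometheus-ingressroute
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`prometheus.example.com`)
      kind: Rule
      services:
        - name: prometheus-k8s
          port: web          # the port *name* exposed by the service
  tls:
    secretName: prometheus-ingress-cert
```

Note that cert-manager does not watch IngressRoute objects the way it watches Ingress annotations, so the certificate secret would have to be created via a separate Certificate resource.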

k describe -n monitoring service prometheus-k8s

Name:              prometheus-k8s
Namespace:         monitoring
Labels:            app.kubernetes.io/component=prometheus
                   app.kubernetes.io/instance=cluster01-prometheus
                   app.kubernetes.io/name=prometheus
                   app.kubernetes.io/part-of=kube-prometheus
                   app.kubernetes.io/version=2.50.1
Annotations:       argocd.argoproj.io/tracking-id: cluster01-prometheus:/Service:monitoring/prometheus-k8s
Selector:          app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.109.197
IPs:               10.43.109.197
Port:              web  9090/TCP
TargetPort:        web/TCP
Endpoints:         10.42.0.211:9090,10.42.3.105:9090
Port:              reloader-web  8080/TCP
TargetPort:        reloader-web/TCP
Endpoints:         10.42.0.211:8080,10.42.3.105:8080
Session Affinity:  ClientIP
Events:            <none>
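
One thing that stands out when comparing the two objects above: the service exposes its ports under the names `web` (9090) and `reloader-web` (8080), while the Ingress backend references `port.name: http`, which does not exist on this service. A backend referencing the port name the service actually exposes would look like this (a sketch; only the backend stanza shown):

```yaml
# Hedged sketch: backend referencing the port name the service exposes.
backend:
  service:
    name: prometheus-k8s
    port:
      name: web   # matches "Port: web 9090/TCP" in the service description
```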

@simonpasquier simonpasquier transferred this issue from prometheus-operator/prometheus-operator Mar 26, 2024
lam-guo commented May 11, 2024

  1. Check whether your Ingress is actually working (try comparing it to another, working Ingress): k describe -n monitoring ingress prometheus-ingress
  2. Maybe your prometheus-ingress.yaml lacks the "ingressClassName" field in its spec.
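
To illustrate the second point, the field sits directly under `spec` of the Ingress (a sketch assuming the class installed by Traefik is named `traefik`; verify the actual name first):

```yaml
# Hedged sketch: explicit ingress class selection.
# "traefik" is an assumption; verify with: kubectl get ingressclass
spec:
  ingressClassName: traefik
```

Without this field (or a default IngressClass on the cluster), Traefik may simply never pick up the Ingress, which would also produce 404s.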
