
helm chart v0.1.1 fails to install with default values.yaml #765

Open
hollanbm opened this issue May 3, 2024 · 3 comments
Labels
bug Something isn't working

hollanbm commented May 3, 2024

What's wrong?

https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/charts/crds/crds/monitoring.grafana.com_podlogs.yaml

The CustomResourceDefinition "podlogs.monitoring.grafana.com" is invalid: status.storedVersions[0]: Invalid value: "v1alpha1": must appear in spec.versions

Steps to reproduce

https://grafana.com/docs/alloy/latest/get-started/install/kubernetes/

I am working on standing up the grafana stack on my cluster.

I installed these charts first.
grafana/loki -- chart v6.4.2
grafana/promtail -- chart v6.15.5

I have not (yet) installed prometheus, or any other grafana helm charts.

Upon installing the grafana/alloy chart v0.1.1 (with default values), I get the following error when the CRDs are installed:

The CustomResourceDefinition "podlogs.monitoring.grafana.com" is invalid: status.storedVersions[0]: Invalid value: "v1alpha1": must appear in spec.versions

To troubleshoot, I uninstalled the Helm chart and attempted to apply the crds.yaml manually.

That's when I received the same error and realized what was happening.
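For anyone hitting this, the mismatch can be confirmed by comparing the installed CRD's served versions against the versions the API server has already persisted. This is a sketch; it assumes kubectl access to the affected cluster:

```shell
# Versions the installed CRD serves (from spec)
kubectl get crd podlogs.monitoring.grafana.com \
  -o jsonpath='{.spec.versions[*].name}{"\n"}'

# Versions already persisted to etcd (from status); every entry here
# must also appear in spec.versions, or the CRD update is rejected
kubectl get crd podlogs.monitoring.grafana.com \
  -o jsonpath='{.status.storedVersions}{"\n"}'
```

If the second command prints a version (e.g. v1alpha1) that the first does not, the error above is expected.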

System information

k3s v1.29.3

Software version

No response

Configuration

chart is installed via flux

---
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: alloy
  namespace: monitoring
spec:
  chart:
    spec:
      chart: alloy
      sourceRef:
        kind: HelmRepository
        name: grafana
        namespace: flux-system
      version: 0.1.1
  interval: 15m
  releaseName: alloy
  timeout: 10m
  install:
    crds: CreateReplace
    remediation:
      retries: 1
      remediateLastFailure: true
  upgrade:
    crds: CreateReplace
    cleanupOnFail: true
    remediation:
      retries: 1
      remediateLastFailure: true
  rollback:
    recreate: true
    cleanupOnFail: true
  values:
➜  ~ kgno -o wide
NAME    STATUS   ROLES                       AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION        CONTAINER-RUNTIME
k3s1    Ready    control-plane,etcd,master   132d    v1.29.3+k3s1   10.254.1.71   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s10   Ready    worker                      125d    v1.29.3+k3s1   10.254.1.64   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s11   Ready    worker                      125d    v1.29.3+k3s1   10.254.1.65   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s2    Ready    worker                      3h52m   v1.29.3+k3s1   10.254.1.72   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s3    Ready    control-plane,etcd,master   132d    v1.29.3+k3s1   10.254.1.73   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s4    Ready    worker                      132d    v1.29.3+k3s1   10.254.1.74   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s5    Ready    control-plane,etcd,master   132d    v1.29.3+k3s1   10.254.1.75   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s6    Ready    worker                      67m     v1.29.3+k3s1   10.254.1.76   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s7    Ready    worker                      125d    v1.29.3+k3s1   10.254.1.61   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s8    Ready    worker                      125d    v1.29.3+k3s1   10.254.1.62   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2
k3s9    Ready    worker                      125d    v1.29.3+k3s1   10.254.1.63   <none>        Debian GNU/Linux 12 (bookworm)   6.6.20+rpt-rpi-2712   containerd://1.7.11-k3s2

Logs

No response

@hollanbm hollanbm added the bug Something isn't working label May 3, 2024

behdadkh commented May 4, 2024

I'm facing the exact same problem while trying to install the Alloy Helm chart 0.1.1.

@hainenber
Contributor

It seems a similar CRD created by the Loki chart is the culprit for the conflict here. We'll probably have to switch Loki's self-monitoring from Agent to Alloy in the long run.

I managed to dig out this workaround in another thread. Hope this helps out!
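To confirm which chart owns the conflicting CRD, you can check the Helm ownership metadata on it. A sketch, assuming Helm 3's usual release annotations are present on the resource:

```shell
# Helm 3 records the owning release in metadata annotations;
# dots in the annotation key must be escaped in jsonpath
kubectl get crd podlogs.monitoring.grafana.com \
  -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}{"\n"}'
```

If this prints the Loki release name, the CRD was installed by the Loki chart rather than Alloy.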


behdadkh commented May 5, 2024

Thanks @hainenber, this indeed pointed me in the right direction.

I was able to install Alloy Helm chart 0.1.1 alongside Loki Helm chart 5.47.2. I also had to set test.enabled to false for Loki, and manually delete the CRDs:

kubectl delete crds grafanaagents.monitoring.grafana.com podlogs.monitoring.grafana.com

Then re-syncing Alloy (via ArgoCD) went through successfully.
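In case it helps others: after deleting the CRDs, you can verify nothing in the conflicting API group remains before re-syncing. A sketch, assuming kubectl access:

```shell
# List any remaining CRDs in the monitoring.grafana.com group;
# empty output means the Alloy chart can install its own copies cleanly
kubectl get crds -o name | grep 'monitoring\.grafana\.com' || true
```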
