Support other Helm storage backends besides Secrets #760

Open
wants to merge 1 commit into main

Conversation

fiscafusca

Overview

This PR adds support for all Helm storage backends (see the official documentation).

Usage

The storage driver can be set through the --helm-storage-driver flag, and its value is propagated to the NewRunner function. The allowed values are secret, configmap, and sql. If the flag is unset, the value is read from the HELM_DRIVER environment variable. If the environment variable is also unset, the value defaults to secret for backwards compatibility.
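
For illustration, the resolution order above (flag first, then the HELM_DRIVER environment variable, then the secret default) can be sketched in Go roughly as follows. The resolveStorageDriver helper is a hypothetical name used for illustration only; it does not reflect the PR's actual code or the NewRunner signature.

// Hypothetical sketch of the resolution order: flag value first, then the
// HELM_DRIVER environment variable, then the backwards-compatible "secret"
// default. Names are illustrative and do not match the PR's code.
package main

import (
	"fmt"
	"os"
)

func resolveStorageDriver(flagValue string) (string, error) {
	driver := flagValue
	if driver == "" {
		driver = os.Getenv("HELM_DRIVER")
	}
	if driver == "" {
		driver = "secret" // default for backwards compatibility
	}
	switch driver {
	case "secret", "configmap", "sql":
		return driver, nil
	default:
		return "", fmt.Errorf("unsupported Helm storage driver %q", driver)
	}
}

func main() {
	// Example: an empty flag value falls back to HELM_DRIVER or "secret".
	driver, err := resolveStorageDriver("")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using Helm storage driver:", driver)
}

In this sketch, an unsupported value fails fast instead of silently falling back to secret.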

Use case

The ability to switch between backends is essential for more flexibility in the cluster. Storing release information in Secrets may not always be the best option, mainly due to:

  • The size of a Helm release's information, which may exceed the 1MB Secret size limit;
  • The combined total size of cluster Secrets, which may cause disruptions in the cluster;
  • The Helm release location: in some use cases it may be necessary to move release information to another location for compliance.

Moving the release information to, for example, a SQL backend would address these issues and allow keeping a longer history of deployments.

Testing (cluster)

The changes were tested on a local Kubernetes cluster; I used a single-node kind cluster running Kubernetes v1.27.3.

Steps to reproduce

Create the cluster:

kind create cluster

Build a local image (I named it test-helm-controller:latest) and load it into the kind cluster:

export IMG=test-helm-controller:latest
make docker-build
kind load docker-image test-helm-controller:latest

Deploy the default helm-controller and source-controller:

make deploy

Create the namespace for the Helm release:

kubectl create namespace hello

Prepare the manifests for the HelmRepository and HelmRelease to use for testing. I used this chart for simplicity.

helmrepo.yaml:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: cloudecho
  namespace: hello
spec:
  interval: 1m0s
  url: https://cloudecho.github.io/charts/

helmrelease.yaml:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: hello
  namespace: hello
spec:
  chart:
    spec:
      chart: hello
      sourceRef:
        kind: HelmRepository
        name: cloudecho
      version: 0.1.2
  values:
    replicaCount: 3
  interval: 1m0s

Test case 1: Backwards compatibility

The Helm release information should still be stored in Secrets when both the flag and the env variable are unset.

Deployment patch in config/manager/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest

Deploy the patched helm-controller:

kustomize build config/manager | kubectl apply -f -

Apply the sample HelmRepo and HelmRelease:

kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml

Check for the Helm release secret:

kubectl get secrets -n hello -l 'owner=helm'

NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.hello.v1   helm.sh/release.v1   1      29s

Test case 2: Configmaps

The Helm release information should be stored in ConfigMaps when the storage driver is set to configmap (via the flag or the HELM_DRIVER environment variable).

Deployment patch in config/manager/kustomization.yaml (flag):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --helm-storage-driver=configmap
  target:
    kind: Deployment

OR (env var):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER", "value": "configmap"}
  target:
    kind: Deployment

Deploy the patched helm-controller:

kustomize build config/manager | kubectl apply -f -

Apply the sample HelmRepo and HelmRelease:

kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml

Check for the Helm release configmap:

kubectl get configmaps -n hello -l 'owner=helm'

NAME                          DATA   AGE
sh.helm.release.v1.hello.v1   1      25s

Test case 3: SQL storage

The Helm release information should be stored in the SQL database when the storage driver is set to sql.
For this test, I used a PostgreSQL DB hosted on an Azure server.

Deployment patch in config/manager/kustomization.yaml (flag):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --helm-storage-driver=sql
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER_SQL_CONNECTION_STRING", "value": "<your-connection-string>"}
  target:
    kind: Deployment

OR (env var):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER", "value": "sql"}
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER_SQL_CONNECTION_STRING", "value": "<your-connection-string>"}
  target:
    kind: Deployment

Deploy the patched helm-controller:

kustomize build config/manager | kubectl apply -f -

Apply the sample HelmRepo and HelmRelease:

kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml

Connect to the DB (I used psql) and check the releases_v1 table. You will see that there is a new row:

helm=> SELECT count(*) FROM releases_v1 ;
 count
-------
     1
(1 row)

The content can be inspected by running the following SQL query:

SELECT * FROM releases_v1 ;

Unit and regression testing

The runner.go file has no test file, and the functions in helmrelease_controller.go that involve the new variable have no coverage, so it wasn't clear to me how to proceed. The changes I made are backwards compatible and should not cause any issues as far as I know, but I'm open to adding anything else if needed, so feel free to leave feedback or suggestions :)
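
For reference, a table-driven test along the following lines could cover the flag/env/default precedence. This is only a sketch written against the hypothetical resolveStorageDriver helper from the Usage section above, not the actual runner.go or helmrelease_controller.go APIs.

// Hypothetical table-driven test for the resolution order sketched earlier.
package main

import "testing"

func TestResolveStorageDriver(t *testing.T) {
	tests := []struct {
		name      string
		flagValue string
		envValue  string
		want      string
		wantErr   bool
	}{
		{name: "defaults to secret", want: "secret"},
		{name: "flag wins over env", flagValue: "configmap", envValue: "sql", want: "configmap"},
		{name: "env used when flag unset", envValue: "sql", want: "sql"},
		{name: "invalid driver rejected", flagValue: "memory", wantErr: true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// t.Setenv restores HELM_DRIVER automatically after each subtest.
			t.Setenv("HELM_DRIVER", tt.envValue)
			got, err := resolveStorageDriver(tt.flagValue)
			if (err != nil) != tt.wantErr {
				t.Fatalf("unexpected error: %v", err)
			}
			if !tt.wantErr && got != tt.want {
				t.Fatalf("got %q, want %q", got, tt.want)
			}
		})
	}
}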

Signed-off-by: Giorgia Fiscaletti <giorgiafiscaletti@gmail.com>
@hiddeco
Member

hiddeco commented Aug 24, 2023

This is a welcome contribution, but with the changes on the horizon (see the dev branch and e.g. #738), it would be better to incorporate this into the new code, which contains substantial changes (and test coverage) around how we work with the Helm storage.

@fiscafusca
Author

@hiddeco I see, I just took a quick look at the dev and new-reconciler branches.
I could still help with adding this feature to the new code as well, if you're interested.

Are you planning to release the rework soon?

@fiscafusca
Author

@stefanprodan could you maybe take a look and tell me whether we can proceed with incorporating this in the new code? Sorry for the ping, but it would be really helpful to have this feature in the near future!

This also closes #272.

I see @hiddeco is on paternity leave. Congratulations 🎈 and enjoy your time as a new dad!

@carlossg
Contributor

From https://twitter.com/stefanprodan/status/1716833055615443138

By the end of this year we’ll release a new version of the Helm APIs together with a major refactoring of helm-controller. After this, we can look at alternative storage backends.
