
[FEATURE]: cluster multitenancy providers: vcluster #5989

Open
salotz opened this issue May 1, 2024 · 8 comments

Comments

salotz commented May 1, 2024

Feature Request

Background / Motivation

As my testing stack gets more complex, I run into issues where I am deploying cluster-global resources (e.g. CRDs) from multiple different Garden envs (user dev, CI, release pipelines, preview environments, etc.).

This causes ownership issues and can be difficult to understand and debug. IMO Kubernetes should have thought of this a long time ago, and obviously namespaces were never going to cut it. Anyway, a number of different solutions for this kind of problem are cropping up that I am interested in trying out.

One is vcluster, which virtualizes an entire cluster within another Kubernetes cluster. The benefit is that tenants can pretend they have an entire cluster and test without interrupting the other tenants/environments.

This would also make it much easier to test things like from-scratch installations onto clean clusters.

What should the user be able to do?

The user should be able to define/choose a provider that automatically creates a fresh virtual cluster or tenant slot (e.g. via vcluster) to run an environment in.

How important is this feature for you/your team?

Somewhere between:

🌵 Not having this feature makes using Garden painful

🌹 It’s a nice to have, but nice things are nice 🙂

stefreak (Member) commented May 7, 2024

That's a great and valid feature request @salotz, thank you! We also offer ephemeral clusters in the community tier: https://docs.garden.io/kubernetes-plugins/ephemeral-k8s

You can give that a try by following our quick start guide: https://docs.garden.io/getting-started/quickstart

How close to a solution of your problem is that feature for you?

salotz (Author) commented May 7, 2024

I don't think it's quite the same thing, but maybe you can clarify.

I want each unique environment.namespace to run in a different (virtual) cluster. For instance, in CI I want to test the installation of a full application which assumes that it is not sharing the cluster with anything else. So the environment might be `ci` and the namespace is `project-name-${replace(replace("garden-ci-branch-${local.env.CI_COMMIT_REF_NAME}", "_", "-"), ".", "-")}` (from `defaultNamespace`, using GitLab CI/CD variables).
On the cluster, this creates a K8s namespace like `project-name-garden-ci-branch-branch-name` for the environment.

Instead, I would like the whole virtual cluster to be named after the namespace, `project-name-garden-ci-branch-branch-name`, and then the namespaces within it would look like a production deployment of the application, e.g. top-level namespaces: prometheus, myapp, databases, etc.

IIUC the ephemeral cluster would be spun up based on the logged-in user, but otherwise there are no additional considerations for tenancy/namespacing.

The issue is less about making efficient use of resources and more about having a virtual environment for proper E2E testing, which isn't really possible with the current Garden kubernetes provider architecture.

stefreak (Member) commented:

@salotz that makes a lot of sense to me and it's technically possible to further develop the ephemeral kubernetes plugin to fulfill these requirements.
I'll bring this up internally 👍

twelvemo (Collaborator) commented:

@salotz for the time being you could also create a vcluster in an `initScript` in an `exec` provider. You can then have your kubernetes provider for the environment you want to run the tests in depend on the `exec` provider and set the kubecontext to the one created with vcluster.
To clean it all up, you could create an `exec` action that destroys the vcluster and have it depend on the test action(s).
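The suggestion above could look roughly like the following project config. This is only a sketch: the project name, vcluster name, host namespace, and test action name are illustrative, and the kubecontext name that `vcluster connect` generates (as well as the exact connect flags) may differ between vcluster versions.

```yaml
# Project-level config sketch (names are illustrative, not from this thread)
apiVersion: garden.io/v1
kind: Project
name: my-project
environments:
  - name: ci
providers:
  - name: exec
    initScript: |
      # Create the virtual cluster and write a kubecontext for it,
      # without switching the current context.
      vcluster create my-vcluster --namespace my-vcluster-ns --connect=false
      vcluster connect my-vcluster --namespace my-vcluster-ns --update-current=false
  - name: kubernetes
    environments: [ci]
    dependencies: [exec]
    # Point the kubernetes provider at the context vcluster created;
    # the generated name typically follows this pattern, but verify locally.
    context: vcluster_my-vcluster_my-vcluster-ns_<host-context>
```

And the teardown could be an `exec` Run action that depends on the tests, so it only fires after they complete:

```yaml
kind: Run
name: delete-vcluster
type: exec
dependencies: [test.e2e]  # hypothetical test action name
spec:
  command: [vcluster, delete, my-vcluster, --namespace, my-vcluster-ns]
```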

twelvemo (Collaborator) commented:

But I also agree that it would be a great addition to allow spinning up an ephemeral Kubernetes cluster per command and not only per user.

salotz (Author) commented May 14, 2024

@twelvemo

> for the time being you could also create a vcluster in an initScript in an exec provider. You can then have your kubernetes provider for the environment you want to run the tests in depend on the exec provider and set the kubecontext to the one created with vcluster.
> To clean it all up you could create an exec action that destroys the vcluster and have it depend on the test action(s).

That's a good idea. I might try that out given enough pressure on me to actually set up those E2E tests. Might have to do some magic with naming the environments/namespaces with UUIDs.

twelvemo (Collaborator) commented:

> That's a good idea. I might try that out given enough pressure on me to actually set up those E2E tests. Might have to do some magic with naming the environments/namespaces with UUIDs.

I created that setup at some point as an example, but I can't find it anymore. Feel free to reach out on Discord if you run into any blockers.

twelvemo (Collaborator) commented May 15, 2024

> Might have to do some magic with naming the environments/namespaces with UUIDs.

Maybe the `git.commitHash` could be useful here, available in the provider context.
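A sketch of what that could look like in the environment config; the exact template key for the commit hash (e.g. `git.hash` vs. `git.commitHash`) is an assumption and depends on the Garden version:

```yaml
# Sketch: derive a unique but reproducible namespace from the commit hash,
# avoiding the need for random UUIDs.
environments:
  - name: ci
    defaultNamespace: project-name-${git.hash}
```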
