[Multi-Tenant Auth] Ability to customize the namespaces dropdown behavior/contents #8496
Comments
Happy to help on a PR for this.
You can specify the fallback namespace list and the default namespace via the settings page, or directly via the config map that stores settings.
This doesn't really help in a multi-tenant environment because it's static. A user's access will differ based on who is logged in, so a static list doesn't solve the UX issue this feature request addresses.
Using a custom header seems to be the only viable solution to me. It should not be hard to add on the backend side. Unfortunately, I have almost no free time lately to work on the Dashboard, so the feature would have to be contributed by someone else. I could give some pointers if needed, though.
@floreks would it need to be against the 3.0.0 branch? I don't know how close that is to release.
Yes, it would have to be a PR to the main branch. It is marked alpha only because the helm chart configuration might not be flexible enough to work for most users. Other than that, the api and frontend didn't really change; they were just split into separate containers.
I can work on this, but what is the easiest way to get a containerized dev workflow going? We use Skaffold typically, which works great w/ relatively vanilla builds, but the UI's API Dockerfile is doing some unconventional stuff (seems like it depends on
@floreks or @mlbiam - do either of you guys know how I can get the api backend working w/ a combination of in-cluster config and impersonation? My apiserver keeps responding w/ 401 unauthorized:
I'm passing the SA secret's token data via
I'm aiming for:
The permissions on the dashboard's SA are irrelevant. With impersonation, you need an authorization header with a token that is allowed to impersonate the user, plus the correct headers for who to impersonate. The dashboard's SA is only used when there's no external token or login cookie.
@mlbiam but why is it structured that way? It seems to me that one should be able to use in-cluster config (i.e. the token mounted into the pod). As of right now, it seems I have to copy-paste the SA's secret token (i.e. the same token that is already mounted into the api pod), which seems quite roundabout. I would rather the api code "fall back" (not sure if that's the right term) to using in-cluster config in the case that my auth proxy has provided the impersonation headers. Thoughts?
First, the dashboard doesn't have any mechanism for authenticating or authorizing your request. So unless you limited your scope of impersonation via RBAC to a single user, you have no way to make sure that the headers coming in are valid. Impersonation is essentially privileged access, which means anyone who gets into the dashboard and bypasses the security has access to the cluster in a privileged state. Keeping this privilege at a higher level (the reverse proxy) cuts down on the amount of risk from a breach. If there were some kind of application-level bug in the dashboard when running in the mode you suggest, the attacker could now elevate themselves with the impersonation-capable SA. Imagine a scenario where a vulnerable
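For reference, limiting the scope of impersonation via RBAC looks roughly like this (names here are hypothetical). Without `resourceNames`, the role would allow impersonating any user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: limited-impersonator   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
    # Restricts the holder to impersonating only this user.
    resourceNames: ["jane@tenant-a.example"]
```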
@mlbiam I see what you're saying. In our setup, we have configured our k8s apiserver as follows:
So, it would be preferable and safer to point the dashboard api to the k8s apiserver via a protected ingress, e.g.
However, as you can see, this encounters a TLS issue. I believe it's because the ingress uses a self-signed certificate (this is local development in my dev cluster), but one should probably be able to provide TLS CA/cert data, similarly to how kubeapps does it... maybe this is the bug/thing to focus on instead?
I don't think the dashboard has support for generic header passthrough; at least when I implemented impersonation, I think it only looks for impersonation headers to pass through.
I think if you created a kubeconfig file that was accessible from your dev environment, with your certificate and a token, you'd be all set from that perspective.
The issue w/ using a kubeconfig is that the token is dynamic or variable. Each user that interacts with the dashboard (since we have a multi-tenant cluster) will provide a different token (i.e. their own).
Yes, but it would be easy to modify the configuration to make this customizable.
Two different things. The kubeconfig used by the API server doesn't do anything besides tell the dashboard where the API server is and what cert to use. The token is only used if there's no external token or login cookie.
@mlbiam do you have an example of such a kubeconfig anywhere? I'm a little unclear on how to populate the commented-out sections:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded string containing the "ca.crt" in the TLS secret associated w/ the ingress hostname on the following line>
    server: https://cloud-app-kubernetes-ingress.cloud/k8s # This ingress host ensures all traffic is sent via our API gateway before hitting the k8s apiserver
  name: coreweave
contexts:
- context:
    cluster: coreweave
    # namespace: is this needed?
    # user: is this needed?
  name: coreweave
current-context: coreweave
kind: Config
preferences: {}
###################################
# ASSUME ALL BELOW ARE COMMENTED OUT
# What "user" will this kubeconfig operate as?
users:
- name: minikube
  user:
    client-certificate: /home/hector/.minikube/profiles/minikube/client.crt
    client-key: /home/hector/.minikube/profiles/minikube/client.key
users:
- name: token-NNRMoro77pGm6FsZvkhU
  user:
    token: JpuaBbZvDjHR3xqayJ8YYR2NKabQrcJaUWnu5jm3
I ended up adding this hack; hopefully it's clear why I need it, but LMK! f6833e8#diff-53e31565fe24dd7b0ea4a3fd2d6fc93a50cbfc225ac3242a6cf268a44e5fa584R371-R376
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What would you like to be added?
We run a multi-tenant k8s cluster and use namespaces to segregate customers from one another.
I've set up the k8s dashboard UI to proxy auth to the logged-in user and use their credentials. This part is working as expected.
The remaining glaring issue with rolling this out to customers is that I don't have the ability to configure or customize the behavior of the namespaces dropdown list: `default` is confusing here, and that namespace is most definitely not something our customers will be able to access.
We worked with the kubeapps team and implemented a fallback mechanism there for something similar - would something like this be possible in the k8s dashboard UI? Or is there some existing mechanism by which we can configure or even disable the namespaces dropdown?
Why is this needed?
For better multi-tenant cluster support, we need the ability to configure some sort of fallback so that our customers can rely on the UI's namespaces dropdown to contain the list of namespaces they are able to access.
This feature is necessary for multi-tenant clusters in which isolation is done via namespaces (i.e. customer A has namespace `tenant-a`, customer B has namespace `tenant-b`, etc., and neither has much cluster-scoped RBAC as it is a shared cluster).
Here is how the api namespace endpoint currently responds in e.g. a shared cluster where users' namespaces are segregated (and those users do not have list-namespaces RBAC in the cluster):
With this response, the user sees the following namespace dropdown:
So in summary: in clusters where users do not have RBAC to list namespaces, there needs to be a programmatic way to modify the list of namespaces returned by the `/api/v1/namespace` endpoint to answer the question "what namespaces does this [auth-proxied] user have access to?" An acceptable fallback alternative would be to make the namespaces list setting dynamic and/or configurable per-user as well.
Kubeapps has implemented something similar (we worked w/ them on it): you can configure the values `trustedNamespaces.headerName` and `trustedNamespaces.headerPattern`, which allow the backend API to extract "valid" accessible namespaces from another proxied request header. For kube dashboard, since you already support proxying the `Impersonate-*` k8s request headers, one of those could probably be used; or we could implement something similar to kubeapps where the proxied header name is entirely configurable.
https://github.com/vmware-tanzu/kubeapps/blob/main/chart/kubeapps/values.yaml#L1614-L1626