
[kube-prometheus-stack] Restrict secrets access to namespace #4156

Open
ghost opened this issue Jan 18, 2024 · 14 comments · May be fixed by #4283
Labels
enhancement New feature or request

Comments

@ghost

ghost commented Jan 18, 2024

Is your feature request related to a problem?

Currently, ClusterRoleBindings are used to attach the ClusterRole to the service account created by the Helm chart. Read access to the secrets API is configured in the ClusterRole. This allows a potential attacker to read secrets from all namespaces in the Kubernetes cluster.

Describe the solution you'd like.

Instead of a ClusterRoleBinding, a RoleBinding should be used so that secrets can only be read in the release's own namespace.
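
For illustration, a namespaced grant of this kind could look roughly like the following (resource names, namespace, and the exact rule are placeholders, not the chart's actual manifests):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-prometheus-stack-operator   # illustrative name
  namespace: monitoring                  # the release namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-prometheus-stack-operator
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-prometheus-stack-operator
subjects:
  - kind: ServiceAccount
    name: kube-prometheus-stack-operator  # the chart's service account
    namespace: monitoring

The important difference from today's setup is that both the Role and the RoleBinding are namespaced objects, so the service account cannot read secrets outside its own namespace.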

Describe alternatives you've considered.

NONE

Additional context.

Steps to reproduce:

  1. Install Helm Chart in a Kubernetes cluster with global.rbac.create: true
  2. Create a secret in a different namespace than the one the helm release was installed in
  3. Copy kubectl into a running container, e.g. grafana.
    kubectl cp kubectl kube-prometheus-stack-grafana-57b9499c74-5jgzl:/tmp -n monitoring -c grafana
  4. Connect to the container the kubectl binary was uploaded to
  5. Execute chmod +x /tmp/kubectl
  6. Show which ServiceAccount is used
    /tmp/kubectl auth whoami
  7. View the secret from a different namespace
    /tmp/kubectl get secrets -n kube-system
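
For context, the pattern that makes step 7 possible is a cluster-scoped secrets-read rule bound to the chart's service account via a ClusterRoleBinding; a minimal sketch of that pattern (names and verbs are illustrative, not copied from the chart's manifests):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-secrets-reader            # illustrative name
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-secrets-reader            # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-secrets-reader
subjects:
  - kind: ServiceAccount
    name: example-service-account          # e.g. the grafana or operator service account
    namespace: monitoring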
ghost added the enhancement (New feature or request) label on Jan 18, 2024
@ghost
Author

ghost commented Jan 18, 2024

For the grafana subchart, you can fix this issue by setting grafana.rbac.namespaced: true
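
In values.yaml form (a minimal sketch):

grafana:
  rbac:
    namespaced: true   # grafana creates a Role/RoleBinding in the release namespace instead of cluster-wide RBAC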

@jkroepke
Member

Is it correct that you are looking for a toggle which reduces the scope of the prometheus-operator to the current namespace, the same as grafana.rbac.namespaced?

@pfrydids

There are some closed stale issues that relate to this as well

#3080
#3158

@pfrydids

Is it correct that you are looking for a toggle which reduces the scope of the prometheus-operator to the current namespace, the same as grafana.rbac.namespaced?

That would be a good solution. The grafana Helm chart also supports specifying your own cluster role or namespaced role (if the grafana.rbac.namespaced parameter is set to true).

For my use case though something like grafana.rbac.namespaced would be sufficient.

@pfrydids

@zaljic when I set grafana.rbac.namespaced to true there were no errors, but the dashboards that come with kube-prometheus-stack no longer show up; the list is empty.

@pfrydids

The following cluster role, role, and role binding can be applied on top of the Helm deployment of kube-prometheus-stack and seem to work (though without grafana.rbac.namespaced set to true):

kube-prometheus-stack-operator-.zip

Simply running kubectl apply on the files (in the correct sequence) should be sufficient after deploying the Helm chart.

The cluster role and role will surely require refinement, so I'm not suggesting using these in a production environment!

@jkroepke
Member

I already have something in a working draft.

@petenorth

Any update on this?

@jkroepke
Member

Any update on this?

You can test #4283

@pfrydids

pfrydids commented Apr 16, 2024

Any update on this?

You can test #4283

I have cloned your repo and:

  • in the charts directory, ran mv kube-state-metrics prometheus-node-exporter prometheus-windows-exporter grafana kube-prometheus-stack/chart
  • copied in the grafana chart directory as well after cloning it, etc.
  • then ran tar -czvf kube-prometheus-stack.tgz kube-prometheus-stack
  • then used this helm chart archive with
--set prometheusOperator.rbac.namespaced=true
--set prometheusOperator.namespaces.releaseNamespace=true
--set prometheusOperator.kubeletService.enabled=false

the deployment of the helm chart fails with

clusterroles.rbac.authorization.k8s.io "kube-prometheus-stack-prometheus" already exists

nevertheless I do see

>kubectl get role -n monitoring
NAME                             CREATED AT
kube-prometheus-stack-grafana    2024-04-16T12:17:21Z
kube-prometheus-stack-operator   2024-04-16T12:17:21Z

and

>kubectl get clusterrole | grep kube-prometheus-stack
kube-prometheus-stack-grafana-clusterrole                              2024-04-16T12:17:21Z
kube-prometheus-stack-kube-state-metrics                               2024-04-16T12:17:21Z
kube-prometheus-stack-operator                                         2024-04-16T12:17:21Z
kube-prometheus-stack-prometheus                                       2024-04-16T12:17:21Z

Also note that ClusterRole/kube-prometheus-stack-kube-state-metrics is problematic as well, as it allows listing all secrets.

From https://kubernetes.io/docs/concepts/security/rbac-good-practices/#listing-secrets

Listing secrets
It is generally clear that allowing get access on Secrets will allow a user to read their contents. It is also important to note that list and watch access also effectively allow for users to reveal the Secret contents. For example, when a List response is returned (for example, via kubectl get secrets -A -o yaml), the response includes the contents of all Secrets.
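
Following that guidance, a tighter rule would grant get on specifically named secrets and drop list/watch entirely; a sketch with placeholder names:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-named-secret                  # illustrative name
  namespace: monitoring
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["alertmanager-config"] # hypothetical secret name
    verbs: ["get"]                         # no list/watch, so other secret contents are not revealed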

@jkroepke
Member

jkroepke commented Apr 16, 2024

Also note that ClusterRole/kube-prometheus-stack-kube-state-metrics is problematic as well, as it allows listing all secrets.

Sadly, kube-state-metrics and grafana are other dependencies which need to be configured separately. In the kube-prometheus-stack chart, I do not have the ability to restrict their cluster roles. You can try to do things with kube-state-metrics.rbac.useClusterRole=false.

@jkroepke
Member

Here is a fully untested namespaced values.yaml:

---
grafana:
  rbac:
    namespaced: true

kube-state-metrics:
  rbac:
    useClusterRole: false

prometheusOperator:
  rbac:
    namespaced: true
  namespaces:
    releaseNamespace: true
  kubeletService:
    enabled: false

prometheus:
  rbac:
    namespaced: true
 
kubelet:
  enabled: false

coreDns:
  enabled: false

kubeDns:
  enabled: false

kubeApiServer:
  enabled: false

kubeControllerManager:
  enabled: false

kubeEtcd:
  enabled: false

kubeProxy:
  enabled: false

kubeScheduler:
  enabled: false
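
If this were saved as e.g. values-namespaced.yaml (hypothetical file name), it could be passed to helm install/upgrade with -f values-namespaced.yaml; as stated above, the values are untested.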

@pfrydids

@jkroepke This allows for a clean deployment, with the cluster role no longer having cluster-wide secrets read permissions.

BUT

when I set grafana.rbac.namespaced to true there were no errors, but the dashboards that come with kube-prometheus-stack no longer show up; the list is empty.

@jkroepke
Member

I tested that on a clean installation and I see the dashboards as expected.
