
v0.1.3 requires additional rbac permissions #142

Open
bentrombley opened this issue Apr 14, 2020 · 13 comments

Comments

@bentrombley

bentrombley commented Apr 14, 2020

We just upgraded from 0.1.0 to 0.1.3 and started seeing errors in our logs like:

E0413 20:07:05.845509 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:kube-system:custom-metrics-apiserver" cannot list resource "configmaps" in API group "" in the namespace "kube-system"

I'm not sure what changed, but adding this apiGroups section to the rules for custom-metrics-resource-collector fixed it for us:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-collector
rules:
...
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - watch

Expected Behavior

There should be no errors/warnings in the logs.

Actual Behavior

See above logs. This caused the HorizontalPodAutoscalers to fail.

Steps to Reproduce the Problem

  1. Create installation following the configuration in the docs/ folder.
  2. Use either AWS SQS queue or HTTP collector (following docs in the README)
  3. Look at the logs on the kube-metrics-adapter pod.

Specifications

  • Version: Kubernetes v1.15.7
@bentrombley
Author

Sorry, I'd submit a pull request, but that would require a lengthy approval process from my employer.

@mikkeloscar
Contributor

Hi @bentrombley, there should be a default role in the cluster called extension-apiserver-authentication-reader, which we bind to here:

name: extension-apiserver-authentication-reader

Does your cluster not have this role by default? From what I can see, it should also be present in v1.15.7.
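
For reference, a minimal sketch of such a binding, assuming the adapter's service account is custom-metrics-apiserver in kube-system as in the error message above (the binding name here is hypothetical; adjust names to your setup):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # hypothetical name for illustration
  name: custom-metrics-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: kube-system

Because extension-apiserver-authentication-reader is a namespaced Role in kube-system, a RoleBinding to it only grants access within that namespace.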

@zsoltbaloghmaden

I experienced the same issue; after I added that section, it is working perfectly.

@RRAlex

RRAlex commented Jun 18, 2020

Same here, adding this section to the clusterrole/custom-metrics-resource-collector made the metrics work on my k8s (1.18.3).
What confuses me is that I installed a cluster a month ago (1.18.2), without that part, and it works... 🤔

@edsonmarquezani

edsonmarquezani commented Jun 24, 2020

Hello, I came up with this fix also, but I'm still getting the following logs.

I0624 19:40:42.941182       1 serving.go:306] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0624 19:40:43.437556       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0624 19:40:43.437629       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found

In debug mode I can see the configmap being read, though.

prageethw added a commit to prageethw/kube-metrics-adapter that referenced this issue Jul 20, 2020
This fix relates to the original repo's issue
zalando-incubator#142
@mikkeloscar
Contributor

Can you check if you have the role extension-apiserver-authentication-reader in your cluster?

I have just created a new cluster based on v1.18.6 and it's there by default:

kubectl --namespace kube-system get role extension-apiserver-authentication-reader
NAME                                        CREATED AT
extension-apiserver-authentication-reader   2020-07-21T19:14:01Z

Or could it be that you're not deploying the kube-metrics-adapter to the kube-system namespace but to another one?

@prageethw, @edsonmarquezani, @bentrombley

@prageethw

@mikkeloscar
Yes, the default role exists in the kube-system namespace. The issue seems to be with custom-metrics-resource-reader and custom-metrics-resource-collector, which get created in the namespace where I install the adapter.

@mikkeloscar
Contributor

@prageethw Is installing to kube-system an option for you? Then we could change the docs to include the namespace for all resources. If not, we need to adapt the setup for users not installing to kube-system, where extension-apiserver-authentication-reader is not available. But this also means the configmap is not available, and thus I would expect some things not to work, as the configmap includes the CA of the apiserver.

@prageethw

@mikkeloscar Thanks for the reply. Yeah, installing in kube-system is always an option, though I'm not sure that's best practice; generally I like to keep kube-system untouched, though I guess everyone is different. What stops us from granting enough access rights to custom-metrics-resource-reader and custom-metrics-resource-collector instead of forcing users to install to kube-system?

@mikkeloscar
Contributor

What stops us from granting enough access rights to custom-metrics-resource-reader and custom-metrics-resource-collector instead of forcing users to install to kube-system?

What I want to avoid is documenting more access than is needed. With the change suggested in this thread and in #181, the kube-metrics-adapter would get access to ALL configmaps instead of just the single one, as intended.
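
For context, the built-in extension-apiserver-authentication-reader role is scoped approximately like this (exact verbs vary by Kubernetes version); note that resourceNames can restrict get, but not list or watch, which is why the reflector's list call in the original error cannot be satisfied by a name-scoped rule:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  # built-in role, reproduced here for reference
  name: extension-apiserver-authentication-reader
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - extension-apiserver-authentication
  verbs:
  - get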

@prageethw

@mikkeloscar Fair point. TBH, I think that is an extreme scenario: if someone can access custom-metrics-resource-collector and custom-metrics-resource-reader, most likely they are already inside the cluster.

@mikkeloscar
Contributor

@mikkeloscar Fair point. TBH, I think that is an extreme scenario: if someone can access custom-metrics-resource-collector and custom-metrics-resource-reader, most likely they are already inside the cluster.

I want to document the best practice, which is the least amount of permissions. If users need something custom or more relaxed, they're free to use a custom role setup. I'm also fine documenting that if we clearly state the reason (e.g. not deploying in kube-system). However, considering that the original issue clearly describes a problem when the service account is in kube-system, and that we only document kube-system as the default setup right now, I suspect something else is wrong if it doesn't work for folks without these extra changes.

Does anyone in this thread deploy to kube-system and still have the problem with the default RBAC roles we have documented?

@sharkymcdongles

I prefer to keep all monitoring-related stuff in the monitoring namespace. I have moved the adapter for now, though, because it seems the only way to get this to work is to have it in kube-system: granting it the right RBAC elsewhere would mean a cluster-wide ConfigMap privilege instead of the single-ConfigMap privilege documented here.


7 participants