Quickstart for Helm-based Operators is not working on OpenShift #3767

Closed
nmasse-itix opened this issue Aug 20, 2020 · 11 comments · Fixed by #3779 or #4105

@nmasse-itix

nmasse-itix commented Aug 20, 2020

Bug Report

What did you do?
I followed https://master.sdk.operatorframework.io/docs/building-operators/helm/quickstart/

mkdir nginx-operator
cd nginx-operator
operator-sdk init --plugins=helm
operator-sdk create api --group demo --version v1 --kind Nginx
make docker-build docker-push IMG=docker.io/nmasse/nginx-operator:test
make install
make deploy IMG=docker.io/nmasse/nginx-operator:test
kubectl apply -f config/samples/demo_v1_nginx.yaml
kubectl get all -l "app.kubernetes.io/instance=nginx-sample"

What did you expect to see?

I expected an nginx pod to come up, but it never did.

What did you see instead? Under which circumstances?

The operator logs show the error: failed to install release: serviceaccounts "nginx-sample" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on.

Full log file:
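(The log below was collected with a command along these lines; the deployment and namespace names assume the default scaffolding.)

$ kubectl logs deployment.apps/nginx-operator-controller-manager -n nginx-operator-system -c manager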

{"level":"info","ts":1597936277.6508987,"logger":"cmd","msg":"Version","Go Version":"go1.13.15","GOOS":"linux","GOARCH":"amd64","helm-operator":"v1.0.0"}
{"level":"info","ts":1597936277.6517258,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1597936281.6490111,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1597936281.6504083,"logger":"helm.controller","msg":"Watching resource","apiVersion":"demo.my.domain/v1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
I0820 15:11:21.651919       1 leaderelection.go:242] attempting to acquire leader lease  nginx-operator-system/nginx-operator...
{"level":"info","ts":1597936281.7339883,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0820 15:11:39.625672       1 leaderelection.go:252] successfully acquired lease nginx-operator-system/nginx-operator
{"level":"info","ts":1597936299.6272435,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: demo.my.domain/v1, Kind=Nginx"}
{"level":"info","ts":1597936299.7290106,"logger":"controller","msg":"Starting Controller","controller":"nginx-controller"}
{"level":"info","ts":1597936299.7297232,"logger":"controller","msg":"Starting workers","controller":"nginx-controller","worker count":4}
{"level":"error","ts":1597936394.6697314,"logger":"helm.controller","msg":"Release failed","namespace":"nginx-operator","name":"nginx-sample","apiVersion":"demo.my.domain/v1","kind":"Nginx","release":"nginx-sample","error":"failed to install release: serviceaccounts \"nginx-sample\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\ngithub.com/operator-framework/operator-sdk/internal/helm/controller.HelmOperatorReconciler.Reconcile\n\tsrc/github.com/operator-framework/operator-sdk/internal/helm/controller/reconcile.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1597936394.6907682,"logger":"controller","msg":"Reconciler error","controller":"nginx-controller","name":"nginx-sample","namespace":"nginx-operator","error":"failed to install release: serviceaccounts \"nginx-sample\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:237\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:90"}

Environment

  • operator-sdk version:
operator-sdk version: "v1.0.0", commit: "d7d5e0cd6cf5468bb66e0849f08fda5bf557f4fa", kubernetes version: "v1.18.2", go version: "go1.13.11 darwin/amd64", GOOS: "darwin", GOARCH: "amd64"
helm-operator version: "v1.0.0", commit: "d7d5e0cd6cf5468bb66e0849f08fda5bf557f4fa", kubernetes version: "v1.18.2", go version: "go1.13.11 darwin/amd64", GOOS: "darwin", GOARCH: "amd64"
  • Kubernetes version information:

I'm using OpenShift Version 4.3.5, which is based on Kubernetes v1.16.2.
I'm the cluster-admin of this cluster.

  • Kubernetes cluster kind: 3 masters, 3 workers, in VM.

  • Are you writing your operator in ansible, helm, or go?

Helm

Possible Solution

The generated config/rbac/role.yaml is missing privileges on nginxes/finalizers.
The operator-sdk create api command that generates this file needs to be updated to include this privilege since it is required on OpenShift.

##
## Rules for demo.my.domain/v1, Kind: Nginx
##
- apiGroups:
  - demo.my.domain
  resources:
  - nginxes
  - nginxes/status
  - nginxes/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

Since the generated objects include an ownerReference field (with blockOwnerDeletion: true) that references the created CR, on OpenShift the operator needs permission to set finalizers on the CR it owns.

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-name: nginx-sample
    meta.helm.sh/release-namespace: nginx-operator
  creationTimestamp: "2020-08-20T15:19:08Z"
  labels:
    app.kubernetes.io/instance: nginx-sample
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: nginx-0.1.0
  name: nginx-sample
  namespace: nginx-operator
  ownerReferences:
  - apiVersion: demo.my.domain/v1
    blockOwnerDeletion: true
    controller: true
    kind: Nginx
    name: nginx-sample
    uid: c5d7e996-013f-40ca-bd19-14ba73728eaf
  resourceVersion: "150618365"
  selfLink: /api/v1/namespaces/nginx-operator/serviceaccounts/nginx-sample
  uid: ce916607-58dd-4c1f-bf1e-dbf6f6e86c04
secrets:
- name: nginx-sample-token-99vgm
- name: nginx-sample-dockercfg-rrzb7
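
One way to double-check the missing permission is to ask the API server whether the operator's service account may update the finalizers subresource. This is only a sketch: replace <operator-sa> with the service account actually bound in config/rbac/role_binding.yaml.

$ kubectl auth can-i update nginxes.demo.my.domain --subresource=finalizers \
    --as=system:serviceaccount:nginx-operator-system:<operator-sa>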
@camilamacedo86
Contributor

camilamacedo86 commented Aug 20, 2020

Hi @nmasse-itix,

To follow the tutorial and quickstart you need to have cluster-admin permissions. The tool scaffolds k8s manifests by default that require this permission. You can change the scope of your project as described in my comment #3733 (comment).

The need to create a doc explaining the scope per type is already tracked. See my comment in #3447 (comment).
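
For reference, a minimal sketch of how the watch scope could be narrowed (just an illustration, not necessarily what the linked comment describes): set the WATCH_NAMESPACE environment variable, which the operator log above mentions, on the manager container, e.g. via the downward API:

      containers:
      - name: manager
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace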

@camilamacedo86 camilamacedo86 added language/helm Issue is related to a Helm operator project triage/needs-information Indicates an issue needs more information in order to work on it. labels Aug 20, 2020
@nmasse-itix
Author

Hi @camilamacedo86 !

Thanks for jumping in and helping me.

I re-did the whole tutorial (after proper cleanup of course) and I do confirm that:

  1. I am cluster-admin, and connected to the cluster when doing the operator-sdk create api that scaffolds k8s manifests.
$ oc whoami
nmasse@redhat.com

$ oc describe clusterrolebinding nmasse@redhat.com-cluster-admin 
Name:         nmasse@redhat.com-cluster-admin
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind  Name               Namespace
  ----  ----               ---------
  User  nmasse@redhat.com  

$ operator-sdk create api --group demo --version v1 --kind Nginx
Created helm-charts/nginx
Generating RBAC rules
WARN[0004] The RBAC rules generated in config/rbac/role.yaml are based on the chart's default manifest. Some rules may be missing for resources that are only enabled with custom values, and some existing rules may be overly broad. Double check the rules generated in config/rbac/role.yaml to ensure they meet the operator's permission requirements. 
  2. When doing so (operator-sdk create api while being connected as cluster-admin), the generated role.yaml does NOT contain nginxes/finalizers.

  3. The generated operator fails with the same error as before.

So I followed the tutorial step by step while being cluster-admin, and the described steps lead to a non-working operator.

Can we fix the operator-sdk create api command to scaffold the proper RBAC permissions?

@camilamacedo86 camilamacedo86 self-assigned this Aug 21, 2020
@camilamacedo86
Contributor

camilamacedo86 commented Aug 21, 2020

Hi @nmasse-itix,

I just followed the tutorial and everything worked. The finalizer rule is not required for the steps performed so far. I also checked it with the Nginx helm-chart as well as the Memcached operator example in the samples. You can also check here the steps used to generate that sample. For that one we need to add custom permissions, but it does NOT include the finalizer either.

You might not be doing the same steps as I am. Please check all the steps below carefully and let us know.

Create a Helm Nginx project without a helm-chart (Tutorial)

$ mkdir nginx-operator
$ cd nginx-operator
$ operator-sdk init --plugins=helm --domain=com --group=example --version=v1alpha1 --kind=Nginx
$ export IMG=quay.io/camilamacedo86/nginx-operator:v0.0.1
$ make docker-build docker-push IMG=$IMG
$ make deploy IMG=$IMG

I applied the CR in the same namespace as the Memcached one, just to make it easier to get the output for you here.

$ kubectl apply -f config/samples/example_v1alpha1_nginx.yaml -n nginx-operator-system
nginx.example.com/nginx-sample created
  • Checking that all is OK
 $ kubectl get deployment -n nginx-operator-system
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           7m48s
nginx-sample                        1/1     1            1           7m9s
  • Looking for an error in the Logs:

[Screenshot of the operator logs]

  • See the full logs
 $ kubectl logs deployment.apps/nginx-operator-controller-manager -n nginx-operator-system -c manager
{"level":"info","ts":1598010428.590873,"logger":"cmd","msg":"Version","Go Version":"go1.13.15","GOOS":"linux","GOARCH":"amd64","helm-operator":"v1.0.0"}
{"level":"info","ts":1598010428.591136,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1598010429.489638,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1598010429.4903362,"logger":"helm.controller","msg":"Watching resource","apiVersion":"example.com/v1alpha1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
I0821 11:47:09.490923       1 leaderelection.go:242] attempting to acquire leader lease  nginx-operator-system/nginx-operator...
{"level":"info","ts":1598010429.491019,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0821 11:47:09.510765       1 leaderelection.go:252] successfully acquired lease nginx-operator-system/nginx-operator
{"level":"info","ts":1598010429.5113006,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: example.com/v1alpha1, Kind=Nginx"}
{"level":"info","ts":1598010429.689679,"logger":"controller","msg":"Starting Controller","controller":"nginx-controller"}
{"level":"info","ts":1598010429.689745,"logger":"controller","msg":"Starting workers","controller":"nginx-controller","worker count":6}
{"level":"info","ts":1598010432.5003393,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1598010432.6018758,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.com/v1alpha1","ownerKind":"Nginx","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":1598010432.6023726,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1598010432.7882905,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.com/v1alpha1","ownerKind":"Nginx","apiVersion":"v1","kind":"Service"}
{"level":"info","ts":1598010432.7891157,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1598010432.9898663,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.com/v1alpha1","ownerKind":"Nginx","apiVersion":"apps/v1","kind":"Deployment"}
{"level":"info","ts":1598010432.989943,"logger":"helm.controller","msg":"Installed release","namespace":"nginx-operator-system","name":"nginx-sample","apiVersion":"example.com/v1alpha1","kind":"Nginx","release":"nginx-sample"}
+---
+# Source: nginx/templates/serviceaccount.yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: nginx-sample
+  labels:
+    helm.sh/chart: nginx-0.1.0
+    app.kubernetes.io/name: nginx
+    app.kubernetes.io/instance: nginx-sample
+    app.kubernetes.io/version: "1.16.0"
+    app.kubernetes.io/managed-by: Helm
+---
+# Source: nginx/templates/service.yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-sample
+  labels:
+    helm.sh/chart: nginx-0.1.0
+    app.kubernetes.io/name: nginx
+    app.kubernetes.io/instance: nginx-sample
+    app.kubernetes.io/version: "1.16.0"
+    app.kubernetes.io/managed-by: Helm
+spec:
+  type: ClusterIP
+  ports:
+    - port: 80
+      targetPort: http
+      protocol: TCP
+      name: http
+  selector:
+    app.kubernetes.io/name: nginx
+    app.kubernetes.io/instance: nginx-sample
+---
+# Source: nginx/templates/deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-sample
+  labels:
+    helm.sh/chart: nginx-0.1.0
+    app.kubernetes.io/name: nginx
+    app.kubernetes.io/instance: nginx-sample
+    app.kubernetes.io/version: "1.16.0"
+    app.kubernetes.io/managed-by: Helm
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: nginx
+      app.kubernetes.io/instance: nginx-sample
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: nginx
+        app.kubernetes.io/instance: nginx-sample
+    spec:
+      serviceAccountName: nginx-sample
+      securityContext:
+        {}
+      containers:
+        - name: nginx
+          securityContext:
+            {}
+          image: "nginx:1.16.0"
+          imagePullPolicy: IfNotPresent
+          ports:
+            - name: http
+              containerPort: 80
+              protocol: TCP
+          livenessProbe:
+            httpGet:
+              path: /
+              port: http
+          readinessProbe:
+            httpGet:
+              path: /
+              port: http
+          resources:
+            {}

{"level":"info","ts":1598010435.7897434,"logger":"helm.controller","msg":"Reconciled release","namespace":"nginx-operator-system","name":"nginx-sample","apiVersion":"example.com/v1alpha1","kind":"Nginx","release":"nginx-sample"}
{"level":"info","ts":1598010438.089628,"logger":"helm.controller","msg":"Reconciled release","namespace":"nginx-operator-system","name":"nginx-sample","apiVersion":"example.com/v1alpha1","kind":"Nginx","release":"nginx-sample"}
  • Now let's check the default roles generated ($ cat config/rbac/role.yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
##
## Base operator rules
##
# We need to get namespaces so the operator can read namespaces to ensure they exist
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
# We need to manage Helm release secrets
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - "*"
# We need to create events on CRs about things happening during reconciliation
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create

##
## Rules for example.com/v1alpha1, Kind: Nginx
##
- apiGroups:
  - example.com
  resources:
  - nginxes
  - nginxes/status
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- verbs:
  - "*"
  apiGroups:
  - ""
  resources:
  - "serviceaccounts"
  - "services"
- verbs:
  - "*"
  apiGroups:
  - "apps"
  resources:
  - "deployments"

# +kubebuilder:scaffold:rules

Note that there is no reason at all why we would need the finalizer permission in this scenario.

Create a Helm Nginx project with helm-charts/nginx-ingress

  • Create the project and API
$ mkdir nginx-operator
$ cd nginx-operator
$ operator-sdk init --plugins=helm
$ operator-sdk create api --group=example --version=v1alpha1 --kind=Nginx --helm-chart=stable/nginx-ingress
$ export IMG=quay.io/camilamacedo86/nginx-operator:v0.0.2
$ make docker-build docker-push IMG=$IMG
$ make deploy IMG=$IMG
  • Check operator
$ kubectl get deployment -n nginx-operator-system
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           24s
  • Applied the Sample in another namespace (default)
$ kubectl apply -f config/samples/example_v1alpha1_nginx.yaml 
nginx.example.my.domain/nginx-sample created
  • Let's check all resources:
 $ kubectl get all
NAME                                                             READY   STATUS    RESTARTS   AGE
pod/nginx-sample-nginx-ingress-controller-769bb677d8-zfrjc       1/1     Running   0          38s
pod/nginx-sample-nginx-ingress-default-backend-c5bfbb7b7-tdfhp   1/1     Running   0          38s

NAME                                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                                   ClusterIP      10.96.0.1       <none>        443/TCP                      3m17s
service/nginx-sample-nginx-ingress-controller        LoadBalancer   10.98.151.75    <pending>     80:31673/TCP,443:31868/TCP   38s
service/nginx-sample-nginx-ingress-default-backend   ClusterIP      10.99.190.214   <none>        80/TCP                       38s

NAME                                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-sample-nginx-ingress-controller        1/1     1            1           38s
deployment.apps/nginx-sample-nginx-ingress-default-backend   1/1     1            1           38s

NAME                                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-sample-nginx-ingress-controller-769bb677d8       1         1         1       38s
replicaset.apps/nginx-sample-nginx-ingress-default-backend-c5bfbb7b7   1         1         1       38s
$ kubectl get all -n nginx-operator-system
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/nginx-operator-controller-manager-7d474697cb-nxljh   2/2     Running   0          2m23s

NAME                                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/nginx-operator-controller-manager-metrics-service   ClusterIP   10.107.189.44   <none>        8443/TCP   2m23s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-operator-controller-manager   1/1     1            1           2m23s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-operator-controller-manager-7d474697cb   1         1         1       2m23s
  • Let's see if the logs have an error (again, none found):
 $ kubectl logs deployment.apps/nginx-operator-controller-manager  -n nginx-operator-system -c manager | grep error
  • Let's get the full log:
 $ kubectl logs deployment.apps/nginx-operator-controller-manager -n nginx-operator-system -c manager
{"level":"info","ts":1598012257.9026465,"logger":"cmd","msg":"Version","Go Version":"go1.13.15","GOOS":"linux","GOARCH":"amd64","helm-operator":"v1.0.0"}
{"level":"info","ts":1598012257.9032013,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1598012258.6016066,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1598012258.6034672,"logger":"helm.controller","msg":"Watching resource","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
I0821 12:17:38.604255       1 leaderelection.go:242] attempting to acquire leader lease  nginx-operator-system/nginx-operator...
{"level":"info","ts":1598012258.6051364,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0821 12:17:38.709600       1 leaderelection.go:252] successfully acquired lease nginx-operator-system/nginx-operator
{"level":"info","ts":1598012258.7101042,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: example.my.domain/v1alpha1, Kind=Nginx"}
{"level":"info","ts":1598012258.8109338,"logger":"controller","msg":"Starting Controller","controller":"nginx-controller"}
{"level":"info","ts":1598012258.8110785,"logger":"controller","msg":"Starting workers","controller":"nginx-controller","worker count":6}
{"level":"info","ts":1598012299.5317564,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1598012300.032175,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":1598012300.033604,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"}
{"level":"info","ts":1598012300.4323888,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding"}
{"level":"info","ts":1598012300.4328387,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1598012300.634123,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"v1","kind":"Service"}
{"level":"info","ts":1598012300.6351554,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1598012300.8316984,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"apps/v1","kind":"Deployment"}
{"level":"info","ts":1598012300.8321147,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRole"}
{"level":"info","ts":1598012301.0329847,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole"}
{"level":"info","ts":1598012301.0337036,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=Role"}
{"level":"info","ts":1598012301.1344287,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role"}
{"level":"info","ts":1598012301.134922,"logger":"controller","msg":"Starting EventSource","controller":"nginx-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=RoleBinding"}
{"level":"info","ts":1598012301.3326752,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"example.my.domain/v1alpha1","ownerKind":"Nginx","apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding"}
{"level":"info","ts":1598012301.334507,"logger":"helm.controller","msg":"Installed release","namespace":"default","name":"nginx-sample","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","release":"nginx-sample"}
+---
+# Source: nginx-ingress/templates/controller-serviceaccount.yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress
+---
+# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress-backend
+---
+# Source: nginx-ingress/templates/clusterrole.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - configmaps
+      - endpoints
+      - nodes
+      - pods
+      - secrets
+    verbs:
+      - list
+      - watch
+  - apiGroups:
+      - ""
+    resources:
+      - nodes
+    verbs:
+      - get
+  - apiGroups:
+      - ""
+    resources:
+      - services
+    verbs:
+      - get
+      - list
+      - update
+      - watch
+  - apiGroups:
+      - extensions
+      - "networking.k8s.io" # k8s 1.14+
+    resources:
+      - ingresses
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups:
+      - ""
+    resources:
+      - events
+    verbs:
+      - create
+      - patch
+  - apiGroups:
+      - extensions
+      - "networking.k8s.io" # k8s 1.14+
+    resources:
+      - ingresses/status
+    verbs:
+      - update
+---
+# Source: nginx-ingress/templates/clusterrolebinding.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: nginx-sample-nginx-ingress
+subjects:
+  - kind: ServiceAccount
+    name: nginx-sample-nginx-ingress
+    namespace: default
+---
+# Source: nginx-ingress/templates/controller-role.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - namespaces
+    verbs:
+      - get
+  - apiGroups:
+      - ""
+    resources:
+      - configmaps
+      - pods
+      - secrets
+      - endpoints
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups:
+      - ""
+    resources:
+      - services
+    verbs:
+      - get
+      - list
+      - update
+      - watch
+  - apiGroups:
+      - extensions
+      - "networking.k8s.io" # k8s 1.14+
+    resources:
+      - ingresses
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups:
+      - extensions
+      - "networking.k8s.io" # k8s 1.14+
+    resources:
+      - ingresses/status
+    verbs:
+      - update
+  - apiGroups:
+      - ""
+    resources:
+      - configmaps
+    resourceNames:
+      - ingress-controller-leader-nginx
+    verbs:
+      - get
+      - update
+  - apiGroups:
+      - ""
+    resources:
+      - configmaps
+    verbs:
+      - create
+  - apiGroups:
+      - ""
+    resources:
+      - endpoints
+    verbs:
+      - create
+      - get
+      - update
+  - apiGroups:
+      - ""
+    resources:
+      - events
+    verbs:
+      - create
+      - patch
+---
+# Source: nginx-ingress/templates/controller-rolebinding.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: nginx-sample-nginx-ingress
+subjects:
+  - kind: ServiceAccount
+    name: nginx-sample-nginx-ingress
+    namespace: default
+---
+# Source: nginx-ingress/templates/controller-service.yaml
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    component: "controller"
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress-controller
+spec:
+  ports:
+    - name: http
+      port: 80
+      protocol: TCP
+      targetPort: http
+    - name: https
+      port: 443
+      protocol: TCP
+      targetPort: https
+  selector:
+    app: nginx-ingress
+    release: nginx-sample
+    app.kubernetes.io/component: controller
+  type: "LoadBalancer"
+---
+# Source: nginx-ingress/templates/default-backend-service.yaml
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    component: "default-backend"
+    heritage: Helm
+    release: nginx-sample
+  name: nginx-sample-nginx-ingress-default-backend
+spec:
+  ports:
+    - name: http
+      port: 80
+      protocol: TCP
+      targetPort: http
+  selector:
+    app: nginx-ingress
+    release: nginx-sample
+    app.kubernetes.io/component: default-backend
+  type: "ClusterIP"
+---
+# Source: nginx-ingress/templates/controller-deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+    app.kubernetes.io/component: controller
+  name: nginx-sample-nginx-ingress-controller
+  annotations:
+    {}
+spec:
+  selector:
+    matchLabels:
+      app: nginx-ingress
+      release: nginx-sample
+  replicas: 1
+  revisionHistoryLimit: 10
+  strategy:
+    {}
+  minReadySeconds: 0
+  template:
+    metadata:
+      labels:
+        app: nginx-ingress
+        release: nginx-sample
+        component: "controller"
+        app.kubernetes.io/component: controller
+    spec:
+      dnsPolicy: ClusterFirst
+      containers:
+        - name: nginx-ingress-controller
+          image: "us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1"
+          imagePullPolicy: "IfNotPresent"
+          args:
+            - /nginx-ingress-controller
+            - --default-backend-service=default/nginx-sample-nginx-ingress-default-backend
+            - --election-id=ingress-controller-leader
+            - --ingress-class=nginx
+            - --configmap=default/nginx-sample-nginx-ingress-controller
+          securityContext:
+            capabilities:
+                drop:
+                - ALL
+                add:
+                - NET_BIND_SERVICE
+            runAsUser: 101
+            allowPrivilegeEscalation: true
+          env:
+            - name: POD_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.name
+            - name: POD_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          livenessProbe:
+            httpGet:
+              path: /healthz
+              port: 10254
+              scheme: HTTP
+            initialDelaySeconds: 10
+            periodSeconds: 10
+            timeoutSeconds: 1
+            successThreshold: 1
+            failureThreshold: 3
+          ports:
+            - name: http
+              containerPort: 80
+              protocol: TCP
+            - name: https
+              containerPort: 443
+              protocol: TCP
+          readinessProbe:
+            httpGet:
+              path: /healthz
+              port: 10254
+              scheme: HTTP
+            initialDelaySeconds: 10
+            periodSeconds: 10
+            timeoutSeconds: 1
+            successThreshold: 1
+            failureThreshold: 3
+          resources:
+            {}
+      hostNetwork: false
+      serviceAccountName: nginx-sample-nginx-ingress
+      terminationGracePeriodSeconds: 60
+---
+# Source: nginx-ingress/templates/default-backend-deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: nginx-ingress
+    chart: nginx-ingress-1.41.2
+    heritage: Helm
+    release: nginx-sample
+    app.kubernetes.io/component: default-backend
+  name: nginx-sample-nginx-ingress-default-backend
+spec:
+  selector:
+    matchLabels:
+      app: nginx-ingress
+      release: nginx-sample
+  replicas: 1
+  revisionHistoryLimit: 10
+  template:
+    metadata:
+      labels:
+        app: nginx-ingress
+        release: nginx-sample
+        app.kubernetes.io/component: default-backend
+    spec:
+      containers:
+        - name: nginx-ingress-default-backend
+          image: "k8s.gcr.io/defaultbackend-amd64:1.5"
+          imagePullPolicy: "IfNotPresent"
+          args:
+          securityContext:
+            runAsUser: 65534
+          livenessProbe:
+            httpGet:
+              path: /healthz
+              port: 8080
+              scheme: HTTP
+            initialDelaySeconds: 30
+            periodSeconds: 10
+            timeoutSeconds: 5
+            successThreshold: 1
+            failureThreshold: 3
+          readinessProbe:
+            httpGet:
+              path: /healthz
+              port: 8080
+              scheme: HTTP
+            initialDelaySeconds: 0
+            periodSeconds: 5
+            timeoutSeconds: 5
+            successThreshold: 1
+            failureThreshold: 6
+          ports:
+            - name: http
+              containerPort: 8080
+              protocol: TCP
+          resources:
+            {}
+      serviceAccountName: nginx-sample-nginx-ingress-backend
+      terminationGracePeriodSeconds: 60

{"level":"info","ts":1598012304.6328404,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"nginx-sample","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","release":"nginx-sample"}
{"level":"info","ts":1598012328.5364726,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"nginx-sample","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","release":"nginx-sample"}
{"level":"info","ts":1598012340.2010384,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"nginx-sample","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","release":"nginx-sample"}
{"level":"info","ts":1598012364.9758093,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"nginx-sample","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","release":"nginx-sample"}
{"level":"info","ts":1598012429.8033478,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"nginx-sample","apiVersion":"example.my.domain/v1alpha1","kind":"Nginx","release":"nginx-sample"}
  • Now, let's check the roles created:
 $ cat config/rbac/role.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
##
## Base operator rules
##
# We need to get namespaces so the operator can read namespaces to ensure they exist
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
# We need to manage Helm release secrets
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - "*"
# We need to create events on CRs about things happening during reconciliation
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create

##
## Rules for example.my.domain/v1alpha1, Kind: Nginx
##
- apiGroups:
  - example.my.domain
  resources:
  - nginxes
  - nginxes/status
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- verbs:
  - "*"
  apiGroups:
  - "rbac.authorization.k8s.io"
  resources:
  - "clusterrolebindings"
  - "clusterroles"
- verbs:
  - "*"
  apiGroups:
  - ""
  resources:
  - "serviceaccounts"
  - "services"
- verbs:
  - "*"
  apiGroups:
  - "rbac.authorization.k8s.io"
  resources:
  - "rolebindings"
  - "roles"
- verbs:
  - "*"
  apiGroups:
  - "apps"
  resources:
  - "deployments"

# +kubebuilder:scaffold:rules

Note that the finalizer permission was not required here either.

Conclusion

  • I did not need the finalizer RBAC rule and everything worked fine.

Next Steps (provide more info)

@nmasse-itix, could you please:

  • check the steps performed here
  • check if you are or not doing the same
  • provide ALL the steps so we are able to reproduce your scenario, such as:
$ mkdir nginx-operator
$ cd nginx-operator
$ operator-sdk init --plugins=helm --domain=com --group=example --version=v1alpha1 --kind=Nginx
$ export IMG=quay.io/camilamacedo86/nginx-operator:v0.0.1
$ make docker-build docker-push IMG=$IMG
$ make deploy IMG=$IMG

And then, the steps performed to check the error. Note that we might need to add the rule you suggested to the default scaffold. However, I'd like to know which step/command you performed that requires this permission.

@lbroudoux

lbroudoux commented Aug 24, 2020

Hi @camilamacedo86 and @nmasse-itix

I've re-run all the tests from scratch on both Minikube and OpenShift and came to the conclusion that Nicolas's suggestion is mandatory for OpenShift.

On Minikube

On a fresh install, apply the following commands:

mkdir nginx-operator
cd nginx-operator
operator-sdk init --plugins=helm --domain=com --group=example --version=v1alpha1 --kind=Nginx
export IMG=quay.io/lbroudoux/nginx-operator:v0.0.1
make docker-build docker-push IMG=$IMG
make deploy IMG=$IMG
kubectl apply -f config/samples/example_v1alpha1_nginx.yaml -n nginx-operator-system

Operator deploys correctly and is able to deploy the nginx-sample custom resource. Everything is running fine.

Checking the logs:

$ kubectl logs deployment.apps/nginx-operator-controller-manager  -n nginx-operator-system -c manager | grep error
[EMPTY]

Clean-up everything

make undeploy
cd .. && rm -rf nginx-operator

On OpenShift

On a fresh 4.5 cluster, run the same commands:

mkdir nginx-operator
cd nginx-operator
operator-sdk init --plugins=helm --domain=com --group=example --version=v1alpha1 --kind=Nginx
export IMG=quay.io/lbroudoux/nginx-operator:v0.0.1
make docker-build docker-push IMG=$IMG
make deploy IMG=$IMG
kubectl apply -f config/samples/example_v1alpha1_nginx.yaml -n nginx-operator-system

Operator deploys correctly but is not able to deploy the nginx-sample custom resource.

Checking the logs:

$ kubectl logs deployment.apps/nginx-operator-controller-manager  -n nginx-operator-system -c manager | grep error
{"level":"error","ts":1598273662.1000674,"logger":"helm.controller","msg":"Release failed","namespace":"nginx-operator-system","name":"nginx-sample","apiVersion":"example.com/v1alpha1","kind":"Nginx","release":"nginx-sample","error":"failed to install release: serviceaccounts \"nginx-sample\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\ngithub.com/operator-framework/operator-sdk/internal/helm/controller.HelmOperatorReconciler.Reconcile\n\tsrc/github.com/operator-framework/operator-sdk/internal/helm/controller/reconcile.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:90"}

Clean-up everything

make undeploy
cd .. && rm -rf nginx-operator

On OpenShift with modified RBAC on finalizers

On a fresh 4.5 cluster, run the same commands again:

mkdir nginx-operator
cd nginx-operator
operator-sdk init --plugins=helm --domain=com --group=example --version=v1alpha1 --kind=Nginx
export IMG=quay.io/lbroudoux/nginx-operator:v0.0.1

Before continuing, edit config/rbac/role.yaml and add the nginxes/finalizers resource as shown below:

[...]
##
## Rules for example.com/v1alpha1, Kind: Nginx
##
- apiGroups:
  - example.com
  resources:
  - nginxes
  - nginxes/status
  - nginxes/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- verbs:
  - "*"
  apiGroups:
  - ""
  resources:
  - "serviceaccounts"
  - "services"
- verbs:
  - "*"
  apiGroups:
  - "apps"
  resources:
  - "deployments"

# +kubebuilder:scaffold:rules

Finish with the last commands:

make docker-build docker-push IMG=$IMG
make deploy IMG=$IMG
kubectl apply -f config/samples/example_v1alpha1_nginx.yaml -n nginx-operator-system

Operator deploys correctly and is able to deploy the nginx-sample custom resource.

Checking the logs:

$ kubectl logs deployment.apps/nginx-operator-controller-manager  -n nginx-operator-system -c manager | grep error
[EMPTY]

There's a remaining issue when deploying the Pod because of a security conflict with OpenShift SCCs, but it's not related to the Operator SDK:

2020/08/24 13:13:19 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2020/08/24 13:13:19 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
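
As a hypothetical workaround for that SCC conflict (unrelated to the SDK change discussed here): the Helm operator maps the CR spec to chart values, so assuming the scaffolded chart exposes image.repository and image.tag, the sample CR could point at an unprivileged image, for example:

apiVersion: example.com/v1alpha1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  image:
    repository: nginxinc/nginx-unprivileged
    tag: "1.16"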

Conclusion

In conclusion, I would say that the tests confirm that the Operator SDK for Helm does not work out-of-the-box on OpenShift. The finalizer permission on the Custom Resource is required on OpenShift because its defaults are more restrictive than upstream Kubernetes (the OwnerReferencesPermissionEnforcement admission plugin is enabled, so setting blockOwnerDeletion requires update permission on the owner's finalizers subresource). As this permission is not required by any custom code logic, and the Operator SDK should be the preferred way for customers to develop operators on Red Hat's OpenShift, I suggest adding this permission by default when generating the scaffold resources for a new Helm-backed API.

@camilamacedo86
Contributor

Hi @lbroudoux,
Really, thank you for your comment!

So, it means that we need the following:

[...]
##
## Rules for example.com/v1alpha1, Kind: Nginx
##
- apiGroups:
  - example.com
  resources:
  - nginxes
  - nginxes/status
  - nginxes/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- verbs:
  - "*"
  apiGroups:
  - ""
  resources:
  - "serviceaccounts"
  - "services"
- verbs:
  - "*"
  apiGroups:
  - "apps"
  resources:
  - "deployments"

# +kubebuilder:scaffold:rules

@camilamacedo86 camilamacedo86 added language/ansible Issue is related to an Ansible operator project kind/bug Categorizes issue or PR as related to a bug. kind/documentation Categorizes issue or PR as related to documentation. and removed kind/documentation Categorizes issue or PR as related to documentation. labels Aug 24, 2020
@lbroudoux

I've run some tests with Ansible, but the permission is not required there since the SDK does not place a finalizer on the Custom Resource. This seems to be done only for Helm.

@camilamacedo86 camilamacedo86 added language/helm Issue is related to a Helm operator project and removed language/helm Issue is related to a Helm operator project language/ansible Issue is related to an Ansible operator project labels Aug 24, 2020
@joelanford
Member

update the docs as described in #3733 (comment)

@camilamacedo86 These docs are not related to this issue AFAICT. The operator developer needs cluster admin to be able to create the CRDs and deploy the operator into the cluster. The operator does NOT need cluster admin (it uses the Role defined in config/rbac/role.yaml)

I think the solution here is just the simple addition of the finalizers permission in the default Helm role scaffold here:

resources:
- {{.Resource.Plural}}
- {{.Resource.Plural}}/status
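
A minimal sketch of the updated rule (assuming the template otherwise keeps its current shape) would simply add the finalizers subresource:

resources:
- {{.Resource.Plural}}
- {{.Resource.Plural}}/status
- {{.Resource.Plural}}/finalizers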

@lbroudoux or @nmasse-itix Would either of you be interested in submitting a PR to fix this issue?

@joelanford joelanford added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. and removed triage/needs-information Indicates an issue needs more information in order to work on it. labels Aug 24, 2020
@camilamacedo86
Contributor

camilamacedo86 commented Aug 24, 2020

Hi @joelanford,

You are right. The issue raised in both cases is the error failed to install release: serviceaccounts "nginx-sample" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on, which means it is just related to RBAC. I will raise a new issue so we can track the doc needs.

done: #3778
and: #3777

@camilamacedo86
Contributor

camilamacedo86 commented Aug 24, 2020

Done PR: #3779

Hi @lbroudoux,

could you help to review it?

@joelanford
Member

Re-opening because PR #3779 did not resolve this issue for Helm operators. It applied the change to the Ansible operator scaffolding, but this permission change is not required for Ansible operators because the Ansible operator does not use a finalizer on the main CR.

I will submit a PR shortly that makes this change for Helm and reverts the change for Ansible.
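
For reference, the finalizer the Helm operator puts on the primary CR can be inspected with something like the following (the exact finalizer string depends on the SDK version):

$ kubectl get nginx nginx-sample -n nginx-operator-system -o jsonpath='{.metadata.finalizers}'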

@joelanford joelanford reopened this Oct 26, 2020
@joelanford joelanford modified the milestones: v1.1.0, v1.2.0 Oct 26, 2020
@camilamacedo86
Contributor

For Helm, the finalizer permission will be added in #4105
For Ansible, the finalizer permission was added in #3779

Also, we need to cherry-pick #4105 to 1.0.
