
JIVA : Installation issue: all control plane pods are missing - k8s v1.28.1 #3662

Closed
Mia-Dan opened this issue Oct 16, 2023 · 8 comments



Mia-Dan commented Oct 16, 2023

Description

I was trying to install OpenEBS on a newly set up Kubernetes cluster, following https://openebs.io/docs/user-guides/installation step by step. However, something seems to have gone wrong: all OpenEBS control plane pods and some storage classes (openebs-jiva-default, openebs-snapshot-promoter) are missing. I have tried installing both through kubectl and through helm. I searched the web but didn't find any post describing a similar problem.

Expected Behavior

The control plane pods openebs-provisioner, maya-apiserver and openebs-snapshot-operator should be running. The storage classes openebs-jiva-default and openebs-snapshot-promoter should exist.

Current Behavior

The control plane pods are all missing. The storage classes openebs-jiva-default and openebs-snapshot-promoter are missing.

Install through kubectl:

root@master:~# kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
Warning: resource namespaces/openebs is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/openebs configured
serviceaccount/openebs-maya-operator created
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
Warning: resource customresourcedefinitions/blockdevices.openebs.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/blockdevices.openebs.io configured
Warning: resource customresourcedefinitions/blockdeviceclaims.openebs.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/blockdeviceclaims.openebs.io configured
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-ndm-cluster-exporter created
service/openebs-ndm-cluster-exporter-service created
daemonset.apps/openebs-ndm-node-exporter created
service/openebs-ndm-node-exporter-service created
deployment.apps/openebs-localpv-provisioner created
storageclass.storage.k8s.io/openebs-hostpath created
storageclass.storage.k8s.io/openebs-device created

root@master:~# kubectl get pods -n openebs
NAME                                           READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-74cbc5d5b5-kh9vm   1/1     Running   0          7m19s
openebs-ndm-6twm4                              1/1     Running   0          7m19s
openebs-ndm-bb67w                              1/1     Running   0          7m19s
openebs-ndm-cluster-exporter-cf48c9589-brtdd   1/1     Running   0          7m19s
openebs-ndm-h5kht                              1/1     Running   0          7m19s
openebs-ndm-hwwvv                              1/1     Running   0          7m19s
openebs-ndm-node-exporter-bwr5k                1/1     Running   0          7m19s
openebs-ndm-node-exporter-nv6pq                1/1     Running   0          7m19s
openebs-ndm-node-exporter-v7s8x                1/1     Running   0          7m19s
openebs-ndm-node-exporter-zk5wh                1/1     Running   0          7m19s
openebs-ndm-operator-745b79d6bd-q9tmc          1/1     Running   0          7m19s

Install through helm:

root@master:~# # installing OpenEBS
root@master:~# helm repo add openebs https://openebs.github.io/charts
"openebs" has been added to your repositories
root@master:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "openebs" chart repository
Update Complete. ⎈Happy Helming!⎈
root@master:~# helm install openebs --namespace openebs openebs/openebs --create-namespace
NAME: openebs
LAST DEPLOYED: Mon Oct 16 06:37:42 2023
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Successfully installed OpenEBS.

Check the status by running: kubectl get pods -n openebs
...
root@master:~# helm ls -n openebs
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
openebs openebs         1               2023-10-16 06:37:42.471886181 +0000 UTC deployed        openebs-3.9.0   3.9.0      

root@master:~# kubectl get pods -n openebs
NAME                                           READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-667f7d88df-ppdtf   1/1     Running   0          7m53s
openebs-ndm-7bznz                              1/1     Running   0          7m53s
openebs-ndm-9khl4                              1/1     Running   0          7m53s
openebs-ndm-bwpvc                              1/1     Running   0          7m53s
openebs-ndm-operator-749c89d955-xdjw2          1/1     Running   0          7m53s
openebs-ndm-ts7td                              1/1     Running   0          7m53s

Steps to Reproduce

Follow https://openebs.io/docs/user-guides/installation on a new Kubernetes 1.28.1 cluster

Your Environment

  • kubectl get nodes:
root@master:~/train-ticket# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
conrtolplane   Ready    master   6d1h   v1.28.1
node01         Ready    node     6d1h   v1.28.1
node02         Ready    node     6d1h   v1.28.1
node03         Ready    node     6d1h   v1.28.1
  • kubectl get pods --all-namespaces:
root@master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS        AGE
default       hello-local-hostpath-pod                       1/1     Running   0               64m
default       tsdb-mysql-0                                   0/3     Pending   0               56m
kube-system   calico-kube-controllers-86b55cf789-pwstb       1/1     Running   1 (5d23h ago)   6d
kube-system   calico-node-7vkvs                              1/1     Running   3 (102m ago)    6d1h
kube-system   calico-node-lxvhs                              1/1     Running   2 (102m ago)    6d1h
kube-system   calico-node-ntp78                              1/1     Running   1 (108m ago)    6d1h
kube-system   calico-node-wmhwc                              1/1     Running   4 (102m ago)    6d1h
kube-system   coredns-7bc88ddb8b-w8dkd                       1/1     Running   2 (104m ago)    6d1h
kube-system   dashboard-metrics-scraper-77b667b99d-7t568     1/1     Running   1 (104m ago)    6d
kube-system   kubernetes-dashboard-74fb9f77fb-bxjcw          1/1     Running   2 (104m ago)    6d1h
kube-system   metrics-server-dfb478476-ms4pp                 1/1     Running   3 (101m ago)    6d
kube-system   node-local-dns-8cbr9                           1/1     Running   3 (105m ago)    6d1h
kube-system   node-local-dns-jrkfh                           1/1     Running   1 (104m ago)    6d1h
kube-system   node-local-dns-mlst5                           1/1     Running   1 (108m ago)    6d1h
kube-system   node-local-dns-tmxxc                           1/1     Running   2 (104m ago)    6d1h
openebs       openebs-localpv-provisioner-74cbc5d5b5-kh9vm   1/1     Running   0               21m
openebs       openebs-ndm-6twm4                              1/1     Running   0               21m
openebs       openebs-ndm-bb67w                              1/1     Running   0               21m
openebs       openebs-ndm-cluster-exporter-cf48c9589-brtdd   1/1     Running   0               21m
openebs       openebs-ndm-h5kht                              1/1     Running   0               21m
openebs       openebs-ndm-hwwvv                              1/1     Running   0               21m
openebs       openebs-ndm-node-exporter-bwr5k                1/1     Running   0               21m
openebs       openebs-ndm-node-exporter-nv6pq                1/1     Running   0               21m
openebs       openebs-ndm-node-exporter-v7s8x                1/1     Running   0               21m
openebs       openebs-ndm-node-exporter-zk5wh                1/1     Running   0               21m
openebs       openebs-ndm-operator-745b79d6bd-q9tmc          1/1     Running   0               21m
  • kubectl get sc:
root@master:~# kubectl get sc
NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  22m
openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  22m
  • kubectl get pv:
  • kubectl get pvc:
  • OS (from /etc/os-release):
root@master:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel (from uname -a):
Linux master 5.4.0-164-generic #181-Ubuntu SMP Fri Sep 1 13:41:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Others:
    iscsid is running on all 4 nodes. Sample output from node01:
root@node01:~# systemctl status iscsid
● iscsid.service - iSCSI initiator daemon (iscsid)
     Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-10-16 06:34:07 UTC; 1min 45s ago
TriggeredBy: ● iscsid.socket
       Docs: man:iscsid(8)
    Process: 20433 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
    Process: 20447 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
   Main PID: 20451 (iscsid)
      Tasks: 2 (limit: 4558)
     Memory: 3.5M
     CGroup: /system.slice/iscsid.service
             ├─20450 /sbin/iscsid
             └─20451 /sbin/iscsid

cluster-admin context:

root@master:~# kubectl auth can-i 'create' 'namespace' -A
yes
root@master:~# kubectl auth can-i 'create' 'crd' -A
yes
root@master:~# kubectl auth can-i 'create' 'sa' -A
yes
root@master:~# kubectl auth can-i 'create' 'clusterrole' -A
yes
@DivyanshuSaxena

I am also facing this exact same error. Any resolution so far?

@Mia-Dan
Author

Mia-Dan commented Jan 9, 2024

I am also facing this exact same error. Any resolution so far?

Unfortunately, not yet :(

@Mia-Dan Mia-Dan closed this as completed Jan 9, 2024
@Mia-Dan Mia-Dan reopened this Jan 9, 2024
@niladrih
Member

Looks like a documentation issue. Good find @Mia-Dan!

@niladrih
Member

@avishnu could you please take a look.

@avishnu
Member

avishnu commented Jan 20, 2024

@avishnu could you please take a look.

@niladrih can you provide more details on this?

@avishnu avishnu added the Need INFO Need info from the user label Jan 21, 2024
@niladrih
Member

The page linked above (https://openebs.io/docs/user-guides/installation) presents sample outputs from an outdated openebs/openebs helm chart and openebs-operator (v2.x). v2.x shipped the maya-apiserver, the since-deprecated 'openebs-provisioner' component, etc.; v3.x ships with localpv-provisioner and ndm instead. The sample outputs need an update.
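For anyone who specifically needs the Jiva/legacy control-plane components on a 3.x chart, they appear to be gated behind chart value flags rather than installed by default. The flag names below are an assumption based on my reading of the 3.x openebs/openebs umbrella chart; confirm them against `helm show values openebs/openebs` before relying on them:

```yaml
# values.yaml -- hypothetical sketch; verify these flag names against the
# chart's own values file (`helm show values openebs/openebs`).
legacy:
  enabled: true   # maya-apiserver, openebs-provisioner, snapshot-operator
jiva:
  enabled: true   # Jiva operator and CSI driver
```

Something like `helm install openebs openebs/openebs -n openebs --create-namespace -f values.yaml` would then apply it, if those flags are correct.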

cc: @vikgaur


Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

@orville-wright orville-wright self-assigned this Apr 27, 2024
@orville-wright orville-wright changed the title Installation issue: all control plane pods are missing - k8s v1.28.1 JIVA : Installation issue: all control plane pods are missing - k8s v1.28.1 Apr 27, 2024
@orville-wright
Contributor

closing


5 participants