JIVA: Installation issue: all control plane pods are missing - k8s v1.28.1 #3662
Comments
I am also facing this exact same error. Any resolution so far?
Unfortunately, not yet :(
Looks like a documentation issue. Good find @Mia-Dan!
@avishnu could you please take a look.
The page linked above (https://openebs.io/docs/user-guides/installation) presents sample outputs from an outdated openebs/openebs helm chart and openebs-operator (v2.x). v2.x shipped the maya-apiserver, a deprecated 'openebs-provisioner' component, etc. v3.x ships with localpv-provisioner and ndm. The sample outputs need an update. cc: @vikgaur
Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.
closing
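A quick way to confirm this version mismatch is to check which components and storage classes the installed chart actually created. This is a hedged sketch assuming the default v3.x chart installed into the `openebs` namespace; pod name prefixes are illustrative:

```shell
# With the v3.x openebs/openebs chart, the expected control-plane pods are
# the localpv-provisioner and NDM, NOT maya-apiserver or openebs-provisioner
# (those shipped only with the v2.x chart shown in the docs).
kubectl get pods -n openebs
# Healthy v3.x installs show pods named like:
#   openebs-localpv-provisioner-...
#   openebs-ndm-...            (DaemonSet, one per node)
#   openebs-ndm-operator-...

# v3.x creates the openebs-hostpath and openebs-device storage classes,
# not openebs-jiva-default or openebs-snapshot-promoter.
kubectl get sc
```

If the output matches the v3.x component list above, the installation itself succeeded and only the documentation's sample output is stale.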
Description
I was trying to install OpenEBS on a newly set up Kubernetes cluster, following https://openebs.io/docs/user-guides/installation step by step. However, something seems to go wrong: all openebs control plane pods and some storage classes (openebs-jiva-default, openebs-snapshot-promoter) are missing. I tried installing both through kubectl and through helm. I searched the web but didn't find any post describing a similar problem.
Expected Behavior
The control plane pods openebs-provisioner, maya-apiserver and openebs-snapshot-operator should be running. The storage classes openebs-jiva-default and openebs-snapshot-promoter should exist.
Current Behavior
The control plane pods are all missing. The storage classes openebs-jiva-default and openebs-snapshot-promoter are missing.
Install through kubectl
Install through helm
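The two install paths above can be sketched as follows. This follows the standard OpenEBS instructions; the exact manifest URL and release/namespace names are assumptions, not copied from the reporter's session:

```shell
# Option 1: install via kubectl, applying the operator manifest directly.
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

# Option 2: install via helm from the OpenEBS chart repository.
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace
```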
Steps to Reproduce
Follow https://openebs.io/docs/user-guides/installation on a new Kubernetes 1.28.1 cluster
Your Environment
- kubectl get nodes
- kubectl get pods --all-namespaces
- kubectl get sc
- kubectl get pv
- kubectl get pvc
- /etc/os-release
- uname -a
- iscsid is running on all 4 nodes. This is what the output looks like on node01.
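A running iSCSI initiator on every node is a prerequisite for Jiva volumes. A minimal sketch for checking it (assuming a systemd-based distro, run on each node):

```shell
# Confirm the iscsid service is active and enabled on this node.
systemctl is-active iscsid
systemctl is-enabled iscsid

# Full status output, as captured on node01.
sudo systemctl status iscsid --no-pager
```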
cluster-admin context: