Crashing pod(s) logs, if necessary
kubectl -n rook-ceph logs rook-ceph-agent-828t7 on k8s-node-4 (if it matters, k8s-node-4 is not one of the nodes listed in the cluster.yaml file above):
Main command for Ceph operator and daemons.
Usage:
rook ceph [command]
Available Commands:
clean Starts the cleanup process on the disks after ceph cluster is deleted
config-init Generates basic Ceph config
mgr
operator Runs the Ceph operator for orchestrating and managing Ceph storage in a Kubernetes cluster
osd Provisions and runs the osd daemon
Flags:
-h, --help help for ceph
Global Flags:
--log-level string logging level for logging/tracing output (valid values: ERROR,WARNING,INFO,DEBUG) (default "INFO")
--operator-image string Override the image url that the operator uses. The default is read from the operator pod.
--service-account string Override the service account that the operator uses. The default is read from the operator pod.
Use "rook ceph [command] --help" for more information about a command.
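For what it's worth, a pod printing its own help text like this usually means the entrypoint was invoked with a subcommand the image no longer recognizes (note there is no agent subcommand in the list above). One rough way to check what the daemonset actually passes to the binary, assuming the daemonset is named rook-ceph-agent like the pods in this issue, might be:

```shell
# Print the image and args of the agent daemonset's first container
# (daemonset name is an assumption based on the pod names above).
kubectl -n rook-ceph get daemonset rook-ceph-agent \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}{.spec.template.spec.containers[0].args}{"\n"}'
```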
Environment:
OS (e.g. from /etc/os-release):
Ubuntu 20.04.3 LTS
Kernel (e.g. uname -a): Linux k8s-node-4 5.4.0-1046-raspi #50-Ubuntu SMP PREEMPT Thu Oct 28 05:32:10 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
Cloud provider or hardware configuration:
Raspberry Pi 4 8GB
Rook version (use rook version inside of a Rook Pod):
From one of the working agents:
k -n rook-ceph exec rook-ceph-agent-4f25q -it -- rook version
rook: v1.5.0-alpha.0.590.g2a19230
go: go1.15.7
This surprised me since I thought I had upgraded to Pacific some time ago. The rook-ceph-agent daemonset shows the pod image as rook-ceph-agent:master, but oddly I can't find the YAML file in this GitHub repository that creates this daemonset.
Storage backend version (e.g. for ceph do ceph -v):
From the toolbox:
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
From one of the ubuntu nodes:
ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
I assume Ubuntu just has an older apt package -- is it possible to run this from one of the running rook-ceph pods?
I manually edited the daemonset and changed the image tag from master to v1.7.8 to fix this. I just wish I remembered how I installed it to begin with, since the quickstart doesn't show these pods getting created and I can't find any yaml file in the repo (maybe it was deprecated and just doesn't show for the 1.7.8 release?). Do I even need it? I am using a FlexVolume for cifs, so maybe it came from instructions somewhere else.
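In case it helps anyone hitting the same thing, the edit can also be done non-interactively. This is only a sketch: both the container name inside the daemonset and the image repository (rook/ceph) are assumptions here, so check kubectl describe first.

```shell
# Pin the agent daemonset to a tagged release instead of :master.
# Container name and image repo are assumptions; verify them with
# `kubectl -n rook-ceph describe daemonset rook-ceph-agent` first.
kubectl -n rook-ceph set image daemonset/rook-ceph-agent \
  rook-ceph-agent=rook/ceph:v1.7.8
```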
@zestysoft The rook agent (flex driver) no longer exists in master; it is being removed in Rook v1.8, due out in a couple of weeks. See #8076 for more details. All Rook flex volumes will need to be converted to CSI before upgrading to v1.8. A tool that will help with the migration is just being finished. It's not quite ready yet, but you can track its progress with #9222 and https://github.com/ceph/persistent-volume-migrator
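For anyone taking inventory before that migration, one rough way to list the PersistentVolumes still bound to a flex driver (requires jq; the flexVolume field queried here is standard Kubernetes, not Rook-specific):

```shell
# List PersistentVolumes that still use a FlexVolume driver,
# printing the PV name and the driver it points at.
kubectl get pv -o json \
  | jq -r '.items[] | select(.spec.flexVolume != null)
           | [.metadata.name, .spec.flexVolume.driver] | @tsv'
```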
Is this a bug report or feature request?
Deviation from expected behavior:
rook-ceph-agent's status is CrashLoopBackOff
Expected behavior:
Pod starts successfully, or shouldn't run on node if not necessary.
How to reproduce it (minimal and precise):
Joined kubernetes node to cluster
File(s) to submit:
cluster.yaml
Diff of cluster.yaml file from release-1.7 (pulled today, so 1.7.8?), if necessary
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:34:20Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm64"}
Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
k8sadmin
Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
Versions of all the container images currently running from deployments:
Same, but for daemonsets:
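One way to gather the image versions asked for in the two sections above, sketched with a jsonpath template (namespace assumed to be rook-ceph, and only the first container of each workload is shown):

```shell
# Dump each deployment's and daemonset's name plus its first
# container image, for the rook-ceph namespace.
for kind in deployments daemonsets; do
  echo "--- $kind ---"
  kubectl -n rook-ceph get "$kind" -o \
    jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'
done
```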