Rook volume.go initializeDevicesLVMMode() incompatible with ceph-volume #8266
Rook expects raw devices and partitions, as specified in the prerequisites, rather than accepting an LV.
ceph-volume itself supports passing LVs, and I am using 4 LVs on a fast device (I also tried a single GPT partition); I had to specify the data devices separately in the CephCluster spec to force ceph-volume to accept the configuration. Being able to force the use of one LV per data device as metadataDevice was a life-saver, as I was able to get my setup running as planned. The only change missing was the linked PR.
I tried to use a GPT partition as metadataDevice:

Partition metadataDevice:

```yaml
storage:
  useAllNodes: true
  useAllDevices: false
  deviceFilter: "^(vdb)"
  location:
  config:
    storeType: bluestore
    osdsPerDevice: "1"
    encryptedDevice: "true"
    metadataDevice:
  devices:
    - name: 'vdb'
      config:
        metadataDevice: "/dev/vdc1"
        deviceClass: "hdd" # forces block.db device (journal on NVMe) to be used
        encryptedDevice: "true"
```
OHOH! LV metadataDevice:

```yaml
storage:
  useAllNodes: true
  useAllDevices: false
  deviceFilter: "^(vdb)"
  location:
  config:
    storeType: bluestore
    osdsPerDevice: "1"
    encryptedDevice: "true"
    metadataDevice:
  devices:
    - name: 'vdb'
      config:
        metadataDevice: "vg-metadata-0/metadata-0-0"
        deviceClass: "hdd" # forces block.db device (journal on NVMe) to be used
        encryptedDevice: "true"
```
As evident, the latter case (with the LV) works, while the former (with the partition) doesn't, at least with Ceph v16.2.4. This means I can only fully utilize the hardware I have by specifying the LV as metadataDevice while managing Ceph with rook-ceph. (These are just VM config/test results, as testing on bare metal takes too much time; the kernel/userland/rook-ceph/Kubernetes/application stack is identical.)
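For illustration, the distinction ultimately comes down to what string reaches ceph-volume's `--block.db` argument: `ceph-volume lvm prepare` accepts either a block device path or a `vg/lv` spec. The sketch below shows a minimal heuristic for telling the two forms apart; the `is_lv_spec` helper and its exact rule are my own illustration, not Rook's actual `volume.go` logic:

```shell
#!/bin/sh
# Sketch only: distinguish a device path from a "vg/lv" spec, since
# ceph-volume's --block.db accepts both forms. is_lv_spec is a
# hypothetical helper for illustration, not Rook code.
is_lv_spec() {
  case "$1" in
    /*)  return 1 ;;  # absolute path, e.g. /dev/vdc1
    */*) return 0 ;;  # "vg/lv" spec, e.g. vg-metadata-0/metadata-0-0
    *)   return 1 ;;  # bare device name, e.g. vdc1
  esac
}

for md in "/dev/vdc1" "vg-metadata-0/metadata-0-0"; do
  if is_lv_spec "$md"; then
    echo "$md -> vg/lv spec"
  else
    echo "$md -> device path"
  fi
  # Either way, the resulting invocation has the shape:
  #   ceph-volume lvm prepare --bluestore --data /dev/vdb --block.db "$md"
done
```

ceph-volume itself handles both forms; the point of this issue is that Rook rejects the `vg/lv` form before ceph-volume ever sees it.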
I know current/future cluster builds may not wish to use LVM anymore, especially on NVMe-only nodes. But for the time being, a little fix here (see the PR) can allow lower-cost builds to operate.
Thanks for all the detailed background; it makes sense to go with #8267. At least it's a simple fix!
Is this a bug report or feature request?
Deviation from expected behavior:
A bogus error message emitted by pod/rook-ceph-osd-prepare-node-* prevents OSD initialization:
Expected behavior:
Configuration with LVM metadataDevice is applied and OSD initialized.
How to reproduce it (minimal and precise):
Configure a dedicated, pre-assembled LV (logical volume) as a metadataDevice for a whole-device OSD.
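To pre-assemble such an LV, the standard LVM tooling suffices. The device name, VG/LV names, and size below are assumptions matching the example configs above; adjust them to the actual hardware (these commands require root and a real block device):

```shell
# /dev/vdc is the fast metadata device in this example setup.
pvcreate /dev/vdc
vgcreate vg-metadata-0 /dev/vdc
# One LV per planned OSD; 32G is an arbitrary example size for block.db.
lvcreate -n metadata-0-0 -L 32G vg-metadata-0
```

The resulting `vg-metadata-0/metadata-0-0` is then referenced as the per-device `metadataDevice` in the CephCluster spec.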
Environment: