Congratulations and suggestions :-) #58
Hello, the initial idea was similar to what popular cloud platforms do: they offer zonal/regional network disks, and Proxmox provides comparable functionality. This makes it easy to implement anti-affinity rules based on labels such as region, zone, and hostname; zone anti-affinity ensures that pods won't all run on a single hypervisor.

This is why I have never used VM migrations for Kubernetes before; I usually use the drain technique, and that's why I didn't initially consider online migrations. I do have an idea for how to migrate a PVC to another Proxmox node (a regional PVC): create a VM, attach the PVC, migrate the VM together with the PVC to the other node, then delete the VM.

My second thought was: if you already have SMB, NFS, Ceph, etc., it might be more advisable to opt for one of the other well-tested and maintained CSI plugins: https://kubernetes-csi.github.io/docs/drivers.html
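The "shuttle VM" idea above could be sketched roughly like this with the Proxmox CLI. This is only an illustration of the four steps, not a tested procedure: the VMID (9999), node name (pve-node-2), storage, and volume name are all hypothetical placeholders, and the exact detach/cleanup semantics should be verified on your Proxmox version first.

```shell
# 1. Create a throwaway helper VM on the source node (never started).
qm create 9999 --name pvc-shuttle

# 2. Attach the detached PVC-backed volume to it
#    (storage and volume names here are made up for the example).
qm set 9999 --scsi1 local-lvm:vm-9999-pvc-0123

# 3. Offline-migrate the helper VM, moving its local disk to the target node.
qm migrate 9999 pve-node-2 --with-local-disks

# 4. Detach the volume so it is not destroyed together with the VM,
#    then remove the helper VM. Double-check that the detach really
#    orphans the volume before relying on this step.
qm set 9999 --delete scsi1
qm destroy 9999
```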
Hello @rdegez, I don't know whether this helps you or not. I don't have a network file system to test against, but you can try this tag.
It supports shared storages like Ceph/RBD and iSCSI. PS, I am going to merge it next week.
Hi!
First of all, thank you for this project! (and for https://github.com/sergelogvinov/proxmox-cloud-controller-manager as well!)
As big Proxmox & Kubernetes (and Talos!) users, we have been waiting for a solution that tackles dynamic PersistentVolume management in a Proxmox environment in an elegant way, mimicking cloud-provider CSI behaviour (i.e. allocating a block device for each PV and attaching it to the VM).
Nevertheless, after looking at the Readme, I wonder why you define a single PVE host as a "zone" boundary rather than the whole cluster (or at least a group of PVE hosts in a cluster).
Since VMs (i.e. k8s workers) can be cold- or live-migrated to other PVE hosts in the cluster whatever the underlying Proxmox storage type is (LVM volume, ZFS, iSCSI, Ceph RBD...), this feels a bit odd to me and an (apparently) unnecessary limitation?
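For what it's worth, here is a quick way to see the mismatch I mean in practice, if I understand the Readme correctly. The VMID, node, and label values below are purely illustrative:

```shell
# On the Proxmox side: live-migrate a worker VM to another PVE host.
qm migrate 9001 pve-node-2 --online

# On the Kubernetes side: the CSI topology label still records the PVE host
# the node registered on, so volumes stay pinned to that "zone" even though
# the VM now runs elsewhere:
kubectl get node worker-1 \
  -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}'
```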
Maybe I'm missing something here but I think you should reconsider this design choice.
Cheers,