
Congratulations and suggestions :-) #58

Open
rdegez opened this issue Aug 8, 2023 · 2 comments

Comments


rdegez commented Aug 8, 2023

Hi!

First of all, thank you for this project! (and for https://github.com/sergelogvinov/proxmox-cloud-controller-manager as well!)

As heavy Proxmox & Kubernetes (and Talos!) users, we have been waiting for a solution that tackles dynamic PersistentVolume management in Proxmox environments in an elegant way, mimicking cloud-provider CSI behaviour (i.e. allocating a block device for each PV and attaching it to the VM).
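
For context, this is the kind of workflow we have in mind; a minimal sketch, where the provisioner name (csi.proxmox.sinextra.dev) and the storage parameter are assumptions taken from the README:

```yaml
# Sketch only: a StorageClass backed by the plugin plus a PVC, so each PVC
# becomes its own Proxmox block device attached to the worker VM.
# Provisioner name and the "storage" parameter are assumptions from the README.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: proxmox-data
provisioner: csi.proxmox.sinextra.dev
parameters:
  storage: local-lvm          # Proxmox storage ID; adjust to your setup
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: proxmox-data
  resources:
    requests:
      storage: 10Gi
```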

Nevertheless, after looking at the Readme, I wonder why you are defining a single PVE host as a "zone" boundary and not the whole cluster (or at least a group of PVE hosts in a cluster).

Since VMs (i.e. k8s workers) can be cold- or live-migrated to other PVE hosts in the cluster whatever the underlying Proxmox storage type is (LVM volume, ZFS, iSCSI, Ceph RBD...), this feels a bit odd to me and an (apparently) unnecessary limitation?

Maybe I'm missing something here but I think you should reconsider this design choice.

Cheers,

sergelogvinov (Owner) commented

Hello, the initial idea was to create something similar to rancher/local-path but for Proxmox, as managing and resizing disks manually within VMs can be painful.

Just like popular cloud platforms that offer zonal/regional network disks, Proxmox provides similar functionality. This makes it easy to implement anti-affinity rules based on labels such as region, zone, and hostname. Zone anti-affinity ensures that pods won't all run on a single hypervisor. This is the reason zone == Proxmox node.
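
For example, a minimal sketch of zone anti-affinity, assuming the standard topology.kubernetes.io/zone label carries the Proxmox node name:

```yaml
# Sketch: spread replicas across zones, where each zone is one Proxmox node,
# so no two replicas land on the same hypervisor.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # zone == Proxmox node
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: app
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9   # placeholder workload
```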

I have never used VM migrations for Kubernetes before; I usually use the drain technique. That's why I didn't initially consider online migrations. I have an idea of how to migrate PVCs to another Proxmox node (regional PVC): create a VM, attach the PVC, migrate the VM with the PVC to another node, then delete the VM.
As an option, we can add support for shared disk storage, providing a solution that is agnostic to zones.

My second thought was: if you already have SMB, NFS, Ceph, etc., it might be more advisable to opt for one of the other well-tested and maintained CSI plugins: https://kubernetes-csi.github.io/docs/drivers.html

sergelogvinov (Owner) commented Aug 19, 2023

Hello @rdegez, I do not know whether this helps you or not.

I do not have a network file system to test with, but you can try the region-storage tag:

  • ghcr.io/sergelogvinov/proxmox-csi-controller:region-storage
  • ghcr.io/sergelogvinov/proxmox-csi-node:region-storage

It supports shared storage like Ceph/RBD and iSCSI.
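
If you deploy with kustomize, a sketch of an overlay that pins those tags (the deploy.yaml base file name here is hypothetical):

```yaml
# kustomization.yaml — sketch: point the existing manifests at the
# region-storage images; the base file name is an assumption.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deploy.yaml
images:
  - name: ghcr.io/sergelogvinov/proxmox-csi-controller
    newTag: region-storage
  - name: ghcr.io/sergelogvinov/proxmox-csi-node
    newTag: region-storage
```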

PS: I am going to merge it next week.
