Migration between nodes #157

Open
sbogomolov opened this issue Feb 19, 2024 · 9 comments

Comments

@sbogomolov

We already support shared storage. However, a PV backed by shared storage is bound to the node it was created on. If a pod is killed and recreated on a different node, it cannot use that PV. Has anyone already looked into making this possible?

@sergelogvinov
Owner

Hello,

Shared storage is not properly tested, because I do not have any of it myself.
There were some bugs; I do not remember when we fixed them.
But I heard it works now...

Please try the latest release (or edge).
We plan to refactor and add more tests for shared storage in future releases.

PS. The scheduler is responsible for migrating pods. If that happens, the PVC already has the right affinity. The issue is probably in another component of Proxmox/Kubernetes. Try checking the logs.
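As a rough sketch of what to check (the namespace and deployment names below are assumptions based on the release manifest, so adjust them to your installation):

# Inspect the node affinity the plugin wrote on the PV
kubectl get pv <pv-name> -o yaml

# Look at scheduling and attach/detach events for the pod
kubectl describe pod <pod-name>

# Controller logs (namespace/deployment names assumed from the release manifest)
kubectl -n csi-proxmox logs deploy/proxmox-csi-plugin-controller --all-containers=true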

@sbogomolov
Author

When a pod migrates to a different node, we would need to detach the virtual disk from one VM and attach it to another. Are you saying that this logic is already there?

@taylor-madeak

@sbogomolov I just tested this in my homelab and can verify that this CSI driver will detach the volume and re-attach it on the appropriate node when the scheduler migrates the pod to a different node. This is fairly straightforward to test by just cordoning the node and restarting the pod.
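For anyone who wants to reproduce this, the test boils down to something like the following (node and pod names are placeholders):

# Prevent new pods from being scheduled on the current node
kubectl cordon <current-node>

# Delete the pod so its controller recreates it on another node
kubectl delete pod <pod-name>

# Watch the volume being detached and re-attached
kubectl get volumeattachments -w

# Allow scheduling on the node again afterwards
kubectl uncordon <current-node>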

@sbogomolov
Author

> @sbogomolov I just tested this in my homelab and can verify that this CSI driver will detach the volume and re-attach it on the appropriate node when the scheduler migrates the pod to a different node. This is fairly straightforward to test by just cordoning the node and restarting the pod.

This is great news! I'll try to test this on my cluster.

@christiaangoossens

Can confirm this works, at least with my iSCSI volume. The pod can be created on all workers spread across the Proxmox cluster.

@vehagn

vehagn commented Jun 7, 2024

@taylor-madeak @christiaangoossens Could you please provide more details on how you got volume migration to work?

I've tried both v0.6.1 and edge and can't get volume migration to work across zones/hypervisor host machines.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/sergelogvinov/proxmox-csi-plugin/v0.6.1/docs/deploy/proxmox-csi-plugin-release.yml
  - proxmox-csi-secret.yaml
  - sc.yaml

images:
  - name: ghcr.io/sergelogvinov/proxmox-csi-node
    newTag: edge
  - name: ghcr.io/sergelogvinov/proxmox-csi-controller
    newTag: edge

with the following StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: proxmox-csi
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
  storage: local-zfs
  cache: writethrough
  ssd: "true"
mountOptions:
  - noatime
provisioner: csi.proxmox.sinextra.dev
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

I've tried this with both a StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stateful
  namespace: pve-csi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stateful-pv
  template:
    metadata:
      labels:
        app: stateful-pv
    spec:
      containers:
        - name: alpine
          image: alpine
          command: [ "sleep","1d" ]
          volumeMounts:
            - name: stateful
              mountPath: /mnt
  volumeClaimTemplates:
    - metadata:
        name: stateful
      spec:
        storageClassName: proxmox-csi
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi
  serviceName: stateful

and a Deployment with an ephemeral volume (this alone works, but the data is of course lost on each restart of the pod) and a PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pv-deploy
  namespace: pve-csi
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
  selector:
    matchLabels:
      app: pv-deploy
  template:
    metadata:
      labels:
        app: pv-deploy
    spec:
      containers:
        - name: alpine
          image: alpine
          command: [ "sleep","1d" ]
          volumeMounts:
            - name: deploy
              mountPath: /mnt
            - name: pvc
              mountPath: /tmp
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: pvc
        - name: deploy
          ephemeral:
            volumeClaimTemplate:
              spec:
                storageClassName: proxmox-csi
                accessModes: [ "ReadWriteOnce" ]
                resources:
                  requests:
                    storage: 1.5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: pve-csi
spec:
  storageClassName: proxmox-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

I've tried both changing nodeAffinity and cordoning the node the pods are running on before restarting them.
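One way to see why the pod stays pinned is to inspect the nodeAffinity the plugin writes on the PV; for a disk on local storage it should require topology.kubernetes.io/zone to match the Proxmox node the volume was created on (the PVC and namespace names below match the manifests above):

# Resolve the PV bound to the PVC, then print its node affinity
PV=$(kubectl get pvc pvc -n pve-csi -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.nodeAffinity}'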

I'm running a three-node Proxmox cluster (a homelab with machines abel, cantor, and euclid) with Kubernetes in two separate VMs, each on its own physical node (k8s-ctrl-01 on abel and k8s-ctrl-02 on euclid).

The k8s nodes are manually labelled:

kubectl label node k8s-ctrl-01 topology.kubernetes.io/region=homelab
kubectl label node k8s-ctrl-01 topology.kubernetes.io/zone=abel

kubectl label node k8s-ctrl-02 topology.kubernetes.io/region=homelab
kubectl label node k8s-ctrl-02 topology.kubernetes.io/zone=euclid
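To double-check that the labels were picked up and that the CSI driver reports the same topology, something like this should work:

# Show the region/zone labels on the Kubernetes nodes
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone

# Show the topology keys the CSI driver registered on each node
kubectl get csinode k8s-ctrl-01 -o yaml
kubectl get csinode k8s-ctrl-02 -o yaml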

After migrating k8s-ctrl-02 to abel (so that both VMs are on the same physical host/zone) and relabelling it

kubectl label node k8s-ctrl-02 topology.kubernetes.io/zone=abel --overwrite

the PVs migrate flawlessly from one node to the other and back again.

I see in the README.md that

> The Pod cannot migrate to another zone (another Proxmox node)

but the above comments led me to believe that a pod is able to migrate to another zone/hypervisor host machine.

Am I doing something wrong, or is PV migration to a different zone not supported yet? If not, is it a planned feature?

I'm nevertheless impressed by this plugin and I'm going to make good use of it in my homelab!

@sergelogvinov
Owner

Hi, local storage cannot migrate to another Proxmox node. This works only with shared storage like Ceph.
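For comparison, a StorageClass backed by a shared pool looks like the local-zfs one above, just pointing at the shared storage; the pool name here is a placeholder for whatever shared storage (e.g. a Ceph RBD pool) is configured cluster-wide in Proxmox:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: proxmox-csi-shared
parameters:
  csi.storage.k8s.io/fstype: ext4
  storage: shared-rbd   # placeholder: name of a shared Proxmox storage pool
provisioner: csi.proxmox.sinextra.dev
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer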

But you can migrate a PV/PVC to another node manually with pvecsictl: https://github.com/sergelogvinov/proxmox-csi-plugin/blob/main/docs/pvecsictl.md. The brew version has a bug; try the edge version...

docker pull ghcr.io/sergelogvinov/pvecsictl:edge
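The exact command-line syntax is documented in the linked pvecsictl.md; as a rough sketch, the tool can be run straight from the container image, for example to list its subcommands (flags are intentionally not shown here, since they may differ between versions):

# Print available subcommands; per docs/pvecsictl.md the tool can move a
# PVC's backing disk to another Proxmox node
docker run --rm ghcr.io/sergelogvinov/pvecsictl:edge --help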

@vehagn

vehagn commented Jun 9, 2024

@sergelogvinov Awesome! I'll have to try it.

Would it be possible to port this functionality to proxmox-csi-plugin?

@sergelogvinov
Owner

It is not easy to implement this; there are many limitations on the Kubernetes side, which is why I created this CLI tool. We cannot tell the Kubernetes scheduler the cost of launching a pod on a non-local Proxmox node.
