Live snapshots of a previously live-updated VM capture the original state of the VM from launch #11827

Open · lyarwood opened this issue Apr 30, 2024 · 0 comments
lyarwood commented Apr 30, 2024

/cc @ShellyKa13
/cc @enp0s3
/cc @vladikr

What happened:

A live snapshot of a previously live-updated VM captures the original state of the VM from launch rather than its current state.

What you expected to happen:

Creating a snapshot of a running, previously live-updated VM should capture the current state of the VM as long as no RestartRequired condition is present, since that condition would imply outstanding changes that have yet to be applied to the running VMI/guest.
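
For context, RestartRequired is reported under the VM's .status.conditions like other VirtualMachine conditions, so its absence can be checked with a query along these lines (an illustrative sketch, not part of the original transcript):

$ ./cluster-up/kubectl.sh get vm/fedora -o json | jq '.status.conditions[]? | select(.type == "RestartRequired")'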

How to reproduce it (as minimally and precisely as possible):

$ env | grep KUBEVIRT
KUBEVIRT_PROVIDER=k8s-1.28
KUBEVIRT_MEMORY_SIZE=16384
KUBEVIRT_STORAGE=rook-ceph-default
KUBEVIRT_NUM_NODES=2
$ ./cluster-up/kubectl.sh patch kv/kubevirt -n kubevirt --type merge -p '{"spec":{"workloadUpdateStrategy":{"workloadUpdateMethods":["LiveMigrate"]},"configuration":{"vmRolloutStrategy":"LiveUpdate", "developerConfiguration":{"featureGates": ["Snapshot","VMLiveUpdateFeatures"]}}}}'
[..]
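The patch above enables the Snapshot and VMLiveUpdateFeatures feature gates and switches the rollout strategy to LiveUpdate; if in doubt, the applied configuration can be double-checked with something like:

$ ./cluster-up/kubectl.sh get kv/kubevirt -n kubevirt -o json | jq .spec.configuration.vmRolloutStrategy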
$ ./cluster-up/kubectl.sh apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  name: fedora
spec:
  runStrategy: Always
  template:
    metadata:
      creationTimestamp: null
    spec:
      domain:
        cpu:
          sockets: 1
        memory:
          guest: 2Gi
        devices:
          disks:
          - disk:
              bus: virtio
            name: fedora
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: internal
            masquerade: {}
        resources: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - containerDisk:
          image: quay.io/containerdisks/fedora:39
        name: fedora
      - cloudInitNoCloud:
          userData: |
            #!/bin/sh
            mkdir -p /home/fedora/.ssh
            curl https://github.com/lyarwood.keys > /home/fedora/.ssh/authorized_keys
            chown -R fedora: /home/fedora/.ssh
        name: cloudinitdisk
      networks:
      - name: internal
        pod: {}
status: {}
EOF
[..]
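Not part of the original transcript, but before SSHing in it can help to wait for the VM to report Ready, for example:

$ ./cluster-up/kubectl.sh wait vm/fedora --for=condition=Ready --timeout=300s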
$ ./cluster-up/virtctl.sh ssh -lfedora fedora
  # lscpu | grep Socket
     Socket(s):                          1
[..]
$ ./cluster-up/kubectl.sh patch vm/fedora --type merge -p '{"spec":{"template":{"spec":{"domain":{"cpu":{"sockets":2}}}}}}'
[..]
$ ./cluster-up/kubectl.sh get vm/fedora -o json | jq .spec.template.spec.domain.cpu.sockets
2
[..]
$ ./cluster-up/kubectl.sh get vmis/fedora -o json | jq .status.currentCPUTopology
{
  "sockets": 2
}
[..]
$ ./cluster-up/virtctl.sh ssh -lfedora fedora
  # sudo su -
  # echo 1 > /sys/devices/system/cpu/cpu1/online
  # lscpu | grep Socket
     Socket(s):                          2
[..]
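As seen above, hot-plugged vCPUs can come up offline in the guest, hence the manual echo. A generic (illustrative) way to online all of them as root in the guest is:

  # for cpu in /sys/devices/system/cpu/cpu[0-9]*/online; do echo 1 > "$cpu"; done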
$ ./cluster-up/kubectl.sh apply -f - <<EOF
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snap-fedora
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: fedora
EOF
[..]
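Assuming the usual Ready condition on VirtualMachineSnapshot, the snapshot can be waited on before restoring with something like:

$ ./cluster-up/kubectl.sh wait vmsnapshot/snap-fedora --for=condition=Ready --timeout=120s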
$ ./cluster-up/kubectl.sh apply -f - <<EOF
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-fedora
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: fedora-new
  virtualMachineSnapshotName: snap-fedora
EOF
[..]
$ ./cluster-up/kubectl.sh get vm/fedora-new -o json | jq .spec.template.spec.domain.cpu.sockets
1
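
The stale spec is also visible directly in the generated VirtualMachineSnapshotContent, suggesting the original VM definition is what gets captured at snapshot time; a sketch of such a check (the content object's name is generated from the snapshot, so all items are listed here):

$ ./cluster-up/kubectl.sh get virtualmachinesnapshotcontents -o json | jq '.items[].spec.source.virtualMachine.spec.template.spec.domain.cpu.sockets'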

Additional context:

I noticed this while working on #10229 & #11455.

Environment:

  • KubeVirt version (use virtctl version): N/A
  • Kubernetes version (use kubectl version): N/A
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A