
Is there any chance to directly change Docker VM to KVM? #674

Open
0Knot opened this issue Apr 2, 2024 · 10 comments

Comments


0Knot commented Apr 2, 2024

This is an outstanding project, and I'm hugely appreciative of the developers' contributions. I noticed that the environment employs Docker to invoke KVM. I'm wondering if anyone has ever tried to bypass Docker and run the application directly in KVM? For instance, utilizing Virtual-dsm directly in a ProxmoxVE or ESXi VPS.

Why do I propose this?
Upgrading the system using arpl is a ticking time bomb. Moreover, arpl isn't very accommodating to VM installations. Perhaps arpl is better suited to installations on physical machines.

kroese (Collaborator) commented Apr 2, 2024

This container runs fine on Proxmox and ESXi. It is theoretically possible to run the scripts outside of a container, but it would become very complicated to install, because it is not just a VM: there are multiple additional scripts involved (for example, for gracefully shutting down DSM). So the container just acts as a way to bundle all dependencies and provides an easy way to start/stop the VM with all those additional scripts executed at the right times.


mndti commented Apr 2, 2024

Excellent job. Congratulations.

I would also like to run it directly in a VM. I studied the command and managed to make it work in a VM after Docker generated the .img files and virtual disks.

  1. Generate the vDSM via Docker with the basic 6GB disk.
  2. Wait for it to install.
  3. Copy the files to the Proxmox ISO storage:
    DSM_VirtualDSM_69057.boot.img
    DSM_VirtualDSM_69057.system.img
    data.img (the 6GB virtual disk)
  4. Create the VM and, inside its args, put:
-nodefaults -boot strict=on -cpu host,kvm=on,l3-cache=on,migratable=no -smp 4 -m 6G -machine type=q35,usb=off,vmport=off,dump-guest-core=off,hpet=off,accel=kvm -enable-kvm -global kvm-pit.lost_tick_policy=discard -object iothread,id=io2 -device virtio-scsi-pci,id=hw-synoboot,iothread=io2,bus=pcie.0,addr=0xa -drive file=/var/lib/vz/template/iso/DSM_VirtualDSM_69057.boot.img,if=none,id=drive-synoboot,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synoboot.0,channel=0,scsi-id=0,lun=0,drive=drive-synoboot,id=synoboot0,rotation_rate=1,bootindex=1 -device virtio-scsi-pci,id=hw-synosys,iothread=io2,bus=pcie.0,addr=0xb -drive file=/var/lib/vz/template/iso/DSM_VirtualDSM_69057.system.img,if=none,id=drive-synosys,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synosys.0,channel=0,scsi-id=0,lun=0,drive=drive-synosys,id=synosys0,rotation_rate=1,bootindex=2 -drive file=/var/lib/vz/template/iso/data.img,if=none,id=drive-userdata,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device virtio-scsi-pci,id=hw-userdata,iothread=io2,bus=pcie.0,addr=0xc -device scsi-hd,bus=hw-userdata.0,channel=0,scsi-id=0,lun=0,drive=drive-userdata,id=userdata,rotation_rate=1,bootindex=3

After that, the VM will boot just as it was generated in Docker, and extra disks can be attached that DSM will read automatically.
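For reference, a minimal sketch of applying that args line on Proxmox via qm set (VM ID 100 is just an example; the quoted string should be the full args line above):

# VM ID 100 is a placeholder; Proxmox appends whatever is in "args" to the QEMU command line it generates.
qm set 100 --args "-nodefaults -boot strict=on -cpu host,kvm=on,l3-cache=on,migratable=no -smp 4 -m 6G ..."

The same string can also go on an "args:" line in /etc/pve/qemu-server/100.conf, which is the approach a later comment in this thread takes.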

Note: however, despite the VM loading and even being usable, I noticed that the widgets take a long time to load and Information Center > General does not load. I believe it has something to do with host.bin?

Any ideas?

Sorry for the bad English


mndti commented Apr 4, 2024

Update - tested on proxmox:

I managed to make it work in the VM; maybe it will help someone. It was already working, but without host.bin it was not loading system information, and it took a while for the widgets and system information to appear.

  1. Create a folder: mkdir /mnt/hdd
  2. Run the container. Yes, with 2GB it will give an error, but it will generate the images we need:
docker run -it --rm --name dsm \
-p 5000:5000 --device=/dev/kvm \
-v /mnt/hdd:/storage \
-e DISK_SIZE="2G" \
--cap-add NET_ADMIN \
--stop-timeout 120 \
vdsm/virtual-dsm
  3. Go to /mnt/hdd and check that you have the two files:
    DSM_VirtualDSM_69057.boot.img
    DSM_VirtualDSM_69057.system.img

Note: If you want host.bin, you need to look for the container's copy of the file with
find / -name '*host.bin'
and copy it to the /mnt/hdd folder as well, so it gets transferred to Proxmox together with the images.

  4. Install Samba/SFTP and copy these files to Proxmox; choose whichever method you prefer, there is plenty of information on the internet.
  5. In the Proxmox shell, create a folder and place the files in it:
mkdir /mnt/vdsm
cp /path/DSM_VirtualDSM_69057.boot.img /mnt/vdsm/boot.img
cp /path/DSM_VirtualDSM_69057.system.img /mnt/vdsm/system.img
cp /path/host.bin /mnt/vdsm/host.bin
  6. Run host.bin:
    /mnt/vdsm/host.bin -cpu=4 -cpu_arch="processor model" > /dev/null 2>&1 &
    This process will not survive a reboot, so create a cron job or service to run it at system start (a sketch of a systemd service follows this list).
  7. Create a virtual machine: SeaBIOS / q35 / VirtIO SCSI or VirtIO SCSI single / Display: none / Network: VirtIO (paravirtualized) / add serial port 2 / add a VirtIO RNG device.
    Note: Do not attach any disks for now. Create the VM. (Screenshot of the VM creation settings omitted.)
  8. In the shell, edit the VM config with nano /etc/pve/qemu-server/VMID.conf and add the following:
    args: -serial pty -device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12345,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel -object iothread,id=io2 -device virtio-scsi-pci,id=hw-synoboot,iothread=io2,bus=pcie.0,addr=0xa -drive file=/mnt/vdsm/boot.img,if=none,id=drive-synoboot,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synoboot.0,channel=0,scsi-id=0,lun=0,drive=drive-synoboot,id=synoboot0,rotation_rate=1,bootindex=1 -device virtio-scsi-pci,id=hw-synosys,iothread=io2,bus=pcie.0,addr=0xb -drive file=/mnt/vdsm/system.img,if=none,id=drive-synosys,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synosys.0,channel=0,scsi-id=0,lun=0,drive=drive-synosys,id=synosys0,rotation_rate=1,bootindex=2

Save the file and start the VM.

  9. Access the IP assigned via DHCP by your router and check that everything is OK.
  10. Add the desired disks in the VM UI.
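As mentioned in step 6, host.bin has to be restarted after every reboot. A minimal sketch of a systemd unit that does this (paths and flags mirror step 6 and are assumptions to adapt; it presumes host.bin stays in the foreground):

cat > /etc/systemd/system/vdsm-host.service <<'EOF'
[Unit]
Description=Virtual DSM host.bin helper
After=network.target

[Service]
Type=simple
# Same flags as in step 6; replace "processor model" with your CPU string.
ExecStart=/mnt/vdsm/host.bin -cpu=4 -cpu_arch="processor model"
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now vdsm-host.service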

I'm using it here and it works fine; test it at your own risk.

kroese (Collaborator) commented Apr 5, 2024

@thiagofperes Impressive work! But I think you are still missing one part: graceful shutdown. The code in power.sh sends a shutdown signal to vDSM, which is absent in your solution. So when you shut down the VM, it will not exit cleanly.


Deroy2112 commented Apr 5, 2024

@thiagofperes thanks

You can also import the .img files into Proxmox.

boot.img goes on SCSI slot 9
system.img goes on SCSI slot 10

#Virtual DSM VM ID
VMID=100 
#Virtual DSM Storage Name
VM_STORAGE=local-zfs

qm importdisk $VMID /mnt/vdsm/DSM_VirtualDSM_69057.boot.img $VM_STORAGE
qm importdisk $VMID /mnt/vdsm/DSM_VirtualDSM_69057.system.img $VM_STORAGE

qm set $VMID --scsi9 $VM_STORAGE:vm-$VMID-disk-0,discard=on,cache=none
qm set $VMID --scsi10 $VM_STORAGE:vm-$VMID-disk-1,discard=on,cache=none

qm set $VMID --boot order=scsi9

Additional hard drives continue on the following slots:
scsi11, scsi12, scsi13, etc.
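For example, a hedged sketch of adding a fresh 32 GB data disk on the next slot (the size is a placeholder; DSM will pick the disk up and format it):

# Allocates a new 32 GB volume on $VM_STORAGE and attaches it as scsi11.
qm set $VMID --scsi11 $VM_STORAGE:32,discard=on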

Proxmox VM args:

ARGS=$(echo -device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12345,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel)
qm set $VMID --args "$ARGS"

aio can be set to native or io_uring via the GUI.

Graceful shutdown works via a hookscript.

Just an example:

Note: "Snippets" must be enabled as a content type on the storage (Proxmox GUI -> Datacenter -> Storage).

Hookscript path (the snippets directory of the local storage):
/var/lib/vz/snippets/vdsm.sh

#!/bin/bash
set -o errexit -o pipefail -o nounset

vmId="$1"
runPhase="$2"

case "$runPhase" in
    pre-start)
        # Start host.bin before the VM boots so DSM can query host information.
        /mnt/vdsm/host.bin -cpu=2 -cpu_arch=processor -mac=00:00:00:00:00 -hostsn=HostSN -guestsn=GuestSN -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &
        ;;
    post-start)
        ;;
    pre-stop)
        # Ask host.bin's API to send the shutdown command to the guest and wait for it.
        url="http://127.0.0.1:2210/read?command=6&timeout=50"
        curl -sk -m "$(( 50+2 ))" -S "$url"
        ;;
    post-stop)
        # Stop host.bin once the VM is gone.
        hID=$(pgrep host.bin)
        kill $hID
        ;;
    *)
        echo "Unknown run phase \"$runPhase\"!"
        ;;
esac
echo "Finished $runPhase on VM=$vmId"

Assign the hookscript to the VM (the volume ID has the form STORAGE:snippets/vdsm.sh):

qm set $VMID --hookscript "local:snippets/vdsm.sh"

host.bin is then started together with the VM and stopped as soon as the VM is shut down/stopped.
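One assumption worth checking: Proxmox only runs hookscripts that are executable, so after placing the file it may be necessary to run:

chmod +x /var/lib/vz/snippets/vdsm.sh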


mndti commented Apr 5, 2024

@kroese

Thank you very much for your project. Your Docker project works perfectly, and I tried using it, but I faced some problems.

Machine:
N5100
12 GB RAM
256 GB NVMe
1 TB HDD
i225 4x LAN

  • OpenWrt host -> Docker: worked well, almost perfect, but uptimes topped out at 2 to 3 hours. For a file server that's terrible. I thought it could be a problem with the OpenWrt kernel, as I faced similar issues trying to run QEMU > HAOS (Home Assistant), which also restarted periodically. Otherwise it would be perfect, since OpenWrt uses very little RAM.

  • Proxmox host -> LXC -> Docker: worked very well, but the same problem of uptimes of at most 2 to 3 hours, in addition to the DHCP=Y issue.

  • Proxmox host -> VM (Debian) -> Docker: worked well (I used DHCP=Y), but the same periodic restart problem. It seems to happen when RAM usage increases.

In addition to these problems, exposing services for automatic discovery (DLNA, for example) is quite complicated.

In the tests I did with the VM, I got an uptime of 23 hours, and I only stopped it to make the changes that @Deroy2112 posted.

I will continue testing.

The VM really wasn't 100%, but for my use it wouldn't be a problem to shut it down from within the vDSM UI. I tested @Deroy2112's solution and it was excellent.

@Deroy2112 thank you very much, it was excellent. I racked my brains trying to boot and use the Proxmox UI, but the problem was that boot.img and system.img had to be on scsi9/scsi10. Furthermore, the next disks must come after them (scsi11, scsi12, scsi13).

Your vdsm.sh script is great too. I'm new to Proxmox. I made your changes and it's working great. I just don't understand why, even when specifying the processor in host.bin, it is not shown in the DSM panel. But that's just a detail.

For anyone testing, it is important to use VirtIO SCSI single.

With this solution I can now use multiple networks, and discovery of Samba, DLNA and other services works well.


mndti commented Apr 5, 2024

Do you know how I can pass through the GPU of the Intel N5100?
I tried: -display egl-headless,rendernode=/dev/dri/renderD128 -vga virtio

But unfortunately it doesn't appear in vDSM.


r0bb10 commented Apr 14, 2024


It works almost perfectly but not 100%: the args have to be passed in a better way, and as far as I tested, upgrades won't work; for me at least it says the update is corrupted.


mndti commented Apr 15, 2024

@r0bb10

Check the previous posts, where @Deroy2112 posted a better solution. The topic now has all the information you need.

What exactly is the problem with the update? Here it is working normally; I updated and had no problems.

UPDATE

Regarding the processor, here is how it must be specified in host.bin so that it appears in the panel:
/mnt/vdsm/host.bin -cpu=4 -cpu_arch="Intel Celeron N5100,," -mac=00:00:00:00:00 -hostsn=HostSN -guestsn=GuestSN -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &
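If you use the hookscript from the earlier comment, the same flags belong on its pre-start line. A small sketch that reads the CPU model from the host instead of hard-coding it (an assumption, not something tested in this thread):

# Pull the CPU model string from /proc/cpuinfo and pass it to host.bin.
CPU_MODEL="$(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2- | xargs)"
/mnt/vdsm/host.bin -cpu=4 -cpu_arch="${CPU_MODEL},," -mac=00:00:00:00:00 -hostsn=HostSN -guestsn=GuestSN -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &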



r0bb10 commented Apr 20, 2024

Quoting @mndti: "What exactly is the problem with the update? Here it is working normally; I updated and had no problems."

Finally managed to make it work: the boot.img file had problems, so I recreated it and now it runs fine.

system.img on scsi10 does not technically need to be imported; a new disk of at least 12 GB can be attached in Proxmox and DSM will install itself onto it fresh.

Every other disk added (scsi11, scsi12) can be hot-plugged while the VM is running, and DSM will format it as Btrfs without asking permission. I did not test passing an entire physical disk directly, but it should work. Virtual DSM has no RAID support, so every attached disk is automagically added as a single-disk volumeX; the best approach is to aggregate the disks on the Proxmox host (mdadm?), pass the resulting device to the VM without any filesystem, and handle the rest in vDSM as a single Btrfs volume with snapshots (see the sketch below).
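A minimal sketch of that aggregation idea, assuming two spare host disks /dev/sdb and /dev/sdc and VM ID 100 (all device names and IDs are placeholders, untested here):

# Mirror two host disks into a single md device.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Attach the raw array to the VM on the next free SCSI slot; DSM will format it as a new volume.
qm set 100 --scsi11 /dev/md0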
