Is there any chance to directly change Docker VM to KVM? #674
Comments
This container runs fine on Proxmox and ESXi. It is theoretically possible to run the scripts outside of a container, but it would become very complicated to install, because it is not just a VM: there are multiple additional scripts involved (for example, for gracefully shutting down DSM). So the container just acts as a way to bundle all the dependencies and provides an easy way to start/stop the VM with all those additional scripts executed at the right times.
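For reference, a minimal sketch of the containerized setup this comment describes; the image name, port, and storage path are assumptions based on the project's public README, not taken from this thread:

```bash
# Hypothetical invocation of the virtual-dsm container.
docker run -it --rm --name dsm \
  --device=/dev/kvm \
  --cap-add NET_ADMIN \
  -p 5000:5000 \
  -v "$PWD/dsm:/storage" \
  --stop-timeout 120 \
  vdsm/virtual-dsm
# --device=/dev/kvm   : KVM acceleration inside the container
# --stop-timeout 120  : gives the graceful-shutdown scripts time to finish
# ./dsm:/storage      : persists the generated boot/system images between runs
```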
Excellent job, congratulations! I would also like to run it directly in a VM. I studied the command and managed to make it work in a VM after Docker generated the .img files and virtual disks.
Create the VM and, inside args, put: … After that, the VM will boot just as it was generated in Docker, and extra disks can be attached that DSM will read automatically. Note: despite booting and even being usable, I noticed that the widget takes a long time to load and Information Center > General does not load. I believe it has something to do with host.bin? Any idea? Sorry for the bad English.
@thiagofperes Impressive work! But I think you are still missing one part: graceful shutdown. The code in …
@thiagofperes thanks! You can also import the img files into Proxmox. boot.img goes in SCSI slot 9:

```bash
# Virtual DSM VM ID
VMID=100
# Virtual DSM storage name
VM_STORAGE=local-zfs

qm importdisk $VMID /mnt/vdsm/DSM_VirtualDSM_69057.boot.img $VM_STORAGE
qm importdisk $VMID /mnt/vdsm/DSM_VirtualDSM_69057.system.img $VM_STORAGE
qm set $VMID --scsi9 $VM_STORAGE:vm-$VMID-disk-0,discard=on,cache=none
qm set $VMID --scsi10 $VM_STORAGE:vm-$VMID-disk-1,discard=on,cache=none
qm set $VMID --boot order=scsi9
```

Additional hard drives continue from there. Proxmox VM args:

```bash
ARGS="-device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12345,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel"
qm set $VMID --args "$ARGS"
```

Graceful shutdown works via hookscript. Just an example:
Put this in the hookscript dir (e.g. the storage's snippets directory):

```bash
#!/bin/bash
set -o errexit -o pipefail -o nounset

vmId="$1"
runPhase="$2"

case "$runPhase" in
  pre-start)
    # Start host.bin in the background so DSM can reach the host service
    /mnt/vdsm/host.bin -cpu=2 -cpu_arch=processor -mac=00:00:00:00:00 \
      -hostsn=HostSN -guestsn=GuestSN -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &
    ;;
  post-start)
    ;;
  pre-stop)
    # Ask host.bin's API to send the guest a graceful shutdown (command 6)
    url="http://127.0.0.1:2210/read?command=6&timeout=50"
    curl -sk -m "$(( 50 + 2 ))" -S "$url"
    ;;
  post-stop)
    # Stop the background host.bin process
    hID=$(pgrep host.bin)
    kill $hID
    ;;
  *)
    echo "Unknown run phase \"$runPhase\"!"
    ;;
esac

echo "Finished $runPhase on VM=$vmId"
```

Assign the hookscript to the VM:

```bash
qm set $VMID --hookscript "local:snippets/vdsm.sh"
```

Then host.bin is executed at startup and stopped as soon as the VM is shut down/stopped.
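To sanity-check the hookscript wiring before relying on it, you can start and stop the VM and watch the host service; a small sketch, assuming the VMID and ports from the script above:

```bash
qm start 100
pgrep -a host.bin                           # should be running after pre-start
qm shutdown 100                             # pre-stop asks DSM to power off gracefully
pgrep host.bin || echo "host.bin stopped"   # cleaned up by post-stop
```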
Thank you very much for your project. Your Docker project works perfectly, and I tried using the same approach directly in a VM, but I faced some problems. Machine: …
In addition to these problems, exposing services for automatic discovery is quite complicated, DLNA for example. In the tests I did with the VM, I got an uptime of 23h and only stopped to make the changes that @Deroy2112 posted. I will continue testing. The VM really wasn't 100%, but for my use it wouldn't be a problem to shut it down from within the vDSM UI.

I tested @Deroy2112's solution and it was excellent. @Deroy2112 thank you very much. I racked my brains trying to boot and use the Proxmox UI, but the problem was that boot.img and system.img had to be on scsi9/scsi10, and subsequent disks must come after those (scsi11, scsi12, scsi13). Your vdsm.sh script is great too. I'm new to Proxmox; I made your changes and it's working great. I just don't understand why, even when specifying the processor in host.bin, it is not shown in the DSM panel. But that's just a detail.

For anyone testing: it is important to use VirtIO SCSI single. With this solution I can now use multiple networks, and discovery of Samba, DLNA and other services works well.
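For reference, the controller choice mentioned above can also be set from the CLI; VMID 100 is the example ID from earlier posts:

```bash
# Use the "VirtIO SCSI single" controller (one virtio-scsi-pci per disk),
# as recommended above.
qm set 100 --scsihw virtio-scsi-single
```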
Do you know how I can pass through the GPU of the Intel N5100? Unfortunately it doesn't appear in vDSM.
Works almost perfectly but not 100%: the args have to be passed in a better way, and as far as I tested, upgrades won't work; for me at least it says corrupted.
Check the previous posts where @Deroy2112 posted a better solution; the topic now has all the information you need. What exactly is the problem with the update? Here it is working normally; I updated and had no problems.

UPDATE: Regarding the processor, here is how it must be specified in host.bin to appear on the panel.
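The exact flag value was not preserved in this thread; a plausible sketch, assuming the -cpu/-cpu_arch flags from the hookscript above and that -cpu_arch carries the model string DSM displays:

```bash
# Assumption: -cpu_arch takes the CPU model string shown in DSM's
# Information Center; the model name below is only an example.
/mnt/vdsm/host.bin -cpu=2 -cpu_arch="Intel(R) Celeron(R) N5100" \
  -mac=00:00:00:00:00 -hostsn=HostSN -guestsn=GuestSN \
  -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &
```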
Finally managed to make it work: the boot.img file had problems, so I recreated it and now it runs fine. system.img on scsi10 does not technically need to be imported; a new disk of at least 12 GB can be attached in Proxmox and DSM will install itself into it fresh. Every other disk added (scsi11, scsi12) can be hot-added while the VM is running, and DSM will format them in Btrfs without asking permission. I did not test passing an entire disk directly, but it should work. Virtual DSM has no support for RAID, so every disk attached will be added automagically as a single-disk volumeX; the best way is to aggregate the disks in Proxmox (mdadm?), pass the result without any filesystem to the VM, and handle the rest in vDSM as a single volume with Btrfs and snapshots.
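A small sketch of what "attach a fresh disk" looks like on the CLI; the storage name and VMID are the examples from earlier posts, and the `storage:size` syntax allocates a new volume of that many GB:

```bash
# Fresh 12 GB system disk on scsi10; DSM installs itself onto it.
qm set 100 --scsi10 local-zfs:12,discard=on
# Extra data disk on scsi11; can be hot-added while the VM is running.
qm set 100 --scsi11 local-zfs:32,discard=on
```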
This is an outstanding project, and I'm hugely appreciative of the developers' contributions. I noticed that the environment employs Docker to invoke KVM. I'm wondering if anyone has ever tried to bypass Docker and run the application directly in KVM? For instance, utilizing Virtual DSM directly in a Proxmox VE or ESXi VPS.

Why do I propose this? Upgrading the system using arpl equates to a ticking time bomb. Moreover, arpl isn't very accommodating to VM installations; perhaps arpl is more apt for installations on physical machines.