XPEnology Community

Tutorial: DSM 6.x on Proxmox


fonix232


After fiddling with it for a day, and getting it to work, I thought it would be nice to share the knowledge, especially since "officially" only ESXi is supported, and ESXi is picky about supporting stuff...

 

The main reason for me to move to a hypervisor was that Synology has not yet introduced NVMe support AT ALL. Even with a kernel driver, Storage Manager will not recognize the drive as an SSD, an HDD, or anything else. Synology is silent about this, even though some have requested it on their forums (they do not have a model with an M.2 NVMe connector yet, but they do have models with full-sized x16 or x4 PCIe slots, which can be used with an adapter card for NVMe).

 

So I decided to try a hypervisor. On one hand it makes installation and upgrades easier, and I don't need a display connected (since most hypervisors provide a VNC connection to the guest OS). On the other hand, I can install the measly 2-3GB hypervisor and all the tools on the NVMe SSD, and have the rest mounted as a VMDK (or any other virtual disk file). The rest of the hard drives would use passthrough, of course.

 

I fiddled around with multiple options. XenServer simply refused to boot in UEFI mode; ESXi does not support my network adapter or my built-in SATA controller (B250 chipset); Microsoft's Hyper-V Server has issues if you do not have a domain controller on the network, and as soon as the display output goes off, the device drops its network connection.

 

That left me with Proxmox. I had never used it, and during installation I had some issues with the bootloader (on both the 4.4 release and the 5.0 beta). Luckily there's a workaround: since Proxmox is based on Debian, you can use the Debian netinst image, create a very basic system, and install Proxmox on top. I won't bore you with the details; there are enough guides about installing it to make me think twice before writing an (n+1)th version.

 

So let's begin!

 

Requirements:

  • A working install of Proxmox 5.0 - it can be 4.4 too, but I only tested this on 5.0. Follow the guide to create the bridged network interface!
  • The loader you wish to use. I recommend Jun's Loader, specifically 1.02a2 at the time of the writing of this guide.

 

Steps:

0. Edit the loader (if needed) to your liking - MAC address, serial number, etc. This is especially important if you have multiple XPE systems on the same network.
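
If you prefer to do this directly on the Proxmox host, here is a minimal sketch. It assumes a Jun 1.02-style image where the values live in grub/grub.cfg on the image's first partition; other loader versions may differ, so adjust the paths as needed:

losetup -Pf --show synoboot.img                # maps the image, e.g. to /dev/loop0 (partitions show up as /dev/loop0p1, ...)
mkdir -p /mnt/synoboot && mount /dev/loop0p1 /mnt/synoboot
nano /mnt/synoboot/grub/grub.cfg               # adjust the set sn= / set mac1= (and vid/pid) lines
umount /mnt/synoboot && losetup -d /dev/loop0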

 

1. Create a new VM in Proxmox.

 

1.1 Set the name, and make sure to note down the VM ID (if it is your first VM, it should be 100). I'll be using {VM_ID} as a placeholder from now on.

 

1.2 OS type should be "Linux 4.x/3.x/2.6 Kernel".

 

1.3 Set the CD/DVD to "Do not use media" (we will remove the virtual CD/DVD drive later anyway).

 

1.4 For the HDD, create a new virtual disk in the format of your liking (qcow2, vmdk or raw); this will be the initial drive. I made sure it uses nearly the whole storage of the OS drive it lives on (in my case a 256GB NVMe SSD, which after setup and partitioning resulted in a 226GiB root drive with 211GB free, so I set the virtual disk's size to 200GB). You can set it to any kind of bus EXCEPT VirtIO. With VirtIO I had performance issues, so I went with SCSI (it supports up to 12 devices anyway, so it is better). This applies to the virtual disk only; VirtIO works just fine with passthrough devices. So apart from the bus, size and format, you don't need to touch a thing.

 

1.5 For CPU, set kvm64 with as many cores as your host has (including virtual cores if your CPU supports Hyper-Threading!). In my case, with the Intel G4560, this is 4.

 

1.6 For RAM, you should leave some slack for the host OS; I went with 7.5GB of the 8GB I have. Ballooning is not required.

 

1.7 Networking. This is where many things can go wrong. The VirtIO paravirtualized network adapter should work, but to be safe I went with the Intel E1000. On the left select Bridged Mode, with the previously created bridge as the first choice. You can also enable Firewall if you do not trust Syno's own. Leave the rest of the settings as default.

 

1.8 On the Confirm page, confirm your settings and create the VM.
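
For reference, roughly the same VM can be created from the Proxmox shell in one go. This is only a sketch, assuming VM ID 100, the default "local" directory storage and bridge vmbr0 - adjust the values to your own setup:

qm create 100 --name xpenology --ostype l26 \
  --cpu kvm64 --cores 4 --memory 7680 --balloon 0 \
  --net0 e1000,bridge=vmbr0,firewall=1 \
  --scsi0 local:200,format=qcow2 \
  --ide2 none,media=cdrom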

 

2. After the VM is created, the first thing to do is remove the virtual CD/DVD drive (ide2, if everything went right). Then comes the hard part.

 

3. You have to add to the config file each and every HDD that you want to use for passthrough. The command is simple:

qm set {VM_ID} -[protocol][port] /dev/disk/by-id/[disk-id]
 

 

The {VM_ID} part is obvious, but what about the rest?

 

[protocol] is the connection protocol you want to use. This can be sata, ide, scsi or virtio. I'm using SATA here, but you can use almost anything (IDE is not IDEal for us). SATA supports up to 6 devices (port indexes 0-5), SCSI supports up to 12 devices, and virtio does not have a limitation to my knowledge.

 

[port] is the first unused port of said protocol. E.g. if you set the initial disk during setup to SATA0, and you want to use SATA further here, you have to start numbering from 1.

 

[disk-id] is the unique identifier of your HDD. Go to /dev/disk/by-id/ and list the disks you see. For most SATA devices, you'll see entries like "ata-[MANUFACTURER]_[MODEL]_[SERIAL]".
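
A quick way to get a clean list of the whole-disk IDs (hiding the per-partition entries), assuming standard SATA disks:

ls -l /dev/disk/by-id/ | grep ata | grep -v part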

 

So let's say I have 4 disks, with the disk-id's ata-1, ata-2, ata-3, and ata-4 (yours will be a lot longer, but don't worry, you can use the bash autocomplete with the tab key). For this to work I execute the following commands:

qm set 100 -sata1 /dev/disk/by-id/ata-1
qm set 100 -sata2 /dev/disk/by-id/ata-2
qm set 100 -sata3 /dev/disk/by-id/ata-3
qm set 100 -sata4 /dev/disk/by-id/ata-4
 

 

Of course later on you can add further HDDs to a VM config by using the same command, just keep in mind the limitations of the protocols.

 

4. Now comes the hard part, we'll have to add the bootloader image file to the config.

 

The config file is located under /etc/pve/qemu-server/, and is named {VM_ID}.conf. Open it with nano.

 

This config file defines everything about the VM: disks to mount, which device to use for booting, RAM amount, CPU cores, name of the VM, et cetera. Don't touch anything other than the lines you see here!

 

Copy the synoboot.img to somewhere on your server. If you want to be consistent with the Proxmox setup, copy it under /var/lib/vz/images/{VM_ID}/ - you'll need root for that.
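
For example (a sketch - replace /path/to/ with wherever you uploaded the image):

mkdir -p /var/lib/vz/images/{VM_ID}
cp /path/to/synoboot.img /var/lib/vz/images/{VM_ID}/synoboot.img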

 

After that, come back to the conf file and open it again. You'll enter a few things here, so make sure you pay attention!

 

Enter the following line into the conf file, and make sure you replace the parts in the path!

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/{VM_ID}/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'
 

 

Make sure to replace the path to the synoboot.img with your own!

 

One more thing to edit here, the boot device. Find the line that begins with "boot: " and replace it so it looks like this:

boot: synoboot
 

 

Save the file, then start the VM. It should start up without an issue, and you'll be able to use find.synology.com to find your server and install it.
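
Putting steps 3 and 4 together, the relevant lines of /etc/pve/qemu-server/{VM_ID}.conf should end up looking roughly like this (shown with the example disk IDs from step 3 - your IDs and paths will differ):

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/{VM_ID}/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'
boot: synoboot
sata1: /dev/disk/by-id/ata-1
sata2: /dev/disk/by-id/ata-2
sata3: /dev/disk/by-id/ata-3
sata4: /dev/disk/by-id/ata-4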


Jun's loader doesn't seem to support virtio_net and virtio_scsi. So Network: VirtIO (paravirtualized) doesn't seem to work, and HDD: SCSI doesn't seem to work either. I'm using Proxmox 5.0 beta2.

 

Maybe someone else (a Proxmox user) can try/test to get the network and HDD (VirtIO paravirtualized and SCSI) to work?


On 05.06.2017 at 8:01 PM, fonix232 said:

4. Now comes the hard part, we'll have to add the bootloader image file to the config.

In fact, everything is much simpler!

1. Create an HDD; I used SATA0, RAW format (because synoboot.img is a raw disk image).

2. Replace the resulting disk-1.raw file with synoboot.img (in the same directory), renaming it to match - see the sketch after this list.

3. In the UI, detach the disk (once!). It becomes an unused disk. Then double-click it and re-attach it as sata0, raw. It will be displayed as 50MB.

4. Profit!!!
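
As a rough sketch of step 2 (this assumes the default "local" directory storage, so the disk file lives under /var/lib/vz/images/{VM_ID}/ - check the actual file name with ls first, it may differ):

cd /var/lib/vz/images/{VM_ID}
ls -la                                               # note the name of the freshly created raw disk
mv vm-{VM_ID}-disk-1.raw vm-{VM_ID}-disk-1.raw.bak   # keep the empty placeholder, just in case
cp /path/to/synoboot.img vm-{VM_ID}-disk-1.raw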

On 05.06.2017 at 8:01 PM, fonix232 said:

1.7 Networking.

Only e1000, this has already been discussed - Virtio does not work!


Running XPEnology 6.1 in Proxmox 5 Beta 2. I can't get Synology's Virtualization Manager to work. When I try to start a virtual machine, the log says the following. Has anyone got it to work?

 

2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: MemSize: ccc/reservation.cpp:477 Start to op(3), ba03cfbc-00ff-451a-aef5-b814a2623589: +1073741824, orig 0
2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: core/utils.cpp:895 Failed to mkdir: /dev/virtualization/libvirt/qemu, error: No such file or directory
2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: MemSize: ccc/reservation.cpp:537 Failed to mkdir
2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: MemSize: ccc/reservation.cpp:488 Failed to op(1), ba03cfbc-00ff-451a-aef5-b814a2623589: 1073741824
2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: ccc/guest.cpp:1575 Failed to edit reservation resource for [ba03cfbc-00ff-451a-aef5-b814a2623589] ret [-2], mem: 1024 / 0, cpu: 4 / 4
2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: ccc/guest.cpp:1686 Failed to prepare for guest [ba03cfbc-00ff-451a-aef5-b814a2623589]
2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: Guest/guest.cpp:3141 Failed creating guest [ba03cfbc-00ff-451a-aef5-b814a2623589] reason [-7]

 


I'm doing this a bit simpler. Create the VM with e1000 networking and two SATA drives: one for the loader (small, 1GB), the other for storage (as much space as I need). Then scp/copy synoboot.img to the Proxmox host and just dd if=synoboot.img of=vm-disk.1gb.raw to write the loader onto the first drive. During install all drives are wiped by default, so the system will not boot until I repeat the dd command AFTER the install. That's all - XPEnology on Proxmox is up and running. I once encountered a "recovery" state after install - just press the "Recover" button and it will reboot into a normal state. Still no luck with virtio NET drivers - they are usually supported by the kernel itself. I also have not yet tried the qemu-agent feature.
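
A sketch of that workflow, assuming the loader is the first (1GB) SATA drive stored as a raw file under /var/lib/vz/images/{VM_ID}/ - the actual file name will differ, so check it first:

scp synoboot.img root@your-proxmox-host:/root/
dd if=/root/synoboot.img of=/var/lib/vz/images/{VM_ID}/vm-{VM_ID}-disk-1.raw bs=1M conv=notrunc
# repeat the dd command once more AFTER the DSM installation has wiped the drives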


Some small additions (I did not use this guide, but I think it's more or less a good start):

 

1. If you want to add a serial console to the KVM guest, just add `serial0: socket` to {VM_ID}.conf and then use qm terminal {VM_ID} to connect to the console.

This lets you see the bootstrap, spot potential issues, log in via tty, and follow installation/migration progress.
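
In other words (run on the Proxmox host):

echo 'serial0: socket' >> /etc/pve/qemu-server/{VM_ID}.conf
qm terminal {VM_ID}          # attach to the guest's serial console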

 

2. virtio for NIC/drives is not supported by the Jun loader, not even the current 1.02b. So stick to SATA and e1000 here.

 

3. You might want to just convert the synoboot.img to a qcow2 image and use it as a drive : qemu-img convert synoboot.img -O qcow2 vm-100-disk-1.qcow2

Just create a qcow2 disk first (any size), then convert the image and replace the disk file at /var/lib/vz/images/{VM_ID}/vm-100-disk-1.qcow2.

Then edit /etc/pve/qemu-server/{VM_ID}.conf and fix the "size" parameter of the drive to match the actual file size (check with ls -la vm-100-disk-1.qcow2).
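
A quick sketch of point 3, assuming VM ID 100 and the default directory storage:

qemu-img convert -O qcow2 synoboot.img /var/lib/vz/images/100/vm-100-disk-1.qcow2
ls -la /var/lib/vz/images/100/vm-100-disk-1.qcow2    # note the actual file size
nano /etc/pve/qemu-server/100.conf                   # fix the size= parameter of that drive to match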

 

4. Be sure to _also_ install a second test VM where you do not use passthrough drives but rather images - install the same DSM version and clone/snapshot/back up this VM.

When you plan to upgrade your production one, snapshot the test VM and try to upgrade the bootloader/DSM there - if it fails, just revert to your old snapshot, report it on the forum and wait until it gets fixed. Once it is fixed and works, make a new backup and then update your production system.

 

Why is this important? Because VM backups/snapshots of the production NAS won't help, since you use passthrough drives.

 

---

 

I am running this on PVE 4.x stable (latest); I had it running with Jun 1.0.1 + DSM 6.0.2 and today upgraded to 1.02b + 6.1.2.


On 6/17/2017 at 1:02 AM, wenlez said:

If I add "boot: synoboot" to the VM's conf file, I can't get it to boot the bootloader. In the VM's Options, the boot disk is shown as "s,y,Network..".


 

That is because it is improperly configured.  Hit ESC at boot and find out what boot number your USB synoboot.img is.  In my case it is number 7.

[Screenshot: Proxmox boot device list, with the USB synoboot drive listed as entry 7]

 

Edit your {VM_ID}.conf with vi or nano: /etc/pve/qemu-server/{VM_ID}.conf

 

It should look something like this when complete:

args: -device 'piix3-usb-uhci,addr=0x18' -drive id=synoboot,file=/var/lib/vz/images/100/synoboot.img,if=none,format=raw -device usb-storage,id=synoboot,drive=synoboot
boot: 7
cores: 2
ide2: none,media=cdrom
memory: 4096
name: NAS-X2
net0: virtio=EA:5C:48:D3:D9:D5,bridge=vmbr1,tag=1010
net1: virtio=6A:53:32:11:A8:9B,bridge=vmbr2,tag=1020
net2: virtio=46:02:AF:FA:F6:F0,bridge=vmbr2,tag=1060
numa: 0
onboot: 1
ostype: l26
sata0: /dev/disk/by-id/ata-SAMSUNG_HD203WI_S285J1RZ601531,backup=0,cache=writeback,size=1953514584K,serial=S285J1RZ601531
sata1: /dev/disk/by-id/ata-SAMSUNG_HD203WI_S285J1RZ601548,backup=0,cache=writeback,size=1953514584K,serial=S285J1RZ601548
sata2: /dev/disk/by-id/ata-SAMSUNG_HD203WI_S285J1RZ601803,backup=0,cache=writeback,size=1953514584K,serial=S285J1RZ601803
sata3: /dev/disk/by-id/ata-INTEL_SSDSC2BA400G3_BTTV321000E5400HGN,backup=0,cache=writeback,size=390711384K,serial=BTTV321000E5400HGN
sata4: /dev/disk/by-id/ata-MKNSSDCR60GB_MK140429AS1354202,backup=0,cache=writeback,size=58615704K,serial=MK140429AS1354202
sata5: /dev/disk/by-id/ata-MKNSSDCR60GB_MKN1502R000080784,backup=0,cache=writeback,size=58615704K,serial=MKN1502R000080784
scsihw: megasas
serial0: socket
smbios1: uuid=7372bb90-1200-4c79-87cb-5b50379cacb5
sockets: 1

 

Note: the serial0: socket line allows you to connect to the guest's tty from the Proxmox host with qm terminal {VM_ID} (sockets: 1 is just the CPU socket count).


10 hours ago, mXfored said:

[...] 2. virtio for NIC/drives is not supported by the Jun loader, not even the current 1.02b. So stick to SATA and e1000 here. [...]

 

virtio drivers are supported in my loader which will be released once I iron out ESXi upgrades.  KVM and baremetal are solid though.  


 

Performance is also great, as this iperf3 run shows:

DiskStation> iperf3 -c 192.168.0.2 -i 1 -t 20 -R  
Connecting to host 192.168.0.2, port 5201
Reverse mode, remote host 10.1.0.2 is sending
[  4] local 192.168.0.212 port 44805 connected to 10.1.0.2 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   844 MBytes  7.08 Gbits/sec                  
[  4]   1.00-2.00   sec   879 MBytes  7.38 Gbits/sec                  
[  4]   2.00-3.00   sec   898 MBytes  7.53 Gbits/sec                  
[  4]   3.00-4.00   sec   768 MBytes  6.44 Gbits/sec                  
[  4]   4.00-5.00   sec   883 MBytes  7.41 Gbits/sec                  
[  4]   5.00-6.00   sec   899 MBytes  7.54 Gbits/sec                  
[  4]   6.00-7.00   sec   900 MBytes  7.55 Gbits/sec                  
[  4]   7.00-8.00   sec   878 MBytes  7.37 Gbits/sec                  
[  4]   8.00-9.00   sec   907 MBytes  7.61 Gbits/sec                  
[  4]   9.00-10.00  sec   883 MBytes  7.40 Gbits/sec                  
[  4]  10.00-11.00  sec   906 MBytes  7.60 Gbits/sec                  
[  4]  11.00-12.00  sec   828 MBytes  6.94 Gbits/sec                  
[  4]  12.00-13.00  sec   898 MBytes  7.54 Gbits/sec                  
[  4]  13.00-14.00  sec   910 MBytes  7.64 Gbits/sec                  
[  4]  14.00-15.00  sec   758 MBytes  6.36 Gbits/sec                  
[  4]  15.00-16.00  sec   753 MBytes  6.32 Gbits/sec                  
[  4]  16.00-17.00  sec   841 MBytes  7.05 Gbits/sec                  
[  4]  17.00-18.00  sec   960 MBytes  8.05 Gbits/sec                  
[  4]  18.00-19.00  sec   858 MBytes  7.20 Gbits/sec                  
[  4]  19.00-20.00  sec   933 MBytes  7.83 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-20.00  sec  17.0 GBytes  7.29 Gbits/sec    0             sender
[  4]   0.00-20.00  sec  17.0 GBytes  7.29 Gbits/sec                  receiver

iperf Done.

 

And it is better to use synoboot.img as a virtual USB drive, because then it properly works with upgrades.


3 hours ago, quicknick said:

virtio drivers are supported in my loader which will be released once I iron out ESXi upgrades.  KVM and baremetal are solid though.  

Please give me a link to the forum discussion about your configuration tools, because the old link is dead.
Thank you!


5 hours ago, quicknick said:

 

virtio drivers are supported in my loader which will be released once I iron out ESXi upgrades. KVM and baremetal are solid though. [...] And it is better to use synoboot.img as a virtual USB drive, because then it properly works with upgrades.

 

What is your boot image? I had no issues with upgrades at all using it as a qcow2 image - could you explain a bit more?


7 hours ago, quicknick said:

Also performance is great as iperf3 has proved great performance.

Great, nice!!! Please give me a link to this bootloader! :smile:

 

On 17.06.2017 at 12:02, wenlez said:

I add "boot: synoboot" to the VM's conf file

What for? That option just controls booting after a Proxmox reboot, or more precisely the boot order - 1, 2, 3, 4...


17 hours ago, quicknick said:

 

virtio drivers are supported in my loader which will be released once I iron out ESXi upgrades.  KVM and baremetal are solid though.  


 

Virtio support!!! Nice!!! Can't wait for the release of your loader. You, Sir, are my hero!


On 6/19/2017 at 4:02 PM, lemon55 said:

why /dev/disk/... but not /dev/sd* ?

Because if your disk changes names, it will still work. For example, if /dev/sda changes to /dev/sdb, a passthrough that references /dev/sda will fail. Referencing disks by their ID means the device names can change and the setup will still work properly.


On 6/19/2017 at 4:31 AM, mXfored said:

 

What is your boot image? I had no issues with upgrades at all using it as a qcow2 image - could you explain a bit more?

The reason I recommend using it as a USB image rather than making it a vdisk is that you don't waste a SATA or SCSI port, and the disk stays hidden as synoboot instead of being accessible inside DSM.

