It's quite easy to convert from bare metal (physical) to virtual (P2V) without losing any data. DSM and its configuration are stored on your data disks, so all you need to do is virtualize the bootloader (Xpenology) - i.e. replace the USB key with a VMDK attached to a virtual machine. You then attach your existing disks to the virtual machine and it retains all of your configuration and data. If you match the bootloader versions on the physical and virtual machines, you should avoid triggering a DSM upgrade during the P2V.
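For the bootloader conversion itself, one common approach is to convert the boot image to a VMDK with qemu-img on a workstation, upload it to a datastore, and attach it to the VM as an existing disk. A minimal sketch - the filenames and 'datastore1' are illustrative, and it assumes you already have the synoboot .img matching your DSM version:

qemu-img convert -f raw -O vmdk -o subformat=monolithicFlat synoboot.img synoboot.vmdk
# upload synoboot.vmdk and synoboot-flat.vmdk to the datastore, then (if the host
# won't attach the flat descriptor directly) clone it into a VMFS-native disk:
vmkfstools -i /vmfs/volumes/datastore1/synoboot.vmdk /vmfs/volumes/datastore1/synoboot-esxi.vmdk -d thin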
When trying this for the first time, I was in the fortunate position of having several old PCs and disks lying around, so I was able to build a new bare metal server and P2V that to the T20 before attempting it with my main server (with a fully loaded 4x3TB SHR array). If you have access to some spare kit, I would encourage you to do the same.
On vSphere/ESXi, there are two ways to present your existing disks to the virtual machine:
1. DirectPath I/O - Install a PCI-e SATA adaptor in the T20, connect your existing disks to the adaptor and pass the adaptor through to the new VM. DSM will see all of the disks natively and will have access to SMART data. This is the route I went down when P2Ving my server several years ago. There are some downsides to this route that affect your ability to manage the VM (e.g. all the RAM you configure for the VM is reserved for it and can't be shared with other VMs, and you can't take snapshots of the VM for backup purposes). In practice, these limitations have not been a problem for me; see the shell sketch after this list for confirming ESXi can see the adaptor. I think this is the PCI-e card I'm using: https://www.amazon.co.uk/gp/product/B00AZ9T3OU?psc=1&redirect=true&ref_=oh_aui_detailpage_o00_s00 (I seem to recall that the Marvell chipset was important for DSM support at the time I bought it).
2. Raw Device Mapping (RDM) - Disks can be attached to the VM individually as RDMs. This involves creating links to the physical disks on the ESXi command line, then attaching those links to the VM as disks. This configuration gives you a bit more flexibility from an ESXi perspective, but you do lose visibility of SMART data in DSM.
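On the passthrough route, before enabling DirectPath I/O (the toggle itself lives in the vSphere client, under the host's PCI devices settings), it's worth confirming from the ESXi shell that the SATA adaptor shows up as its own PCI-e device. A quick sketch - the grep string is just illustrative:

lspci | grep -i sata
esxcli hardware pci list

The second command shows the full detail, including which driver currently claims the adaptor.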
My own T20 (24GB RAM) is currently running 5 instances of DSM, using a mixture of both approaches above: e.g. the main file server has the PCI-e adapter passed through with several disks attached, while the surveillance server has a single 2TB surveillance drive (WD Purple) attached as an RDM.
Here are my own notes on how to pass a drive through via RDM...
List the physical disks:
ls -l /vmfs/devices/disks
Identify the 2TB surveillance disk in the output (the device name embeds the disk's model and serial number):
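To narrow the listing down, you can grep for the model - a small example using my WD Purple's model string (substitute your own):

ls -l /vmfs/devices/disks | grep -i WD20PURX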
Create RDM mapping:
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD20PURX2D64P6ZY0_________________________WD2DWCC4M3XP0D4A "/vmfs/volumes/Samsung_250GB_SSD/RDM/WD_2TB_RDM_1.vmdk"
Note: 'Samsung_250GB_SSD' is my primary vSphere datastore, so this is where I create the RDMs.
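For reference, -z creates the mapping in physical compatibility mode (vmkfstools also offers -r for virtual compatibility mode). You can sanity-check the mapping file before attaching it - vmkfstools -q queries an RDM descriptor and reports what it points at:

vmkfstools -q /vmfs/volumes/Samsung_250GB_SSD/RDM/WD_2TB_RDM_1.vmdk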
Add 'Existing Disk' to VM:
Set RdmFilter.HbaIsShared to TRUE in Advanced Configuration
Assign to SATA adapter 1
Set Disk Compatibility to 'Virtual'
Set Disk Mode to 'Independent - persistent'
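I set the RdmFilter flag through the UI, but host advanced options are also exposed via esxcli if you prefer the shell - a hedged sketch, as the exact option path here is my assumption:

esxcli system settings advanced list -o /RdmFilter/HbaIsShared
esxcli system settings advanced set -o /RdmFilter/HbaIsShared -i 1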
More here: https://gist.github.com/Hengjie/1520114890bebe8f805d337af4b3a064
I hope this is of some help, and good luck!