XPEnology Community

ESXI physical RDM vs controller passthrough!


Recommended Posts



One quick question:


Is there any performance/stability difference between using physical raw disk mapping (pRDM) in ESXi vs whole-controller passthrough in XPEnology?


As I understand it, the difference between the two is that in the pRDM case you have a virtual controller that passes all the native SATA commands through to the physical disk.

So in theory, XPEnology should get all the commands from the pRDM as it would in case of a controller passthrough. It gets the SMART status but not the temperature of the disk when using pRDM. I don't know how to test the disk spindown to check if it works.
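For what it's worth, one way to check spindown from an SSH shell inside the DSM VM is to query the drive's power state with hdparm. This is just a sketch: it assumes hdparm is available in the guest and that /dev/sda is the pRDM disk (adjust the device to match your setup).

```shell
# Check the drive's current power state (active/idle vs. standby);
# CHECK POWER MODE does not itself spin the drive up.
hdparm -C /dev/sda

# Ask the drive to enter standby immediately, then re-check.
hdparm -y /dev/sda
hdparm -C /dev/sda   # "standby" here would suggest the command reached the physical disk
```

If the state never changes, the virtual controller is probably not passing the power-management commands through.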


As I only have the integrated Intel SATA controller, I don't want to pass it through because that leaves me with little to no flexibility, and I would need to boot the XPEnology VM from a USB-stick datastore. There is also no RAM ballooning when passing the whole controller to the VM.


Does anybody use pRDM with XPEnology?


Thank you!

Link to comment
Share on other sites

Mind if I add to this - as I think it's related - but apologies for hijacking! :smile:


On ESXi, what sort of transfer rates are people getting with DSM6.0 installed - for a single Gigabit link (across the LAN) for large files?

Would be useful to know if there was any noticeable difference between the RDM and controller passthrough the OP mentioned too!


Basically, if the performance is up to scratch, I might consider moving from a Bare Metal install to a VM (with DSM on a VM)..

Gives me more flexibility! :smile:






At the moment I get 45MB/sec if I copy a large MKV from my DSM VM to my desktop PC. The network is all Gigabit and there are 2 or 3 switches between the host and the workstation (there are two paths as I have load balanced NICs). Transfers were faster when I ran bare metal DSM on a considerably slower PC, IIRC, I could move files at somewhere between 80-100MB/s.


However, I can transfer files to a Windows 10 VM running on the same host at over 100MB/Sec. So I don't think that there's an issue with passthrough on the SATA adapter, but I may have a problem with my NIC or vSwitch config. I tried disabling one of the NICs, but it didn't make any difference.
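To separate the network side from the disk side, it may help to measure raw TCP throughput with iperf3 between the workstation and the DSM VM. A sketch, assuming iperf3 is installed on both ends and 192.168.1.10 stands in for the DSM VM's address:

```shell
# On the DSM VM (server side):
iperf3 -s

# On the workstation (client side), run a 30-second test:
iperf3 -c 192.168.1.10 -t 30
# A healthy gigabit link should report roughly 930-940 Mbits/sec.
# Much less points at the NIC/vSwitch config; full speed points at the disks.
```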


I tested with pRDM and passthrough. There is little to no performance penalty between the two. I get a constant 112-115MB/s transfer rate for large files from a Windows 10 PC to XPEnology on the same network.


The downside of using pRDM is that it cannot read SMART from the drive (I think it has to do with the virtual SCSI controller; I tried every option - VMware Paravirtual, LSI SAS and LSI Parallel - and XPEnology doesn't see the SMART data or the temperature of the drive). So I ended up giving the controller to XPEnology and running the bootloader from a USB datastore. Maybe I will buy a cheap PCIe SATA controller for the datastore.


L.E.: I am using VMXNET3 for the network adapter. With the Intel e1000 the transfer speeds are ridiculously low.


Very odd..

The only reason I asked the above question was that the last time I built on ESXi, I was only getting 45MB/sec transfer rates..

I can't use RDM as I don't have a SAN and am not sure how I can make use of them (for the home user) when you're trying to create storage in the first place (the NAS)..

So I'm basically using virtual disks on fairly old (250GB) disks that I had lying around - on old hardware (core2quad) - to test it.


It starts off at about 112MB/sec then heads fairly quickly down to 30-45MB/sec

I've tried installing the VMware tools Pat file (seems to improve things slightly - but just takes a bit longer for performance to drop).

Write latency on my two disks on my test box is very high.. Up to 2000ms..





Well, as I said, I gave the whole SATA controller to the XPEnology VM.


You can create a physical RDM following this page


https://kb.vmware.com/selfservice/micro ... Id=1017530
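For reference, the gist of that KB article is a single vmkfstools command run from the ESXi shell. The device identifier and paths below are placeholders for illustration, not values from my system:

```shell
# Find the device identifier of the physical disk:
ls -l /vmfs/devices/disks/

# Create a physical-mode (passthrough) RDM pointer file on an existing datastore
# (-z is physical compatibility; -r would create a virtual-mode RDM):
vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/<datastore>/<vm-name>/disk1-rdm.vmdk
```

You then attach the resulting .vmdk to the VM as an existing disk.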


The downside of using passthrough is that if you don't have a second SATA controller, you are stuck using a USB stick as a datastore for the XPE bootloader and then creating an iSCSI target in Synology to make a datastore in ESXi from it.


The 112-115MB/s transfer rate is stable for the whole transfer (a 16GB raw image).


When I first started looking at this, transferring a 5GB ISO to the disk, it took about 8-10 seconds before transfer speeds dropped from 112MB/s to 20-45MB/s.

A few changes (mainly installing VMware Tools 9.4 using the manual package installer) and this has gone up to about 30-odd seconds before it drops. That's 30+ seconds of saturating the Gigabit link at about 112MB/sec..


Does anyone have the latest VMware Tools for ESXi 6, please? I think it's on version 10 or something.

The SPK file I tried installing was open-vm-tools_bromolow-5.1_9.4.6-1770165-3.spk


Progress, but I'm hoping the issue is drivers, rather than anything else. I have nothing else running on this ESXi box, it's solely to test DSM6.0.




edit.. Found version open-vm-tools_bromolow-5.1_9.10.0-2476743-1.spk testing it now.


I did a bit more testing...


Copy 1.2GB MKV from main 'production' DSM VM to workstation = 45MB/Sec (data is hosted on HDDs connected via a Marvell SATA adapter which is passed through with DirectPath I/O)


Copy 1.2GB MKV from 'test' DSM VM to workstation = 80MB/Sec (Data hosted on SSD data store)


Both DSM VMs are at the same version and are hosted on the same vSwitch. So maybe there is an issue with DirectPathIO on the cheap Marvell adapter that I'm using? I wonder if it's worth swapping it out for a cheap SAS HBA off eBay...


edit.. Found version open-vm-tools_bromolow-5.1_9.10.0-2476743-1.spk testing it now.


I can confirm that it's the latest one. It works with DSM6, even though I am curious about HOW it works...

On a plain Linux installation you need to have the kernel headers installed in order to compile open-vm-tools, which suggests that kernel modules are being compiled.


The VMware drivers behind RDM only implement a subset of the required SCSI/SATA commands, which is insufficient for the DSM SMART test.


sh-4.3# smartctl --all /dev/sda
smartctl 6.5 (build date May 11 2016) [x86_64-linux-3.10.77] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Vendor:               VMware
Product:              Virtual
Revision:             0000
User Capacity:        107,374,182,400 bytes [107 GB]
Logical block size:   512 bytes
LU is fully provisioned
Logical Unit id:      0x5000c295a48bbb15
Serial number:        00000000000000000001
Device type:          disk
Local Time is:        Wed Nov  2 20:44:48 2016 CET
SMART support is:     Unavailable - device lacks SMART capability.

Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

[GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
Device does not support Self Test logging


If you want SMART, you either need a bare-metal installation or to forward an HBA controller to your VM using DirectPath I/O. With DirectPath I/O the VM requires a static RAM configuration.


Actually, I only use DirectPath I/O with DSM 5.2 VMs, and I am constantly able to max out the Gbit link (~120MB/s). When I access a 10Gbit vNIC on DSM 5.2 from a VM on the same ESXi host (also with a 10Gbit vNIC, of course), I get speeds around ~160MB/s from my WD Red drives. In the beginning I used RDM, though, and I would remember if the speed had been dramatically less. The missing SMART information made me buy an additional HBA and use DirectPath I/O with the DSM VM. It has been working rock solid since then...


Since DSM6 cannot be considered stable for the time being, I just run RDM systems with it...


I did a bit more testing...

Copy 1.2GB MKV from main 'production' DSM VM to workstation = 45MB/Sec (data is hosted on HDDs connected via a Marvell SATA adapter which is passed through with DirectPath I/O)

Try it with a 4 or 5GB file. I can copy 1.2GB files before the issue hits (1.2GB would be done in about 10-12 seconds).

Actually, since first checking this, I've spun up a Windows 2008R2 server on the same ESXi implementation.

Same issue with very large files (transfer speed drops after a time).. So I'm going to test with a different NIC.


edit.. Have tested this evening and tried disabling TOE in ESXi - and it made a difference (or so I thought): speeds only dropped slightly from 112MB/s to about 80MB/s and then stayed there. I tried changing it back (enabling TOE) to see if it went back to the way it was before, BUT it was the same again (about 80). Very odd, but I guess I'm happy with the performance (so far).

I did upgrade to the latest build of ESXi yesterday - 6.0.0 Update 2 (Build 4510822) - but at the time it didn't make a difference..
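If anyone wants to script the offload toggling instead of clicking through the UI, the advanced settings can be changed with esxcli. The option names below are what I believe ESXi 6 uses for hardware TSO and LRO; verify them first with the list command, since I'm quoting them from memory:

```shell
# Inspect the current hardware TSO and LRO settings:
esxcli system settings advanced list -o /Net/UseHwTSO
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled

# Disable hardware TSO (set -i 1 to re-enable):
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
```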




Interesting, I'll play with the NIC settings. I get the same speeds with larger files. It's always the same speed, it doesn't start fast and then degrade.


I've ordered a second-hand HP Smart Array P410 and a couple of SFF-8087 to SATA cables from eBay; I'm hoping that's going to help.


  • 2 weeks later...

I have installed ESXi 6 with USB boot.

I am thinking of using controller passthrough with one of the HBA cards below.


LSI LOGIC SAS 9207-8i Storage Controller LSI00301

Supermicro AOC-SAS2LP-MV8


Which one should I get, or do you have another recommendation?

Are these cards supported in XPEnology 5.2 or 6?

Can I use the onboard controller for passthrough? (ESXi is not able to show the onboard controller in the passthrough options.)


I also tried installing bare metal, but phpVirtualBox was not reliable enough to use, so I want to switch to the ESXi route.

On bare metal I can do link aggregation and achieve transfer speeds of ~100MB/s.


Hope to hear some advice before I buy.

