
vSphere ESXI or bare metal?


Benoire


Hi

 

I've got a 16-bay SuperMicro chassis with a dual-Xeon motherboard (a single Westmere L5630 installed at present), currently running a bare-metal install of XPEnology. All is working fine, except that I can't get drives to be sorted according to their slot number, but that is a well-known LSI HBA issue.

 

The purpose of this post is to ask what the benefits of virtualising DSM are. I presently run 3 virtual machine hosts, 1 storage host (DSM) and a Server Essentials 2012 R2 box for TV duties. Now, I could virtualise this DSM machine easily, add it to my host pool and give it more memory, or I could combine a VM host with this DSM box and remove the need for an additional machine. Which way I go will hopefully be decided by the answers here!

 

So questions:

 

1) Is there any performance loss when utilising ESXi for DSM?

2) Anyone had any trouble with DSM not starting as expected when virtualised? Any issues that may crop up?

3) RDM vs passthrough? I've traditionally used passthrough, but FlexRAID, which I looked at a while back, suggested RDM was the most flexible approach. What do people prefer, and why?

4) If I use the array for VM storage, I assume that ESXi will reconnect to the array quickly once it has booted?

 

Now some more advanced questions; my setup also contains vCenter and vSAN, so I cluster the hosts.

 

5) Anybody run clusters with DRS/HA? Any issues there?

6) My primary VM storage array will be the vSAN once I get enough SSDs; I presume no issues there either?

 

My main question, I guess, is number 3: RDM vs passthrough. ESXi should pick up the drives in the correct bays as I use SGPIO sideband connections from the backplane, and RDM would let me assign the drives to DSM in the right order, but I've read that SMART data is often not passed through to the VM and would like to know more about that.
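For reference, this is roughly how I understand a physical-compatibility RDM pointer gets created from the ESXi shell; the device ID and datastore name below are just placeholders, not my actual hardware:

# list the disks ESXi can see and note their naa./vml. identifiers
ls /vmfs/devices/disks/
# create a physical-compatibility (-z) RDM pointer file on a local datastore
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/dsm-disk1-rdm.vmdk

The pointer .vmdk then gets attached to the DSM VM like any other disk, which is how you would control the slot order.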

 

Any thoughts, comments or suggestions on the above is very much appreciated.

 

Cheers,

 

Chris


I'm running pass-through for the following reasons:

 

1. I needed to add SATA ports to my T20; using pass-through allowed me to buy a cheap (£22) SATA adapter rather than an expensive HBA with ESXi support.

 

2. I wanted to migrate an existing 4-drive array from a bare-metal install to ESXi. Connecting the drives to a dedicated adapter and passing that through just worked - DSM isn't even 'aware' that it's been virtualized - everything works exactly as it did before moving the disks.
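If it helps, before enabling passthrough I just checked from the ESXi shell that the host could see the new adapter (the output and device naming will differ on your box):

# list PCI devices and find the SATA controller to pass through
esxcli hardware pci list | grep -i sata
# then toggle passthrough for that device in the web UI (Manage > Hardware > PCI Devices on recent ESXi builds) and reboot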

 

As for performance, I can't really comment. The Xeon in the T20 is an order of magnitude faster than the Celeron SoC in my old bare-metal server; DSM and Plex just fly on the new box.


1) Is there any performance loss when utilizing ESXi for DSM?

 

I've run enough benchmarks and tests to be confident that there's virtually no performance loss. In fact, there's a "net" overall performance gain in I/O, because all the guest VMs connected to the same virtual switch can talk to DSM at 10Gbit speed.

ESXi also gives me the ability to add drives to DSM as a temporary volume (utilizing SSD space in my datastore) for cache/scratch when needed.

Flexibility and remote management capability are the keys here.
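You can see the 10Gbit link from inside the DSM guest over SSH; this assumes the VM has a vmxnet3 NIC (an e1000e NIC will report 1000Mb/s instead):

# run inside the DSM VM
ethtool eth0 | grep Speed
# Speed: 10000Mb/s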

 

2) Anyone had any trouble with DSM not starting as expected when virtualized? Any issues that may crop up?

 

No issues. My ESXi datastore storage is all SSD, so DSM loads very fast from boot-up.

 

3) RDM vs pass-through? I've traditionally used pass-through, but FlexRAID, which I looked at a while back, suggested RDM was the most flexible approach. What do people prefer, and why?

 

Pass-through - PCI pass-through of the HBA controller. It's much more reliable that way. If something goes wrong with the ESXi host, I can pull the drives/controller, plug them into another ESXi host and have everything up and running in no time. In fact, I have spare HBA controllers in my other ESXi hosts, so there's no need to pull the controller at all. The drive order doesn't matter, so you don't have to keep track of which drive connects to which SATA/SAS port. SMART data is also available with PCI pass-through.
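You can check that yourself from a DSM SSH session once the HBA is passed through; smartctl ships with DSM, and the device name below is just an example:

# SMART comes straight from the disk when the controller is passed through
smartctl -a /dev/sda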

 

4) If I use the array for VM storage, I assume that ESXi will reconnect to the array quickly once it has booted?

 

Shouldn't be an issue. I don't store my VMs on DSM; I back up my VMs onto DSM via an Acronis vAppliance running on the ESXi host. I should add that my important VMs are stored on my Intel S3700 SSDs, which I trust 100%.
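If you do decide to put VM storage on DSM, the usual route is an NFS share exported from DSM and mounted as an ESXi datastore; something like this from the ESXi shell (host name and share path are examples):

# mount a DSM NFS export as a datastore; ESXi re-mounts it automatically at boot
esxcli storage nfs add --host=dsm.local --share=/volume1/vmstore --volume-name=dsm-nfs

Just remember the DSM VM itself has to live on local storage and be up before anything stored on that datastore can start.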

 

Now some more advanced questions; my setup also contains vCenter and vSAN, so I cluster the hosts.

 

5) Anybody run clusters with DRS/HA? Any issues there?

 

I don't use HA. I figure an ESXi host with server-grade hardware should be more reliable than my bare-metal setup was. I didn't use a redundant setup for bare metal, so there's no need to start now that DSM is a guest VM. I don't have any need for DRS.

 

6) My primary VM storage array will be the vSAN once I get enough SSDs; I presume no issues there either?

 

Shouldn't be.

 

Another advantage of ESXi is power consumption. It's not just about saving $$ on the electric bill, it's also about how much UPS capacity you need to keep the servers up and running. I have a bunch of must-be-on VMs anyway, so moving DSM onto the same host saves me around 60 watts or so. I've had 4-5 hour power outages before, so my important VMs and DSM are on a single host that can last 9+ hours on UPS, just in case.
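For sizing the UPS, the back-of-the-envelope maths is just usable battery energy divided by load; the numbers here are made up for illustration, not my measurements:

# rough runtime estimate in hours = usable Wh / load W (hypothetical figures)
echo "scale=1; 900 / 100" | bc
# 9.0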

