ESXi and hard drives



Hi everybody,

 

I've got a hopefully quick question that I've been looking for an answer to. I've been using XPEnology from a USB drive on countless desktops for a few years now and I absolutely love it. I've only recently begun looking into XPEnology running on ESXi and I'm curious: how are the hard drives managed for RAID type and failures?

 

Specifically, how does it compare to running XPEnology from a USB and having the OS manage the hard drives? Do you use hardware RAID instead and manage drive failure, replacement, etc at a lower level instead of through the OS?

 

I'm really not sure how different the experience is and specifically curious about how the drives are seen by XPEnology and handled.


Personally, I wanted to get the most "native" feel for my XPenology, so I got myself an LSI SAS 9201-16i host bus adapter and added it to my VM using pass-through.

That way I have SMART working, and it's easy to add/remove drives and even move them to another Synology device if necessary. :smile:

 

All of my VMs are stored across SSD drives and ESXi itself is booting from a small USB drive.
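A pass-through setup like this is easy to sanity-check from inside DSM: with the HBA handed to the guest, the disks show up as real block devices and smartctl can talk to them directly. A minimal sketch, assuming SSH is enabled on DSM and smartctl is available there (the device name is an example and depends on your slot layout):

```shell
# Run from an SSH session on the DSM guest.
# /dev/sda is an example; pick a disk attached to the passed-through HBA.
smartctl -i /dev/sda   # identify info: model, serial, firmware
smartctl -H /dev/sda   # overall SMART health self-assessment
smartctl -A /dev/sda   # attribute table (reallocated sectors, temperature, etc.)
```

If the drives were plain virtual disks instead of passed-through ones, these commands would fail or report VMware's virtual device, which is exactly why SMART doesn't work without pass-through.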


I guess there are several ways of handling storage; it depends on how easy or hard you want the setup and management to be.

I'd say controller pass-through (RDM etc.) as @NeoID runs is the most sophisticated; it's also really popular with ESXi/XPE users, but you have to get compatible controllers.

Another option might be using native hardware RAID (if any) on the mobo; if XPE can see it, you get a 'single drive' volume with no SHR.

Or you could create datastores on individual disks, create virtual disks inside them, present those to XPE/DSM, and build a RAID there.

I run an ESXi box with an SSD for local 'high spec' VMs and an NFS share from a bare metal XPE/DSM box for everyday VMs. I like to use NFS because it's browsable in File Station, which is handy for moving VMs around, making backups, etc.
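For reference, mounting an NFS share from a DSM box as an ESXi datastore can be done from the ESXi shell as well as from the client. A sketch with placeholder values (the host address, share path, and datastore name below are examples, not from this thread):

```shell
# On the ESXi host; replace host, share, and name with your own values.
esxcli storage nfs add -H 192.168.1.50 -s /volume1/vmstore -v xpe-nfs
esxcli storage nfs list   # confirm the new datastore is mounted
```

The share itself has to be created and exported first in DSM's Shared Folder / NFS permissions settings, with the ESXi host's IP allowed.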


Thank you all for your replies. Would it be safe to assume that the main difference and/or benefit to going the ESXi route is that you are able to multi-purpose your computer instead of single-purposing it by solely running XPE?


That was my reason for switching to ESXi (or virtualization in general). Now a single computer is capable of running OS X, Windows, Synology and Ubuntu at the same time. :smile:


If you have an ESXi server with a CPU that supports VMDirectPath I/O (like the Xeon E3 in my T20), you can buy a cheap PCIe SATA controller that is supported by Xpenology (like this http://www.dx.com/p/iocrest-marvell-88s ... een-282997) and pass control of the PCIe card over to the Synology VM. DSM then has direct control of your data disks (they're not even visible to ESXi). This avoids the need to buy an expensive HBA that is supported by ESXi.

 

This also means you can transplant disks from a bare metal Xpenology installation into a virtualized setup - which is what I'm preparing to do right now :smile:
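To see whether a controller is a pass-through candidate, listing the host's PCI devices from the ESXi shell is a quick first step (on ESXi 6.0, toggling pass-through itself is done in the vSphere client, followed by a host reboot):

```shell
# On the ESXi host: list PCI devices and find your SATA/SAS controller,
# noting its PCI address and vendor/device IDs. Then enable pass-through
# for that device in the vSphere client and reboot the host.
esxcli hardware pci list | less
```

Once the host is back up, the device can be added to the VM as a PCI device, and it disappears from ESXi's own storage adapter list.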


FYI - I tested VMDirectPath as follows...

 

1. Did a bare metal install on a spare PC with a couple of 1TB drives. Copied some test data to volume1.

2. Moved 1TB drives to the T20 and connected them to the Marvell SATA controller.

3. Configured PCI-e Pass-through in the vSphere Native Client - connected the Marvell card directly to my test DSM VM.

4. Confirmed that the drives and volume1 were accessible in the test DSM VM - tested both reading and writing to volume1.
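One way to confirm step 4 from the inside, assuming SSH is enabled on the DSM guest: DSM builds its volumes on Linux md arrays, so the usual RAID tools show whether all members came over intact (md2 is the typical data array on DSM, but the number depends on your layout):

```shell
# From a root SSH session on the DSM guest:
cat /proc/mdstat          # all md arrays and their member disks
mdadm --detail /dev/md2   # state of the data array (md2 is typical, not guaranteed)
df -h /volume1            # volume mounted with the expected size
```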

 

I haven't moved my 'production' drives to the T20 yet as I need to sort out the config of all my apps first (SABNZBD+, Plex, SickRage, etc.) - the apps will reside on separate DSM VMs in the new setup (I want to separate file serving, downloading and media streaming functions).


Like others suggested in this thread, I too am running xpenology in ESXi 6.0 using PCIe passthrough of LSI 9211-8i + SAS2 Expander. This gives my xpenology guest direct access to bare-metal drives. It also gives me the flexibility to run xpenology anytime as bare metal by booting off a USB stick instead of using VM.


So, for the VMDirectPath setup you tested, if you don't mind me asking: when you set up ESXi, what did you do for a primary datastore? I've got ESXi 6 installed on a USB flash drive currently, and I don't see how you can proceed with configuring a VM for XPEnology without a datastore. I'm probably overlooking something simple, but ideally I'd like ESXi to use the XPEnology volume for VM storage if possible. Is that even possible?


I'm running ESXi 6 with direct I/O as well, booting ESXi off a thumb drive. ESXi doesn't natively support my onboard SATA controller, but I installed a user mod to enable it. I have an SSD on the motherboard SATA controller as datastore1. XPEnology runs off the SSD, configured with a small boot VM disk and a secondary 32GB VM disk that shows up as volume1, the default share for Syno apps. I followed the ESXi tutorial on here. I have two Dell 310 8-port cards flashed to IT mode that I pass through to XPEnology. I had to go the SATA SSD route since I couldn't use the RAID cards both as a datastore and as a pass-through device.
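The secondary VM disk in a setup like this can be created from the ESXi shell with vmkfstools instead of the client. A sketch; the datastore path and file name below are examples based on this thread's setup, not exact values:

```shell
# On the ESXi host; adjust the datastore path and file name to your setup.
cd /vmfs/volumes/datastore1/xpenology
vmkfstools -c 32G -d thin xpenology_data.vmdk   # create a 32GB thin-provisioned disk
```

The new vmdk then gets attached to the XPEnology VM as an existing hard disk, and DSM formats it as volume1 during setup.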



Hi berwhale,

I actually have the same setup as you and I'm trying to set up the pass-through PCI to Xpenology, but it's giving me some weird issues. Did you create the Xpenology VM with the pass-through PCI device attached, or did you add it after you installed Xpenology?

 

 



Hi andyl8u, I set up Xpenology as follows:

 

1. Created a Xpenology VM with a temporary virtual drive on one of my data stores.

2. Added the SATA adapter to the server and then to the VM via pass-through.

3. Removed the virtual drive.

4. Relocated the 3TB HDDs from my physical Xpenology server, connected them to the pass-through SATA adapter.

 

The Xpenology VM picked up the personality of the old physical server (i.e. all data, permissions, apps, etc. were functioning as before).
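Step 3 (removing the temporary virtual drive) can be done in the client, or from the ESXi shell once the disk has been detached from the VM. The path below is an example, not the actual file from this setup:

```shell
# On the ESXi host, after detaching the disk from the VM in the client:
vmkfstools -U /vmfs/volumes/datastore1/xpenology/temp.vmdk   # deletes the vmdk descriptor and data files
```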
