
Migrating Baremetal to ESXi - Passthrough HDDs or Controller?



Question

I'm contemplating migrating my baremetal install on an HP Gen8 MicroServer to ESXi (ESXi because I use it at work and am more familiar with it than Proxmox).

 

It seems pretty simple: just replace the Xpenology USB boot stick I'm currently using with an ESXi boot stick, create a VM for DSM with a virtual boot image, pass through the existing disks and boot it up. DSM will do the "I've detected disks from another server, do you want to migrate?" thing, and I'm done, right?
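For concreteness, I'm picturing something like this in the VM's .vmx (purely illustrative values, not a tested config; the boot vmdk name is whatever the loader image ends up being called):

    guestOS = "other3xlinux-64"
    ethernet0.virtualDev = "e1000e"
    sata0.present = "TRUE"
    sata0:0.present = "TRUE"
    sata0:0.fileName = "synoboot.vmdk"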

 

My main question before I do this is: given that I'm running the SATA controller on the Gen8 in AHCI mode (i.e. no "hardware" RAID), should I pass through the controller to the VM, or the individual disks in Raw Disk Mode? Is there any performance benefit to either?

 

The disks (4x 3TB) are full of DSM data, obviously, so I won't be able to use that set of disks for any other ESXi guests, but I'm considering getting an HBA at some point to add some extra storage.

Solution

I'm going to mark this thread/question as answered, but as it's a combination of all the different responses I can't mark any one of you as having answered the question, sorry!

37 minutes ago, WiteWulf said:

My main question before I do this is: given that I'm running the SATA controller on the Gen8 in AHCI mode (i.e. no "hardware" RAID), should I pass through the controller to the VM, or the individual disks in Raw Disk Mode? Is there any performance benefit to either? [...]

To be honest, I had really poor performance with the onboard AHCI controller. And no SMART support in DSM.

This is why I chose an LSI HBA (IT mode) card. It works great with Jun's loader, but you know the status with redpill.

Let me find the tests I posted on the Xpenology forum, in case they help you.

I can share my current ESXi config with you.

Be warned: you can't use ESXi newer than 6.7 with an mpt2sas card, as VMware dropped the driver. The only option on ESXi 7.0+ is to pass the card through to the VM (which is what I currently do).
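If it helps, on ESXi 7.0+ you can flip passthrough from the shell as well as the UI (a sketch; the PCI address below is a placeholder, yours will differ):

    # Find the LSI card's address in the PCI list (look for LSI / SAS2008)
    esxcli hardware pci list
    # Enable passthrough for that device, then reboot the host
    esxcli hardware pci pcipassthru set --device-id=0000:05:00.0 --enable=true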


And also: you won't be able to pass through the whole internal controller, because you must have a datastore (HDD/SSD) where the VMs are stored. If you pass through the controller, where will the datastore live? A USB disk is not an option unless you use a "hack" to put a datastore on USB.
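For reference, the "hack" people describe is roughly the following; I've never run it myself, so treat it as a sketch (the device path is a placeholder):

    # Stop ESXi's USB arbitrator so the host itself can claim the USB disk
    /etc/init.d/usbarbitrator stop
    chkconfig usbarbitrator off
    # Partition the stick with partedUtil, then format the partition as VMFS
    vmkfstools -C vmfs6 -S usb-datastore /vmfs/devices/disks/mpx.vmhba34:C0:T0:L0:1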

I have:

- USB boot for the ESXi OS (loaded into RAM)

- an SSD plugged into the ODD SATA port on the motherboard

- the SAS connector removed from the motherboard and plugged into the LSI card

- 4x 4TB disks installed (on the LSI card, of course)

1 hour ago, Orphée said:

Here were my tests:

If you read from there, you will find why I switched to the LSI card.

 

I didn't have any issues in testing with ESXi 6.7, but I did roll back the scsi-hpvsa driver, having read elsewhere about ESXi in general on the G8.

 

Have a read here https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/ 

and here https://communities.vmware.com/t5/ESXi-Discussions/Very-slow-acces-to-datastores-on-HP-MIcroserver-Gen8-Can-t-edit/td-p/2276368
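The gist of the fix from those links, in case they go dead (the vib filename here is illustrative, from memory; check the first article for the exact version):

    # Downgrade to the older scsi-hpvsa driver, then reboot
    esxcli software vib install -v /tmp/scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib --no-sig-check
    reboot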

 

I've since installed an LSI card, for greater flexibility (and faster SATA ports); only the first two SATA ports on the MicroServer Gen8 are 6Gb/s SATA III, the others are 3Gb/s, for what it's worth.

1 hour ago, Orphée said:

To be honest, I had really poor performance with the onboard AHCI controller. And no SMART support in DSM.

 

This is incorrect for AHCI controller passthrough. SMART does not work with RDM on Jun's loader (also, trying to TRIM will crash RDM'd SSDs), but it's perfectly fine with drives attached to passed-through controllers. I pass through my onboard SATA controller and it behaves exactly as it did on baremetal.
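You can check this yourself from an SSH shell in DSM (the device name is an example; naming varies between DSM versions and platforms):

    # With the controller passed through, SMART queries reach the physical
    # disk exactly as on baremetal
    smartctl -a /dev/sda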

 

40 minutes ago, Orphée said:

And also: you won't be able to pass through the whole internal controller, because you must have a datastore (HDD/SSD) where the VMs are stored. If you pass through the controller, where will the datastore live?

 

Some folks have more than one SATA controller, or use an NVMe disk as a datastore and scratch volume.

 

The reasons to use RDM are to split drives between the ESXi datastore and DSM on one controller (Orphée's case), or to give DSM access to disks that cannot otherwise be used at all (no controller support, NVMe, etc.).
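For anyone following along, the RDM mapping itself is one vmkfstools command per disk (a sketch; the device and datastore paths are placeholders, and note the RDM pointer file must live on a VMFS datastore, which is why you still need one):

    # Find the disk identifiers
    ls /vmfs/devices/disks/
    # Create a physical-mode RDM pointer for one disk
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
      /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk

Then attach each pointer vmdk to the DSM VM as an existing disk.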

30 minutes ago, scoobdriver said:

 

I didn't have any issues in testing with ESXi 6.7, but I did roll back the scsi-hpvsa driver [...]

The driver rollback did not improve the high latency issues for me.

13 hours ago, Orphée said:

And also: you won't be able to pass through the whole internal controller, because you must have a datastore (HDD/SSD) where the VMs are stored. If you pass through the controller, where will the datastore live? A USB disk is not an option unless you use a "hack" to put a datastore on USB.

As I mentioned: all four HDDs are currently full of DSM data. I can't resize them, so I need to pass those through to the VM (either as raw disks or via the controller). My plan is to use an SSD on the ODD connector inside the Gen8 for the datastore. There's one in there already, configured as a read cache for DSM, but I'm not convinced it's making a lot of difference.

 

I've read a few places online explaining how to keep the datastore on the same USB stick you boot from, or on the SD card slot on the motherboard, but I'm loath to run from flash storage.

16 minutes ago, WiteWulf said:

As I mentioned: all four HDDs are currently full of DSM data. I can't resize them, so I need to pass those through to the VM (either as raw disks or via the controller). [...]

If you have only one controller (the internal SATA AHCI), your only choice will be to attach your four disks with the RDM feature (there is a tutorial for it).

I just meant you won't be able to pass through the controller, because if you do, the SSD on the ODD port will be passed through as well and won't be visible to the ESXi host.

PCI controller passthrough is all or nothing.

Maybe I'm wrong, and in that case I've misunderstood how PCI passthrough works.

Using a datastore on USB is a "hack" not supported by VMware, and I've never tried it, so I can't help on that subject.

 

Edit: here it is for me with the LSI card passed through:

[Screenshot: ESXi storage device list with the LSI card passed through]

 

My 4 data disks are not visible to ESXi.

 

3 minutes ago, Orphée said:

I just meant you won't be able to pass through the controller, because if you do, the SSD on the ODD port will be passed through as well and won't be visible to the ESXi host.

Ah, I hadn't thought of that! I'd assumed it was on a different controller, actually.

 

Raw disk is definitely the only option, then, unless I get a PCIe HBA and get redpill/DSM7 working with the internal NIC (which plenty of people seem to be doing now).

Just now, WiteWulf said:

Raw disk is definitely the only option, then, unless I get a PCIe HBA and get redpill/DSM7 working with the internal NIC (which plenty of people seem to be doing now). [...]

Yes.

And from my personal experience, I had "high latency" issues with disks attached as RDM on the internal controller.

This is why I bought an LSI HBA (IT mode) card.

2 minutes ago, WiteWulf said:

Interesting idea, do those PCIe NVMe adapters work on Xpenology, then? This sort of thing, for example:
https://www.amazon.co.uk/SupaGeek-PCIe-Express-Adapter-Card/dp/B07CBJ6RH7/ref=pd_lpo_3?pd_rd_i=B07CBJ6RH7&psc=1

It should. Actually, you would use it not for Xpenology but for ESXi, as a datastore.

As long as ESXi has a driver for the PCIe device (and mind the PCIe slot compatibility of the Gen8), you should be able to use it as a datastore, and then pass through the internal SATA AHCI controller to the Xpenology VM.
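A quick way to confirm ESXi actually sees the drive behind the adapter before you commit to the plan:

    # List the storage devices ESXi has claimed; the NVMe drive should appear
    esxcli storage core device list | grep -i nvme

If it shows up, create a datastore on it from the host client UI, then enable passthrough on the onboard AHCI controller.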

9 minutes ago, WiteWulf said:

Raw disk is definitely the only option, then, unless I get a PCIe HBA and get redpill/DSM7 working with the internal NIC (which plenty of people seem to be doing now).

With ESXi it will always work with the internal NIC: you configure the VM with an E1000e NIC (handled natively by DSM), or as VMXNET 3 if you add the vmxnet3 driver to the loader.
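Concretely, that's a single line in the VM's .vmx (standard VMware key; swap the value for "vmxnet3" once the driver is in your loader):

    ethernet0.virtualDev = "e1000e"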

5 hours ago, WiteWulf said:

Interesting idea, do those PCIe NVMe adapters work on Xpenology, then? [...]

 

NVMe = PCIe. They are different form factors for the same interface type, so they follow the same rules for ESXi and DSM as an onboard NVMe slot.

 

I use two of them in my NAS to drive enterprise NVMe disks, then RDM them into my VM for an extremely high-performance volume.

I also use an NVMe slot on the motherboard for the ESXi datastore and scratch.
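If it's useful to anyone, relocating scratch is one advanced setting (the datastore path is a placeholder; it takes effect after a host reboot):

    # Point the ESXi scratch location at a directory on the NVMe datastore
    esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation \
      -s /vmfs/volumes/nvme-datastore/.locker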

