XPEnology Community

ESXi vs bare metal


bateau


Good morning, everyone.

 

I'm in the process of upgrading my XPenology hardware and I wanted to understand the pros/cons of using ESXi vs bare metal.  Right now I have a bare-metal installation and I use VM Manager to run several VMs.  The main pro of moving to ESXi would be decoupling those VMs from XPenology and having them available independent of NAS state.  The con I've found so far is the different boot process and the synoboot fix-up that's needed for ESXi.

 

Are there any other considerations I should be aware of?  The hardware in the new box isn't significantly different from the old one.  I'm just moving to an 8-bay chassis with an LSI HBA rather than the HP tower I was using before.  Still running an E3-1245 v3 CPU and 16GB RAM (which can be expanded; that's just what I have).


I don't know that those are "cons."  The synoboot problem happens on bare-metal installs as well, just not as frequently.  I'm not sure what you mean by "different boot process," as it's the same loader... the ESXi boot option just uses a different strategy to address/hide the boot device.
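
For context, a quick way to see whether the boot device came up properly on a Jun 1.0x loader is from an SSH session into DSM; if the synoboot devices are missing, that's the symptom the fix addresses. A minimal check (device names assume a standard Jun loader install):

    # a healthy install exposes the loader as synoboot devices
    ls -l /dev/synoboot /dev/synoboot1 /dev/synoboot2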

 

Most folks select ESXi to help with hardware compatibility issues.  You can also create test VMs and run trial upgrades without risking your production system.  You have already determined that the VM flexibility is better (and I'd also suggest that the VM environment under ESXi is more featured/robust).

 

The real cons, in my opinion, are that wrapping everything in a hypervisor takes some system resources (though if you want to run VMs inside of DSM anyway, it's a wash), and that the recent DSM hardware features (transcoding and NVMe cache) are not really viable in a virtual machine.


Thank you @flyride.  I was reading your ESXi fix thread.  I don't plan to use NVMe cache or transcoding via DSM (Plex Server is on an NVIDIA Shield at the moment).  Could you recommend a guide for migrating from bare metal to a hypervisor?  I've been scouring the forum and it seems like that's possible to do with a drive migration.

 


If you are not changing the DSM version or platform, no migration is required.  What are you running now?

 

In short, use a vdisk image of the same loader you are running now, pass through your disk controller, and it will boot right up.  With that strategy you will need some other connected storage for VMware scratch, VM configurations, and vdisks for the non-DSM VMs (if you don't want to NFS from DSM).  That's a good role for an M.2 disk if your motherboard has the capability.
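
If it helps, the loader-to-vdisk conversion looks roughly like the sketch below. This is a sketch only; it assumes the loader image is named synoboot.img and that the datastore and VM folder are called datastore1 and xpenology (placeholder names), with qemu-img available on a workstation and the second command run in an ESXi SSH session:

    # on a workstation: convert the raw loader image to a VMDK
    qemu-img convert -f raw -O vmdk synoboot.img synoboot.vmdk

    # after copying it to the datastore: clone to an ESXi-native thin vdisk
    vmkfstools -i /vmfs/volumes/datastore1/xpenology/synoboot.vmdk \
        /vmfs/volumes/datastore1/xpenology/synoboot-esxi.vmdk -d thin

The HBA passthrough itself is toggled in the ESXi host client (under Manage > Hardware > PCI Devices) and the device is then added to the DSM VM as a PCI device.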

 

That said, have a backup in case it goes wrong.


@flyride, at the moment I'm running an HP Z230 with an E3-1245 v3, the 1.04b loader with no custom lzma, and the latest DSM version.

The new (to me) hardware is a SuperMicro X10SLM-F, an E3-1265L v3, and an LSI 9211-8i HBA in a U-NAS 810A chassis, so I can have 8 drives and the option of SAS.


EDIT: Ack, I need to read.  You can pass through your LSI and use your onboard SATA ports for scratch disks.

 

(take or leave the advice below; you have options)

 

So your alternative is to individually RDM each drive and import the ones for DSM, while leaving one of the motherboard-connected drives for scratch.  I would prefer the simplicity of just passing through your C224 controller to your DSM VM.
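
For reference, if you did go the RDM route, each drive gets a mapping file created with vmkfstools from an ESXi SSH session; a minimal sketch, assuming a datastore named datastore1 and a VM folder named xpenology (both placeholders), with the actual device IDs taken from your own host:

    # list the physical disks ESXi can see
    ls /vmfs/devices/disks/

    # create a physical-compatibility RDM pointer for one drive (repeat per disk)
    vmkfstools -z /vmfs/devices/disks/<your_disk_id> \
        /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk

Each resulting .vmdk then gets attached to the DSM VM as an existing disk.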

 

So for ESXi scratch, consider an NVMe drive connected via a PCIe slot adapter, like this: https://www.amazon.com/QNINE-Adapter-Express-Controller-Expansion/dp/B075MDH28Y


Would you mind educating me on the concept of scratch disks for ESXi?  I have a ton of reading to do on ESXi, but as I understand things so far, I would use an SSD attached to the C224 SATA controller for VM storage (my Linux VMs, synoboot, etc.).  I would pass through the LSI to XPenology and let DSM own the 8 drives attached to it.  The chassis has room for one 2.5" drive, which ought to be plenty for VM partitions.


ESXi uses read-only storage for its OS and runs from a RAM disk; in our configurations that storage is typically a USB pen drive (functionally replacing the USB loader for XPEnology) or a DOM.

 

Then ESXi also needs read/write storage to function internally; that is called "scratch."  That is where ESXi stores swap files, logs, VM configuration and state information, and virtual disk files.  So you need to plan some sort of physical storage for that, per my prior post.  An SSD attached to the C224, per your last post, will work fine.
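
If you ever want to point scratch at a specific datastore by hand (ESXi will normally pick a persistent location on its own once a local datastore exists), it's just an advanced option plus a reboot; a sketch, assuming the SSD-backed datastore ends up named datastore1:

    # create a directory for scratch on the SSD-backed datastore
    mkdir /vmfs/volumes/datastore1/.locker-scratch

    # point ESXi at it; takes effect after a reboot
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-scratch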

 

 


3 weeks later...

One question for the ESXi experts following this thread: I finally opted for an ESXi DSM installation to work around XPEnology installation problems in bare-metal mode (I have an Intel DH77DF motherboard with an LSI MegaRAID 9260-8i card), but I am facing an issue:

I have only one VM hosted under ESXi and I can't allocate it the 16 GB of RAM available; I am limited to around 12 GB. Posts on the net didn't help me much, because they mostly deal with general resource-reservation questions and vSphere, which isn't useful for this case.

Is there any hint or resource on how to properly grant all hardware resources to the VM running DSM?

