XPEnology Community

Best setup -> ESXi -> XPENology (DSM5.2)


ibexcentral

Recommended Posts

Hi,

 

I am about to build a VMware ESXi 5.5 host (for VMs) / NAS running XPEnology (DSM 5.2) and need some advice on the best / standard setup for performance and safe storage. I want the DSM volumes to be separate from the ESXi datastore so that I can map a network drive from a laptop on the same network and store files on DSM. I was thinking that I would use 2 x 4TB WD Red HDDs for the DSM volume and have a backup process to sync to the remaining 1 x 4TB WD Red HDD.

 

Hardware:

Asrock AM1H-ITX mini-itx board

AMD 5350 CPU

16GB RAM

3 x 4TB WD Red HDD

1 x SSD? (see below)

 

What is the best setup to achieve what I need?

 

Is it possible to have an SSD with two partitions, the first partition for ESXi only and the second as a datastore containing the VM that runs XPEnology / DSM? The other HDDs would be used as DSM volumes.

 

Do ESXi and XPEnology / DSM operate efficiently this way, or is there a better configuration, i.e. boot ESXi from USB and use a small SSD as the initial datastore to contain the VM running XPEnology / DSM?

 

Any advice would be appreciated.


I think the missing piece is what else you are going to use your ESXi server for. The other VMs you intend to create may affect how you configure your server.

ESXi will quite happily boot from a USB drive and that will save you trying to partition your SSD.

I am assuming that you will connect your HDDs via RDM, so as far as DSM is concerned the datastore doesn't need to be on anything particularly fast: the virtual machine folder will only contain the configuration and the XPEnology boot virtual disk (or .iso file). This is only read at boot time; the DSM system itself will be on your HDDs.
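If you do go with RDM, the mapping files are typically created from the ESXi shell with vmkfstools. A rough sketch, with the disk identifier, datastore and folder names as placeholders to adjust for your own setup:

# list the physical disks to find the identifier of each WD Red
ls /vmfs/devices/disks/
# create a physical-compatibility RDM pointer file on the datastore
# (replace the t10.* identifier with the one listed for your disk; -r instead of -z gives virtual compatibility)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD40EFRX_EXAMPLE /vmfs/volumes/datastore1/DSM/wd-red-1-rdm.vmdk

Repeat per disk and attach each resulting .vmdk to the DSM VM as an existing hard disk.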

So I have:

USB with ESXi

1 small HDD with a datastore containing the DSM VM and others

3 x HDD for DSM.


When you install ESXi to any device it will create a handful of small partitions for itself and (except on USB devices) another partition using the rest of the disk, formatted as a datastore.

 

If you are going to dedicate an HDD to the datastore, the only practical benefits of installing ESXi on a USB drive are saving the few GB it needs for the install and a slightly faster boot compared to a normal HDD.

 

My advice in your case is to install ESXi and its datastore on that dedicated HDD, as HDDs are generally more reliable than USB drives.

 

As for the type of disk you need, SSD or not will not make a real difference for ESXi and XPEnoboot, as both are small and fully loaded into memory, so, performance-wise, we are talking about an initial delay of a few seconds at startup. I tested this once and I THINK (it was a while ago) the difference was less than 2 seconds.

 

However, the other virtual machines that you intend to run will greatly benefit from an SSD, as their virtual disks will normally be allocated in the datastore.

 

Lastly, any reason for ESXi 5.5 when v6 is freely available?


1. Why go with ESX 5.5 when 6.0 is out and stable?

2. You have two options with regard to how you present the HDDs to ESX:

  • RDM - Raw Device Mapping (or passthrough, skipping ESX, and going directly to the VM)
  • VMFS based - where you present the disks to ESX, format them as VMFS, and then create a THICK VMDK disk on each datastore (basically one per HDD). ATTENTION: THICK and not THIN. I suggest THICK LAZY ZEROED (see the vmkfstools sketch below, after point 3).

3. There are advantages and disadvantages for each method of presenting the disks.

  • In the case of RDM, DSM can't monitor the HDDs, as no SMART data is passed through the hypervisor, but you can always take your disks and insert them into a real Synology system, or even another XPEnology box, and they will work.
  • In the case of VMFS-based disks, SMART will work but you can't "transplant" the HDDs to another DSM. In theory it will be slower because the data passes through another filesystem, but I doubt you or anybody else in this forum could even measure the difference :roll:
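For point 2's VMFS option, a thick lazy-zeroed VMDK can be created in the vSphere client or from the ESXi shell with vmkfstools. A minimal sketch, assuming a datastore named datastore_wd1 created on one of the 4TB Reds (the datastore, folder and size are placeholders):

# create a folder for the DSM data disk on the datastore
mkdir /vmfs/volumes/datastore_wd1/DSM
# create a ~3.6TB thick lazy-zeroed vmdk ("zeroedthick" = thick lazy zeroed)
vmkfstools -c 3600G -d zeroedthick /vmfs/volumes/datastore_wd1/DSM/dsm-data1.vmdk

Then attach the vmdk to the DSM VM as an existing disk.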

4. Take my advice and use 4 HDDs in an SHR-1 volume. You will thank me later. If not, when one of the HDDs breaks, you will lose all your data. SHR-1 with 4 disks will protect you against a single disk failure (any one of them). In 3 years of XPEnology use, I have lost 2 HDDs but not a single bit of data, thanks to it.
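For a rough idea of the numbers: four equal 4TB disks in SHR-1 give about (4 - 1) x 4TB = 12TB of usable space with any single disk allowed to fail, while three disks give (3 - 1) x 4TB = 8TB.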

5. You can install ESX on a USB stick. Based on VMware's best practices, a 16GB stick is recommended for ESX. The stick is only used at boot, until the OS (ESX) is loaded into RAM.

6. The SSD is pretty useless if you install ESX on the USB drive, unless you use it as a datastore for some additional VMs that you want to be a bit speedy, like a Windows machine that you connect to via RDP.

7. If you don't use a USB drive, you could install ESX on the SSD, and the free space remaining on the SSD after installing ESX could be used as a datastore for VMs like DSM and Windows, or for the XPEnoboot boot disk.

8. My DSM has been running under Hyper-V for 3 years. Hyper-V, because ESX cannot be installed on a J1900 (or better said, I didn't know how to make it work at the time).

 

Choose wisely, young Padawan!


There's a third option to present HDDs to a VM - you can use DirectPath I/O to pass-through the whole SATA adapter to the VM. The VM will then have complete control over the adapter and any disks attached to it. However, you do need a CPU and motherboard that support DirectPath I/O.

 

I'm currently getting ready to move the 4x 3TB drives from my old server to the T20 in my signature using this method.



 

I was sticking to his config and forgot about passing through a RAID card.

Good that you mentioned it.


Thick disks aren't really faster than thin anymore (they can even be much slower for 4K writes, especially on flash). I'd also recommend using the correct virtual NIC (VMXNET3) and virtual SCSI controller (PVSCSI).

Passthrough will give you the best performance (higher throughput and lower CPU usage) and the best power savings, as the disks can spin down.

 

The best way of doing XPEnology on ESXi 6:

1) Boot ESXi from another controller (SATA/PCIe/USB) if using passthrough. SATA/PCIe is recommended, as you can use that disk as a datastore for other VMs and the XPEnology boot vmdk.

2) Pass through an extra SATA controller or the onboard one (requires a map file edit; nothing else can be on that controller) if using passthrough.

3) Modify the boot image to block IDE and set the NIC MAC.

4) Modify the VM to use VMXNET3 and PVSCSI (when not using passthrough) and set the MAC (see the .vmx sketch below).
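For step 4, the relevant .vmx entries (or the equivalent settings in the vSphere client) look roughly like the sketch below; the MAC is a placeholder and should match the mac1 value you set in the boot image:

ethernet0.virtualDev = "vmxnet3"
ethernet0.addressType = "static"
ethernet0.address = "00:11:32:XX:XX:XX"
scsi0.virtualDev = "pvscsi"

The first line selects the VMXNET3 NIC and the last one the paravirtual SCSI controller (only relevant when you are not passing the controller through).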

 

How to do step 2 for an onboard Intel controller:

1) Log in to the SSH console and run 'esxcli storage core adapter list'

2) In the description, between parentheses, you'll see your PCI ID, something like: (0000:00:11.4) Intel Corporation Wellsburg AHCI Controller

3) Run 'lspci -n | grep 0000:00:11.4'

4) After the last colon you'll see your device ID (the [PID] used below), something like: 0000:00:11.4 Class 0106: 8086:8d62 [vmhba0] (here the device ID is 8d62)

5) Now put '8086 [PID] d3d0 false' at the bottom of /etc/vmware/passthru.map; my full file:

# passthrough attributes for devices
# file format: vendor-id device-id resetMethod fptShareable
# vendor/device id: xxxx (in hex) (ffff can be used for wildchar match)
# reset methods: flr, d3d0, link, bridge, default
# fptShareable: true/default, false

# Intel 82598 10Gig cards can be reset with d3d0
8086  10b6  d3d0     default
8086  10c6  d3d0     default
8086  10c7  d3d0     default
8086  10c8  d3d0     default
8086  10dd  d3d0     default
# Broadcom 57710/57711/57712 10Gig cards are not shareable
14e4  164e  default  false
14e4  164f  default  false
14e4  1650  default  false
14e4  1662  link     false
# Qlogic 8Gb FC card can not be shared
1077  2532  default  false
# LSILogic 1068 based SAS controllers
1000  0056  d3d0     default
1000  0058  d3d0     default
# NVIDIA
10de  ffff  bridge   false
# Intel Wellsburg AHCI
8086  8d62  d3d0     false
8086  8d02  d3d0     false

6) Reboot, put the device in passthrough mode, reboot again, and attach it to the VM.
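A quick sanity check for steps 5 and 6 (using the Wellsburg IDs from the example above; adjust to your own vendor/device ID): confirm the passthru.map entry survived the reboot and that the controller is still visible, then mark it for passthrough in the vSphere client under the host's DirectPath I/O / PCI device configuration.

# entry still present?
grep 8d62 /etc/vmware/passthru.map
# controller still listed at the same PCI address?
lspci -n | grep 8086:8d62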

 

For step 3 see 'Modify the bootimage' here: https://idmedia.no/projects/xpenology/installing-or-upgrading-to-xpenology-5-2-or-later/
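Roughly what that guide has you do (treat the parameter names here as assumptions from memory of XPEnoboot 5.2 images and follow the guide for the exact file and values): mount the boot image, open syslinux.cfg, and extend the append line of each boot entry with your serial and MAC, plus the usual trick to hide the virtual IDE boot disk from DSM:

# example append line fragment; sn and mac1 are placeholders
append root=/dev/md0 syno_hw_version=DS3615xs sn=YOUR_SERIAL mac1=001132XXXXXX rmmod=ata_piix

rmmod=ata_piix is the commonly used way to keep the IDE-attached boot disk from showing up as a drive inside DSM.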

 

In your case I'd use an extra SATA controller (LSI based) as you don't have an Intel controller.

