
Gen8 + ESXi (5.x/6.x) + RDM + RAID?


Rikk


Hello,

 

I have my Gen8 ProLiant server (Celeron G1610, 6 GB RAM), which works fine with the new loader in a bare-metal config. I would like to switch to a VM under ESXi, taking into account several constraints/specifications:

 

- DSM disks (4 disks: 2 in RAID mode, 1 data disk, 1 SSD connected to the internal CD-ROM SATA port, not used until now) --> reuse the disks without migration (RDM mode needed) + RAID controller enabled in the BIOS (as it is today with the bare-metal DSM 6.0.2)

- ESXi installation --> could be installed on a USB key or the internal SD card

- SSD --> used for VM storage

 

What I did:

Tried several times to install the ESXi (6.0 / 6.5) HP custom image (to SD and to a USB key): OK, no issue, ESXi installed properly.

 

Creation of the datastore:

- OK: with AHCI mode enabled in the BIOS (disks visible in the administration interface), but this is not compatible with my current settings (RAID controller enabled) --> issue: the data is not accessible by the (virtual) DSM

- NOK: with RAID-controller mode enabled in the BIOS (disks not visible in the ESXi administration interface), which does match my current settings --> issue: no disks visible at all (see the quick check below)
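For reference, you can confirm from the ESXi shell which adapters and disks the host actually detects; a minimal check, assuming SSH is enabled on the host:

    # list the storage adapters ESXi has claimed (the B120i should appear here if a driver claims it)
    esxcli storage core adapter list
    # list every disk device available for a datastore or an RDM mapping
    esxcli storage core device list

If the controller's volumes appear in neither list while in RAID mode, ESXi simply has no usable path to the disks, which would match the NOK case above.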

 

Has anybody tried to set up the same configuration, with the RAID controller enabled in the BIOS and the disks visible? Any advice?

 

Thanks a lot.

 

Rikk


Hmm, not tested by myself, but I have it running on the same box, passing the disks to XPEnology via RDM, so I can't help troubleshoot that exact setup.
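In case it helps, mapping a raw disk as a physical-mode RDM from the ESXi shell looks roughly like this (the device and datastore names below are examples only, adjust to yours):

    # find the identifier of the physical disk to map
    ls -l /vmfs/devices/disks/
    # create a physical-mode RDM pointer file on an existing datastore
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk

The resulting .vmdk is then attached to the XPEnology VM as an existing disk.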

 

But... any reason you use the internal RAID?

XPEnology (and SHR) is much more flexible.

Sure, you have iLO, so if the RAID fails you can still connect remotely and troubleshoot.

But I would like to understand why you would use the low-performing internal controller instead of DSM's own RAID?

 

 


Link to comment
Share on other sites

Thanks Dutchman for your comments.

 

To clarify:

Any reason you use the internal RAID? --> lack of knowledge on my side: I was convinced that DSM used the RAID controller to create/manage the RAID.

 

Sure, you have iLO, so if the RAID fails you can still connect remotely and troubleshoot. --> I didn't check the status through iLO; I just launched the VM with an RDM created for the hard drive, and the disk was not seen by DSM.

 

But I would like to understand why you would use the low-performing internal controller instead of DSM? --> a stupid idea, in the end... :grin:

 

I will restart my trials.


I'm a newbie, and today I spent all day trying this setup:

ESXi 6.5 (SSD datastore) + RDM (2 TB Hitachi) + DSM 6.0.2-8451 Update 7 (Jun's loader)

 

I have issues with RDM: when I create a DSM volume on the RDM disk, I always get "Volume crashed" during the "optimization" process. I checked the disk for errors/bad sectors/SMART - all OK.

In the logs there are messages like "error write to sector #num".
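For anyone wanting to dig into the same symptom, the exact errors can be pulled from the DSM shell over SSH, and the disk itself can be re-checked outside ESXi; a rough sketch, assuming the affected disk is /dev/sda (check yours first):

    # inside the DSM VM: show the kernel messages around the failed writes
    dmesg | grep -i sector
    # on bare metal or from a live CD: full SMART report and a read-only surface scan
    smartctl -a /dev/sda
    badblocks -sv /dev/sda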

 

Then I tried installing DSM on bare metal and creating a volume with this disk - it works perfectly. I tried filling almost all of the space with data - all OK, no crashes.


Perhaps it is because you have the "raid" flags on the partitions.

I had the same issue when I switched from bare metal to a VM. I switched to AHCI mode (not RAID) and removed all partitions/flags with the parted tool (from a live CD). After that, I declared the disks as "new" in the DSM VM.
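The cleanup from the live CD was along these lines (assuming the old DSM disk shows up as /dev/sdb; double-check the device name before touching anything):

    # inspect the existing partitions and their flags
    parted /dev/sdb print
    # either clear the raid flag on a partition...
    parted /dev/sdb set 1 raid off
    # ...or wipe the layout entirely with a fresh, empty partition table
    parted /dev/sdb mklabel gpt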

 

I am a newbie also, so...

Regards.


2 weeks later...

 

Hello mrinner, did you try using virtual disk files instead of RDM? I found that the volume keeps crashing at the optimization stage with both a virtual disk and RDM.
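For completeness, the plain virtual disk can be created either in the web UI or from the ESXi shell; a sketch with example sizes and paths:

    # create a thin-provisioned virtual disk on the SSD datastore
    vmkfstools -c 1800G -d thin /vmfs/volumes/datastore1/xpenology/data1.vmdk

Either way, the volume crashed during optimization for me.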

 

I am a bit lost now...


I have not tried RDM or the internal RAID, so I can't provide a solution to your problem. However, the way I set it up works flawlessly, so you may take it as a reference.

 

I got the internal B120i controller in AHCI mode, then passed it through to my DSM virtual machine. All HDDs connected to it are seen as connected directly to my "Synology box", without the vmdk or RDM layer (RDM is kind of a virtual disk layer to me; I know very little about virtualisation, so I could be wrong, but I think RDM is a software mapping of the drive). My setup means the HDDs work like in a normal Synology NAS, and are therefore less likely to face errors or data loss caused by ESXi.
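If someone wants to reproduce this: the controller first has to be identified on the host and then marked for passthrough (in the ESXi host UI under Manage > Hardware > PCI Devices, followed by a reboot); a minimal sketch from the ESXi shell:

    # dump the PCI devices and look for the SATA controller entry running the disks
    esxcli hardware pci list

Note that a VM with a passed-through device needs its memory fully reserved, and the CPU must support VT-d, which the stock Celeron G1610 does not.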


Did you add a second controller where your VM guest files are stored? As passing the controller through makes it invisible to ESXi, I'm just wondering where you put the VM files (datastore).

 

 


 

I do have another RAID card added for my SSD and additional HDDs, for use by the other virtual machines.


If I understood correctly, your setup has a Celeron G1610 with 6 GB RAM.

 

Why would anyone want to migrate from a bare-metal installation to an ESXi installation with such limited resources?

 

You would not want to lose your already limited resources, would you?

- CPU: a PassMark score of roughly 2500 is not really that much if you plan to run concurrent VMs next to your XPE installation

- RAM: 6 GB of total RAM is not really much if you consider that ESXi already wants a share of it for itself... not much room left for concurrent VMs

- SMART information: without a VT-d capable CPU you won't be able to add an HBA controller and forward it to your XPE VM via DirectPath I/O, so there will be no SMART information available for your drives. Forwarding the B120i does not work. RDM drives do not provide SMART information because VMware only implemented a subset of the SATA/SCSI command set (a host-side workaround is sketched after this list)
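Since neither passthrough nor RDM exposes SMART on such a box, the only place left to read it is the ESXi host itself. A minimal sketch, assuming SSH access to the host (the device name is just an example):

    # list the disks the host sees, to get their device identifiers
    esxcli storage core device list
    # query the SMART data for one of them
    esxcli storage core device smart get -d t10.ATA_____EXAMPLE_2TB_DISK

This only works as long as the host still owns the disks (i.e. the RDM case), and you have to check SMART manually instead of seeing it in DSM's Storage Manager.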

 

So apparently your goal can't be to run concurrent VMs on the same host. Why bother with ESXi then, especially if the only things you gain are disadvantages?

 

I operate two of those boxes, but mine have E3 CPUs with 16 GB each and an LSI controller that I forward to my DSM 6.0.2 instances... So yes, it's possible, if you are willing to take money in your hands and sacrifice it to the hardware-upgrade daemons :wink:

 

I would strongly advise sticking to your bare-metal installation.

