XPEnology Community

Tutorial: Install/DSM 6.2 on ESXi [HP Microserver Gen8] with RDM


jadehawk

Recommended Posts

8 hours ago, Orphée said:

Can I ask why do we need to add/create a dedicated disk (32gb in the video) ?

It is not initialised in the Xpenology VM...

 

Is it possible to not add it and only have synoboot.vmdk and the RDM disks?

 

It is not required, but it is often helpful for troubleshooting, since there is little that can go wrong with its configuration. If you haven't done XPEnology on ESXi before, configure the VM with a virtual disk, don't provision a Storage Pool on it, and then delete it once your RDM drives are up and running correctly.


On 1/18/2021 at 9:17 PM, Orphée said:

I ordered an LSI 9211-8i in IT mode; I will try it.

I hope it is plug & play for passthrough to ESXi (unplug the mini SAS cable from the motherboard and plug it into the LSI card?)

 

My write speeds are not that bad:

 

$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync                                                                                                                              
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.80159 s, 149 MB/s


$ sudo dd bs=1M count=512 if=/dev/zero of=/volume1/testx conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.90437 s, 185 MB/s


$ sudo dd bs=1M count=1024 if=/dev/zero of=/volume1/testx conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.54903 s, 194 MB/s

 

$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 8.00585 s, 268 MB/s

 

$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.0355 s, 329 MB/s

 

$ dd if=/dev/zero bs=1M count=8192 | md5sum
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 24.0138 s, 358 MB/s
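(Side note: the md5sum pipe above only streams zeros from memory through md5sum, so it mostly measures CPU throughput. A rough read test of the volume itself, assuming root access and reusing the test file written above, could drop the page cache and read the file back:)

$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # flush cached data so the read actually hits the disks
$ sudo dd if=/volume1/testx of=/dev/null bs=1M     # read the test file back and report throughput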

 

Forget the graph about write speed, I was just referring to the latency; the disk activity came from my Surveillance Station camera.

 

Edit: graph with count=50000 twice:

 

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in  
50000+0 records out
52428800000 bytes (52 GB) copied, 153.624 s, 341 MB/s

 

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in  
50000+0 records out
52428800000 bytes (52 GB) copied, 155.17 s, 338 MB/s

 

[Image: latency graph covering the two count=50000 runs]

 

Hello again!

 

I wanted to try the latest ESXi 7.0U1c on the Gen8.

 

So I removed the USB key running my working 6.7U3 install and put a new one in instead.

 

I booted the 7.0U1c ISO to do a fresh install, but after installation it failed to boot from the USB key.

 

So I tried again, but this time I installed the HPE-customized 7.0 ISO for Gen10.

 

It worked and I was able to boot from it.

Then I upgraded directly with the latest 7.0U1c ISO and it worked!

 

I'm currently on the latest ESXi 7.0U1c.

 

Then I added the existing XPEnology VM from the SSD datastore, and it works flawlessly:

 

[Image: the existing XPEnology VM running under ESXi 7.0U1c]

 

And last but not least:

[Image: disk latency graph]

 

No more latency issues on the disks! Or is the info just not accurate anymore?

 

Disk write speed seems a bit lower than on 6.7U3:

 

$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync  
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.95899 s, 137 MB/s

 

$ sudo dd bs=1M count=512 if=/dev/zero of=/volume1/testx conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 3.34841 s, 160 MB/s

 

$ sudo dd bs=1M count=1024 if=/dev/zero of=/volume1/testx conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.48982 s, 239 MB/s

 

$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 9.36259 s, 229 MB/s

 

$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 16.3967 s, 262 MB/s

 

$ sudo dd bs=1M count=8192 if=/dev/zero of=/volume1/testx conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 27.519 s, 312 MB/s

 

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 162.903 s, 322 MB/s

 

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 161.099 s, 325 MB/s

 

$ dd if=/dev/zero bs=1M count=8192 | md5sum
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 22.1442 s, 388 MB/s

 

Edit: Hmm, at the host monitoring level, it seems latency is still not very good.

[Image: host-level disk latency graph]

 

Maybe the values reported from inside the VM are not accurate anymore.

Edited by Orphée

Hello,

 

I just received my PCIe P222 SATA RAID card.

 

I flashed it to the latest available firmware: 8.32.

 

Then with HP SSA I enabled HBA mode on it.
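(For reference, the same thing can be done from the command line with the Smart Storage Administrator CLI; a minimal sketch, where the slot number is only an example and must match your controller:)

$ ssacli ctrl all show status            # identify the controller and its slot number
$ ssacli ctrl slot=1 modify hbamode=on   # enable HBA mode; a reboot is typically required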

 

I'm still waiting for the Xeon CPU to replace my current i3, so I still don't have VT-d (passthrough) available.

 

But still, with the card installed, flashed, and HBA mode enabled, I plugged the SAS cable from the motherboard into the P222 card.

Then, in the XPEnology VM settings, I removed the RDM-mapped disks and was able to add them as "New raw disk" instead.

 

Currently there is no real benefit except performance:

 

[Image: disk performance graph]

 

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 133.144 s, 394 MB/s

 

$ dd if=/dev/zero bs=1M count=8192 | md5sum
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 21.8785 s, 393 MB/s

 

Once I receive the CPU, I'll be able to try passthrough to get SMART data inside XPEnology.

Edited by Orphée

SMART data is visible via ESXi RDM... I think it's safer to use RDM than passthrough, since the HBA will be supported by ESXi but that might not be the case for XPEnology. Also, if you ever want to attach the disks to a Linux VM, or even to Windows with the Windows Btrfs driver for data recovery, it will still be easier.

Edited by pocopico

6 minutes ago, pocopico said:

SMART data is visible via ESXi RDM... I think it's safer to use RDM than passthrough, since the HBA will be supported by ESXi but that might not be the case for XPEnology. Also, if you ever want to attach the disks to a Linux VM, or even to Windows with the Windows Btrfs driver for data recovery, it will still be easier.

How do you make SMART data available in XPEnology 6.2.3 with RDM?

 

no_smart.png


1 hour ago, pocopico said:

If you attach RDM disks that sit on a supported HBA, then the SMART data will be visible. If you attach unsupported disks (SATA attached to the mainboard controller) using command-line RDM files, then you won't have SMART data.

Well, I just explained that I attached my 4 disks to the P222 set in HBA mode, added the disks through the menu (not command-line RDM), and I have no SMART data.

 

These disks were in a bare-metal N54L before, with SMART data working.

 

ESXi is on a USB key

The XPEnology VM is on an SSD plugged into the ODD SATA port

4 x 4 TB disks on the P222 HBA controller...

 

No SMART data.

Should I change something in the VM SATA/SCSI configuration?

Currently the 4 drives are attached to SATA controller 1.
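(As a quick cross-check from inside the VM, smartctl can sometimes query a drive directly; a minimal sketch, assuming smartctl is present in your DSM build and that /dev/sdb is one of the data disks — the device name is only an example:)

$ sudo smartctl -a -d sat /dev/sdb   # query SMART through the SAT (SCSI-to-ATA) translation layer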

 

Data from the ESXi host:

 

[Image: storage devices as reported by the ESXi host]

Edited by Orphée

Well, first of all, you shouldn't be seeing the disks as virtual SATA drives, but rather something like the list below, where I have set the disks up as RDM.

 

Anyway, I'm using the onboard HBA, so I had to create the SATA RDMs using the CLI. Like I said, this never worked in my Gen8 MicroServer with ESXi.
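(For reference, a minimal sketch of that CLI approach — the device name and datastore path below are only examples:)

$ ls /vmfs/devices/disks/ | grep -v ':'   # list whole-disk devices (entries with ':' are partitions)
$ vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk   # create a physical-compatibility RDM pointer (-r would create a virtual-compatibility one)
# then attach disk1-rdm.vmdk to the VM as an existing hard disk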

 

Maybe SMART data for SATA RDM disks just doesn't work the way it does on a SAS HBA with SAS disks; that's what I have, and I'm getting SMART data in my VM.

 

[Image: RDM disks as they appear in the VM configuration]

 

This might help you a bit more:

 

 

 

 

Edited by pocopico

To be honest, I have the same issue, so I just don't bother with the SMART details showing up in XPEnology. If I want to know what the SMART values are, I just run a script on the ESXi host to find them out; at least you can do that. If a drive is going to fail, it will fail; I don't really care, and it's very unlikely that two drives will fail at the same time, so I'm not that bothered (hence I have two drives mirrored in a volume).
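(For anyone looking for that, a minimal sketch of reading SMART from the ESXi shell — the device identifier below is only an example, taken from the device list:)

$ esxcli storage core device list                                  # note the t10./naa. device identifiers
$ esxcli storage core device smart get -d t10.ATA_____EXAMPLE_DISK  # print the SMART attribute table for that device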

 

Regards

Edited by conhulio2000

Hello again!

 

So, as stated earlier, passthrough did not work with the P222 set in HBA mode...

 

I just received my LSI 9211-8i with IT mode enabled.

 

I swapped the PCIe cards:

 

- First thing to notice: the case fan noise was quite annoying with the P222, always running above 20%... with the LSI card the case fan sits at 11%, much better...

- With the LSI card, I can't see the disks inside ESXi... so there is no way to add them as RDM.

- I enabled passthrough on the LSI card:

[Image: passthrough enabled for the LSI card in ESXi]

 

Then I removed the old mapped RDM disks (no longer visible anyway) and added the new PCI device to the VM:

[Image: the PCI device added to the VM settings]

 

Then I booted the VM with the serial port enabled and saw this over telnet:

[Image: serial console output seen over telnet]
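(For reference, a VM serial console can be exposed over the network and reached with telnet; a sketch of the idea, where the port number and host name are only examples:)

# VM settings: add a serial port, select "Use network", direction "Server", port URI telnet://:2001
# (the ESXi firewall rule "VM serial port connected over network" must be allowed)
$ telnet esxi-host.local 2001   # attach to the VM's serial console from another machine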

 

Oh crap! Did it work?!

 

Then I was able to log in to my Syno, so the network was working...

Tadaaaaa

[Image: DSM up and running]

 

So passthrough works with this LSI card! I'm very happy :)

 

Edit: And performance is very good:

 

$ sudo dd bs=1M count=128 if=/dev/zero of=/volume1/testx conv=fdatasync                                                                                                                                                                                     
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.928178 s, 145 MB/s

 

$ sudo dd bs=1M count=512 if=/dev/zero of=/volume1/testx conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.09148 s, 257 MB/s

 

$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 7.45988 s, 288 MB/s

 

$ sudo dd bs=1M count=100000 if=/dev/zero of=/volume1/testx conv=fdatasync
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 258.977 s, 405 MB/s

Edited by Orphée

  • 3 weeks later...

Hi, can anyone point me in the right direction for installing 918+ on ESXi, and tell me what software I should be using? I can't figure it out; I downloaded the 1.04b loader and adapted it with an old tutorial I used to get my (now ancient) 5.x server up, but when I boot the VM I just get a Linux splash page (here)

 

I'm on an E3-1245 v3 chip (Haswell) with 32 GB RAM and a flashed LSI card.

 

I'd like to take advantage of hardware transcoding if possible. My previous build, however, was on the 3615xs.

