jadehawk

Tutorial: Install DSM 6.2 on ESXi [HP Microserver Gen8] with RDM


22 hours ago, i5Js said:

Well, I can pass through the ASM1062 card, but DSM doesn't recognize its mSATA drive. Via the CLI I can see it, but DSM doesn't :(

 

 

admin@VM:~$ sudo fdisk /dev/sdaf

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdaf: 59.6 GiB, 64023257088 bytes, 125045424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb94a1b35

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sdaf1         2048   4982527   4980480  2.4G fd Linux raid autodetect
/dev/sdaf2      4982528   9176831   4194304    2G fd Linux raid autodetect
/dev/sdaf3      9437184 124840607 115403424   55G fd Linux raid autodetect
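
For anyone chasing the same thing, a minimal sketch of how to check which controller a drive hangs off (so you know which SataPortMap digit covers it). /dev/sdaf is taken from the fdisk output above; the sysfs paths are plain Linux, nothing DSM-specific:

admin@VM:~$ readlink -f /sys/block/sdaf
# the PCI address near the start of the resolved path identifies the SATA controller
admin@VM:~$ ls /sys/class/ata_port
# one entry per SATA port the kernel sees, across all controllers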

 

Playing with the SataPortMap parameter, I've got the SSD working.

 

I've used these settings: SataPortMap=251. I had to set 5 drives on the second controller, because when I put 4 instead of 5, one of the RDM drives disappeared.
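
For reference, a minimal sketch of how that setting would sit in the loader's grub.cfg; the parameters around SataPortMap below are just the stock sata_args defaults, not necessarily my exact values:

     set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=251 SasIdxMap=0'
     # SataPortMap=251 -> 2 ports on the first controller, 5 on the second, 1 on the third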

 

Now I only have two things to fix:

 

** When I reboot the system, it always comes up with a "system partition failed" warning on one of the drives.

** The synoboot is detected as an eSATA drive.



Adding my two cents to this fantastic tutorial.

 

Finally I got all the drives working via passthrough. What I did was:

 

** Enable passthrough on the Cougar Point 6-port SATA AHCI Controller.

** Remember, we now have 3 SATA controllers, so we need to modify SataPortMap in grub. In my case, it looks like this:

     set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=1F SataPortMap=116 SasIdxMap=0'

Basically I've modified:

      1.- DiskIdxMap=1F to hide the synoboot drive

      2.- SataPortMap=116 to tell the bootloader I have one drive on the first SATA controller, 1 on the second, and 6 on the Cougar Point (one way to edit grub.cfg is sketched below).
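
A minimal sketch of one way to apply that edit, assuming Jun's loader layout where grub.cfg lives on the first partition of the synoboot device (you can just as well edit the image on the ESXi side instead):

      sudo mkdir -p /tmp/synoboot
      sudo mount /dev/synoboot1 /tmp/synoboot
      sudo vi /tmp/synoboot/grub/grub.cfg   # adjust DiskIdxMap / SataPortMap in sata_args
      sudo umount /tmp/synoboot
      # reboot so the loader picks up the new values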

 

And boom, it worked flawlessly and recognized the volume perfectly. I only needed to repair the system partition, and now I have my 4 SATA drives and 1 SSD for cache.

 

Hope this helps somebody.

 

Cheers.

 

On 11/30/2018 at 6:29 AM, i5Js said:

Finally I got all the drives working via passthrough. [...] And boom, it worked flawlessly, and now I have my 4 SATA drives and 1 SSD for cache.

 

Excellent Info! Thank you for posting...


@jadehawk hello, I have a similar setup... but I came from ESXi 6.5, then upgraded to 6.7.

 

I also have the 6.2 DSM on a datastore (SSD on the ODD port) and then RDM to two HDDs... but lately I'm thinking about moving away from ESXi... I get very bad latency in ESXi, with highs of 40ms. I already changed the driver to the older one but it never got better than that.

Can you report your latency? Is it like mine?

[screenshot: datastore latency graph]

Thanks


2 hours ago, luchuma said:

[screenshot: datastore latency graph]

Can you post the graph pic?

You have even higher ms 😕



Mine is even higher. But I really only use this copy of XPEnology as a backup for the main Synology box, and it also runs my VPN.

 

[screenshot: datastore latency graph]

 

5 hours ago, Landu said:

Hey, thx for this nice tutorial. 


Which Critical Update did you use?

It's currently up to the latest update.

 

[screenshot: DSM update status]

3 minutes ago, jadehawk said:

Mine is even higher. But I really only use this copy of XPEnology as a backup for the main Synology box, and it also runs my VPN.

 


 

 

Maybe you still have the new ESXi driver that causes this issue.

My XPEnology only runs VPN, Syncthing, Sonarr and Radarr, and the NFS share for Kodi... but in ESXi I also have a Linux VM and the performance is bothering me...

I'm seriously considering moving to Proxmox 5.3... it's hard because this ESXi + XPEnology solution is almost straightforward...


I don't have any idea how to go about installing the older driver. As far as I understand it, the acceptable latency for a VM datastore is 20ms or less (saw it somewhere, just can't remember where).

 

I'm also running Surveillance Station on the XPEnology VM with 1 camera connected (forgot to mention that!).


4 minutes ago, jadehawk said:

I don't have any idea how to go about installing the older driver. As far as I understand it, the acceptable latency for a VM datastore is 20ms or less (saw it somewhere, just can't remember where).

 

I'm also running Surveillance Station on the XPEnology VM with 1 camera connected (forgot to mention that!).

If you search Google for "gen8 esxi performance fix" you will get a lot of results for the fix.

The max latency accepted for a normal ESXi datastore is 10ms... Maybe what you found is 20ms for this controller... but even if that is the case, 40 is double that :/

 

Some users buy other controllers to overcome this issue, but that is a more radical option...

8 hours ago, jadehawk said:

It's currently up to the latest update.

 


 

I'm on 3617xs, so I can't update to the latest version, can I?

What would I have to do to move to 3615xs, without losing my installation?

Is there a difference between the two versions? For example in performance?


@jadehawk this is a Dell server...

[screenshot: Dell server datastore latency graph]

 

As you can see, the maximum latency is 1 ms... I'll grant that the MicroServer is an entry-level server, sometimes with 5400rpm and NAS HDDs, so let's assume 10ms as a maximum... I'm getting 40, so yes, I think that's bad :)

 

One of the links where you can improve your performance: https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/
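
If anyone wants to check their own numbers the same way, a quick sketch using esxtop from an SSH session on the ESXi host (esxtop is built in; the columns named below are the standard ones, nothing Gen8-specific):

esxtop    # press 'd' for the disk adapter view
          # DAVG/cmd = device latency, KAVG/cmd = kernel latency,
          # GAVG/cmd = total guest-visible latency, all in milliseconds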


The B120i driver downgrade is useful only when you use the internal RAID :)

With RDM and the controller in SATA (AHCI) mode it will not help...

 

For me these speeds are acceptable :) Earlier, when I used the new driver and the internal RAID, performance was awful - 50MB/s disk write/read speed. Now it's 110MB/s, and up to 500MB/s with the SSD in the ODD port...

 

Maybe in the future I'll buy another PCIe RAID controller.

 

6 minutes ago, luchuma said:

The B120i driver downgrade is useful only when you use the internal RAID. With RDM and SATA mode it will not help... [...] For me these speeds are acceptable.

 

How do you do the test? 


I run a disk benchmark on a Windows VM,

 

 

and directly from DSM over SSH:

vol1 - btrfs raid1

sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync
Password:
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.71714 s, 98.8 MB/s

 

vol2 - btrfs raid0

sudo dd bs=1M count=256 if=/dev/zero of=/volume2/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.15798 s, 124 MB/s

 

 

and CPU:

dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.56942 s, 418 MB/s
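
For completeness, a matching read test sketch; testx is the file created by the write tests above, and dropping the page cache first is only so the read actually hits the disks instead of RAM (standard Linux, nothing DSM-specific):

sudo sync
echo 3 | sudo tee /proc/sys/vm/drop_caches     # flush cached data so the read is real
sudo dd if=/volume1/testx of=/dev/null bs=1M   # sequential read of the 256MB test file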


