
Tutorial: Install/DSM 6.2 on ESXi [HP Microserver Gen8] with RDM


jadehawk


22 hours ago, i5Js said:

Well, I can pass through the ASM1062 card, but DSM doesn't recognize its mSATA drive. Via the CLI I can see it, but DSM doesn't :(

 

 

admin@VM:~$ sudo fdisk /dev/sdaf

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdaf: 59.6 GiB, 64023257088 bytes, 125045424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb94a1b35

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sdaf1         2048   4982527   4980480  2.4G fd Linux raid autodetect
/dev/sdaf2      4982528   9176831   4194304    2G fd Linux raid autodetect
/dev/sdaf3      9437184 124840607 115403424   55G fd Linux raid autodetect
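
For anyone debugging a similar "the CLI sees it but DSM doesn't" situation, a quick way to compare what the kernel enumerates against what DSM is actually using is the sketch below, run over DSM's SSH. These are standard Linux commands, nothing XPEnology-specific:

# every block device the kernel knows about
cat /proc/partitions

# how the SATA/AHCI links came up at boot
dmesg | grep -i -e sata -e ahci | tail -n 40

# which devices have joined DSM's md (RAID) arrays
cat /proc/mdstat

A device that shows up in /proc/partitions but never appears in /proc/mdstat is typically one that the SataPortMap/DiskIdxMap settings are hiding from DSM.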

 

Playing with the SataPortMap parameter, I've got the SSD working.

 

I've used these settings: SataPortMap=251. I had to assign 5 drives to the second controller, because when I put 4 instead of 5, one of the RDM drives disappeared.
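
If you want to persist that value instead of typing it at the GRUB prompt, it lives in the loader's grub.cfg. A rough sketch of editing it from a Linux shell, assuming the loader's first partition is at /dev/sdX1 (device name and mount point are placeholders; on ESXi you would edit the synoboot image before converting it to a vmdk):

sudo mount /dev/sdX1 /mnt/synoboot
sudo sed -i 's/SataPortMap=[0-9]*/SataPortMap=251/' /mnt/synoboot/grub/grub.cfg
grep SataPortMap /mnt/synoboot/grub/grub.cfg    # verify the change
sudo umount /mnt/synoboot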

 

Now I only have two things to fix:

 

** When I reboot, the system always comes up with a "system partition failed" warning on one of the drives.

** The synoboot drive is detected as an eSATA drive.


Adding my two cents to this fantastic tutorial.

 

I finally got all the drives working via passthrough. What I did was:

 

** Enable passthrough on the Cougar Point 6-port SATA AHCI Controller.

** Remember, we now have 3 SATA controllers, so we need to modify SataPortMap in the GRUB config. In my case, it looks like this: 

     set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=1F SataPortMap=116 SasIdxMap=0'

Basically I've modified:

      1.- DiskIdxMap=1F to hide the synoboot drive

      2.- SataPortMap=116 to tell the bootloader I have 1 drive on the first SATA controller, 1 on the second, and 6 on the Cougar Point (see the annotated sketch below).
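
To make those two parameters concrete, here is the same line with an annotated reading (the comments reflect the commonly documented loader behaviour, not official Synology documentation):

# SataPortMap=116 -> one digit per SATA controller, in PCI order:
#   controller 1 (synoboot vmdk):   1 port
#   controller 2 (datastore disk):  1 port
#   controller 3 (Cougar Point):    6 ports
# DiskIdxMap=1F -> one two-digit hex value per controller, giving the first
#   disk index for that controller. 0x1F = 31, so the synoboot drive lands
#   past DSM's visible slots and no longer shows up as an eSATA drive.
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=1F SataPortMap=116 SasIdxMap=0'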

 

And boom, it worked flawlessly and recognized the volume perfectly. I only needed to repair the system partition, and now I have my 4 SATA drives and 1 SSD for cache.

 

Hope this helps somebody.

 

Cheers.

 


On 11/30/2018 at 6:29 AM, i5Js said:

Adding my two cents to this fantastic tutorial. […]

 

Excellent Info! Thank you for posting...


@jadehawk hello, I have a similar setup... but I came from ESXi 6.5... then upgraded to 6.7.

 

I also have DSM 6.2 in a datastore (SSD on the ODD port) and then RDM mapping to two HDDs... but lately I'm thinking about moving away from ESXi... I get very bad latency in ESXi... highs of 40 ms. I already changed the driver for the older one but never got better than that.

Can you report your latency? Is it like mine?

[screenshot: datastore latency graph]

Thanks


3 minutes ago, jadehawk said:

Mine is even higher. But I really only use this copy of XPEnology as a backup for the main Synology box, and it also runs my VPN.

 

[screenshot: jadehawk's latency graph]

 

 

Maybe you still have the new ESXi driver that causes this issue.

My XPEnology only has the VPN, Syncthing, Sonarr and Radarr, and the NFS share for Kodi... but in ESXi I also have a Linux VM, and the performance is bothering me...

I'm seriously considering moving to Proxmox 5.3... it's hard because this ESXi+XPEnology solution is almost straightforward...


I don't have any idea how to go about installing the older driver. As far as I understand it, the acceptable latency for a VM datastore is 20 ms or less. (Saw it somewhere, just can't remember where.)

 

I'm also running Surveillance Station on the XPEnology VM with 1 camera connected (forgot to mention that!)


4 minutes ago, jadehawk said:

I don't have any idea how to go about installing the older driver. […]

If you search Google for "gen8 esxi performance fix" you will get a lot of results for the fix.

The max acceptable latency for a normal ESXi datastore is 10 ms... Maybe what you found is 20 ms for this controller... But even if that is the case, 40 is double that :/

 

Some users buy other controllers to overcome this issue, but that is a more radical option...
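
If you want to see where the latency actually comes from, esxtop on the host shell breaks it down (standard ESXi tooling, nothing XPEnology-specific):

esxtop        # then press 'd' for the disk adapter view
# DAVG/cmd = latency at the device (controller + disks)
# KAVG/cmd = time spent inside the VMkernel
# GAVG/cmd = total latency as the guest sees it

High DAVG with low KAVG points at the controller/driver, which is exactly the B120i case discussed below.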


8 hours ago, jadehawk said:

It is currently up to the latest update.

 

[screenshot: DSM update status]

 

I'm on 3617xs, so I can't update to the latest version, can I? 
 


What would I have to do to move to 3615xs without losing my installation? 
 


Is there a difference between the two versions? For example, in performance?


@jadehawk this is a Dell server...

[screenshot: Dell server latency graph]

 

As you can see, the maximum latency is 1 ms... I will assume the Microserver is an entry-level server, often with 5400 rpm and NAS-class HDDs, so let's assume 10 ms as the maximum... I'm getting 40, so yes... I think that is bad :)

 

One of the links where you can improve your performance: https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/
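
The fix in that link boils down to downgrading the hpvsa driver on the host. A hedged outline of the procedure (the VIB file name below is the version that article refers to; verify it against your ESXi build before running anything):

# put the host into maintenance mode first
esxcli system maintenanceMode set --enable true

# swap the current hpvsa driver for the older one
esxcli software vib remove -n scsi-hpvsa -f
esxcli software vib install -v file:/tmp/scsi-hpvsa-5.5.0.88-1OEM.550.0.0.1331820.x86_64.vib --no-sig-check

# the driver change takes effect after a reboot
reboot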


The B120i driver downgrade is useful only when you use internal RAID :)

With RDM and SATA mode it will not help...

 

For me these speeds are acceptable :) Earlier, when I used the new driver and internal RAID, performance was awful - 50 MB/s disk write/read speed. Now it is 110 MB/s, and up to 500 MB/s with the SSD in the ODD port...

 

Maybe in the future I'll buy another PCIe RAID controller.

 


6 minutes ago, luchuma said:

The B120i driver downgrade is useful only when you use internal RAID […]

 

How do you run the test? 


I run a disk benchmark on a Windows VM

 

 

and directly from DSM over SSH:

vol1 - btrfs raid1

sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync
Password:
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.71714 s, 98.8 MB/s

 

vol2 - btrfs raid0

sudo dd bs=1M count=256 if=/dev/zero of=/volume2/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.15798 s, 124 MB/s

 

 

and CPU:

dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.56942 s, 418 MB/s
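
To repeat those dd tests across several volumes in one go, a small sketch (same method as above; it writes and then deletes a 256 MB test file per volume, and the volume paths are examples):

#!/bin/sh
for vol in /volume1 /volume2; do
    echo "== $vol =="
    dd bs=1M count=256 if=/dev/zero of="$vol/testx" conv=fdatasync 2>&1 | tail -n 1
    rm -f "$vol/testx"    # clean up the test file
done

Note that conv=fdatasync makes dd flush to disk before reporting, so the figure reflects real disk throughput rather than the page cache.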


On 12/11/2018 at 8:12 PM, luchuma said:

I run a disk benchmark on a Windows VM […]

 

 

I have RDM; in the HP all the disks are RAID0. In DSM:

volume1 is Basic (ESXi disk on the SSD): 195 MB/s

volume2 is Synology Hybrid RAID (RDM drive 2 in the Gen8): 79 MB/s

volume3 is RAID 1 (RDM drives 3 and 4 in the Gen8): 117 MB/s

 

CPU: 233 MB/s (gt1060)

 

 


  • 2 weeks later...

As mentioned in several threads, using the E1000e adapter is mandatory to run DSM 6.2 on ESXi 6.7. 
However, this limits the speed.
In my Microserver I have a 10GBase-T card; with a virtual PC I get a 10Gb connection (VMXNET3), while the XPE Synology (E1000e) stays at 1Gb.
Is there any hope that this might get changed sometime with a new loader?
I'd really love to back up from server to server over a 10Gb link. 
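
For reference, the adapter type is the ethernetN.virtualDev key in the VM's .vmx file (or the Adapter Type dropdown in the ESXi UI). A sketch of the two variants, with ethernet0 as an example device index:

ethernet0.virtualDev = "e1000e"     # required for DSM 6.2 on ESXi 6.7 per this thread
ethernet0.virtualDev = "vmxnet3"    # 10Gb paravirtual NIC; needs a loader/DSM combination that ships the vmxnet3 driver

In practice an E1000e vNIC can push more than 1 Gb/s between VMs on the same vSwitch despite the reported link speed, but for traffic that actually leaves through the 10GBase-T card the emulated NIC does become the bottleneck.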


Just some quick feedback. 

 

After installing loader 1.02b and DSM 6.1.4 as the initial installation, all went fine, including updates to 6.1.7 Update 2.

 

Now the 10GBase-T card is recognised and shown in the virtual DS3615xs.
As soon as the second one is up and the 10G network ready, I'm going to do some speed tests from HP server to HP server... 


On 11/17/2018 at 4:50 PM, jadehawk said:

To those interested.. 

  • DSM 6.2.1-23824 Update 1 = Did NOT work on ESXi using DS3617xs (changing the network vNIC to E1000e did not help)
  • DSM 6.2.1-23824 Update 1 = Works 100% on ESXi using DS3615xs (changing the network vNIC to E1000e).
  • Migrating the already-set-up volume from DS3617xs to DS3615xs was as simple as reassigning vmdk disks 1-3 to the new VM and repairing the partitions

 

Thank you @haydibe

 

Now Running latest DSM version on my HP Microserver Gen8!!!

 

Dear @jadehawk, your video tutorial has been very helpful. Thank you very much; I seem to have followed your instructions in detail, yet I get the error below when trying to update to DSM 6.2.1-23824 Update 1 (from DSM 6.2-23739 Update 2 on DS3615xs with E1000e). Would you have any idea what I could be doing wrong?

Thanks, J

 

 

[screenshots: DSM update error dialogs]

