i5Js Posted November 29, 2018 #26 (edited)

22 hours ago, i5Js said:
Well, I can pass through the ASM1062 card, but DSM doesn't recognize its mSATA drive. Via the CLI I can see it, but DSM doesn't:

admin@VM:~$ sudo fdisk /dev/sdaf

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdaf: 59.6 GiB, 64023257088 bytes, 125045424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb94a1b35

Device Boot Start End Sectors Size Id Type
/dev/sdaf1 2048 4982527 4980480 2.4G fd Linux raid autodetect
/dev/sdaf2 4982528 9176831 4194304 2G fd Linux raid autodetect
/dev/sdaf3 9437184 124840607 115403424 55G fd Linux raid autodetect

Playing with the SataPortMap parameter I got the SSD working. I used SataPortMap=251. I had to assign 5 drives to the second controller, because when I set 4 instead of 5, one of the RDM drives disappeared.

Now I only have two things left to fix:
** When I reboot the system, it always comes up with a "system partition failed" warning on one of the drives.
** The synoboot drive is detected as an eSATA drive.

Edited November 29, 2018 by i5Js
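Before tuning SataPortMap it helps to know which controller each disk actually hangs off. A minimal sketch from a DSM SSH session (assuming lspci is available in the DSM shell; the PCI addresses and ataN numbering differ per system):

# list the SATA/AHCI controllers the kernel sees
lspci | grep -i -e sata -e ahci

# each block device's sysfs symlink target includes the PCI address
# and the ataN port of the controller it is attached to
ls -l /sys/block/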
i5Js Posted November 30, 2018 #27

Adding my two cents to this fantastic tutorial. I finally got all the drives working via passthrough. What I did was:

** Enable passthrough on the Cougar Point 6-port SATA AHCI Controller.
** Remember, we now have 3 SATA controllers, so we need to modify SataPortMap in the grub config. In my case it looks like this:

set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=1F SataPortMap=116 SasIdxMap=0'

Basically I modified:
1.- DiskIdxMap=1F to hide the synoboot drive
2.- SataPortMap=116 to tell the bootloader I have one drive on the first SATA controller, 1 on the second and 6 on the Cougar Point (see the breakdown sketched below)

And boom, it worked flawlessly: DSM recognized the volume perfectly, I only needed to repair the system partition, and now I have my 4 SATA drives and 1 SSD for cache.

Hope this helps somebody. Cheers.
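For reference, a sketch of how the loader reads those two values, using the exact line from the post (the digits are positional, following the PCI enumeration order of the controllers, so your values may differ):

# grub.cfg (Jun's loader), sata_args line as posted:
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=1F SataPortMap=116 SasIdxMap=0'

# SataPortMap=116 -> one digit per SATA controller:
#   1 -> first controller (holds the synoboot vmdk): 1 port
#   1 -> second controller: 1 port
#   6 -> third controller (the passed-through Cougar Point AHCI): 6 ports
# DiskIdxMap=1F -> the first controller's disks start at index 0x1F (31),
#   i.e. /dev/sdaf, which pushes the synoboot drive past DSM's visible
#   drive slots and effectively hides it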
jadehawk (Author) Posted December 5, 2018 #28

On 11/30/2018 at 6:29 AM, i5Js said:
Adding my two cents to this fantastic tutorial. I finally got all the drives working via passthrough. [...]

Excellent info! Thank you for posting...
i5Js Posted December 6, 2018 #29

Happy to help
codedmind Posted December 7, 2018 #30 (edited)

@jadehawk hello, I have a similar setup... but I came from ESXi 6.5, then upgraded to 6.7. I also have DSM 6.2 on a datastore (SSD on the ODD port) and then RDM mappings to two HDDs... but lately I'm thinking about moving away from ESXi. I get very bad latency, with highs of 40 ms. I already changed the driver for the older one but it never got better than that.

Can you report your latency? Is it like mine?

Thanks

Edited December 7, 2018 by codedmind
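For anyone comparing numbers: datastore latency can be read directly on the host with esxtop over SSH. A quick sketch (the column names are standard esxtop ones):

# on the ESXi host, with SSH enabled:
esxtop
# press 'd' for the disk adapter view or 'u' for per-device stats;
# DAVG/cmd = device latency in ms (the storage itself),
# KAVG/cmd = time spent in the VMkernel,
# GAVG/cmd = DAVG + KAVG, the total latency the guest sees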
luchuma Posted December 7, 2018 #31

[screenshot: datastore latency graph]
Landu Posted December 7, 2018 #32

Hey, thx for this nice tutorial. Which Critical Update did you use?
codedmind Posted December 7, 2018 #33 (edited)

2 hours ago, luchuma said:
[screenshot]

Can you post the graphic? You have even higher ms 😕

Edited December 7, 2018 by codedmind
luchuma Posted December 7, 2018 #34

[screenshot: datastore latency graph]
jadehawk (Author) Posted December 7, 2018 #35

Mine is even higher. But I really only use this copy of XPEnology as a backup for the main Synology box, and it also runs my VPN.
jadehawk (Author) Posted December 7, 2018 #36

5 hours ago, Landu said:
Hey, thx for this nice tutorial. Which Critical Update did you use?

It's currently up to the latest update.
codedmind Posted December 7, 2018 #37

3 minutes ago, jadehawk said:
Mine is even higher. But I really only use this copy of XPEnology as a backup for the main Synology box, and it also runs my VPN.

Maybe you still have the new ESXi driver that causes this issue. My XPEnology only runs a VPN, Syncthing, Sonarr and Radarr, and the NFS share for Kodi... but in ESXi I also have a Linux VM, and the performance is wearing me down... I'm seriously considering moving to Proxmox 5.3... It's a hard call, because this ESXi + XPEnology solution is almost straightforward...
jadehawk (Author) Posted December 8, 2018 #38 (edited)

I don't have any idea how to go about installing the older driver. As far as I understand it, the acceptable latency for a VM datastore is 20 ms or less (saw it somewhere, just can't remember where). I'm also running Surveillance Station on the XPEnology VM with 1 camera connected (forgot to mention that!)

Edited December 8, 2018 by jadehawk
codedmind Posted December 8, 2018 #39

4 minutes ago, jadehawk said:
I don't have any idea how to go about installing the older driver. [...]

If you search Google for "gen8 esxi performance fix" you will get a lot of results. The maximum latency accepted for a normal ESXi datastore is 10 ms... Maybe what you found was 20 ms for this controller... But even if that is the case, 40 is double that :/ Some users buy other controllers to overcome this issue, but that is a more radical option...
Landu Posted December 8, 2018 #40

8 hours ago, jadehawk said:
It's currently up to the latest update.

I'm on a 3617xs, so I can't update to the latest version, can I? What would I have to do to switch to a 3615xs without losing my installation? Is there a difference between the two versions, for example in performance?
codedmind Posted December 10, 2018 #41

@jadehawk this is a Dell server...

[screenshot: Dell server datastore latency graph]

As you can see, the maximum latency there is 1 ms. The MicroServer is an entry-level box, often with 5400 rpm and NAS-class drives, so let's assume 10 ms as the maximum... I'm getting 40, so yes, I think that's bad.

Here is one of the links where you can improve your performance: https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/
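The fix behind that link comes down to rolling the B120i driver (scsi-hpvsa) back to an older build. A rough sketch of the commands involved; the VIB filename below is the build such guides commonly reference, so treat it as a placeholder and use whatever you actually downloaded:

# put the host in maintenance mode, copy the old VIB to /tmp, then:
esxcli software vib install -v /tmp/scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib --force --no-sig-check --maintenance-mode

# reboot so the older driver takes over
reboot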
luchuma Posted December 11, 2018 #42

The b120i driver downgrade is useful only when you use the internal RAID; with RDM and SATA mode it will not help... For me these speeds are acceptable. Earlier, when I used the new driver and the internal RAID, performance was awful: 50 MB/s disk write/read speed. Now it is 110 MB/s, and up to 500 MB/s with the SSD in the ODD port... Maybe in the future I'll buy another PCIe RAID controller.
codedmind Posted December 11, 2018 #43

6 minutes ago, luchuma said:
The b120i driver downgrade is useful only when you use the internal RAID; [...]

How do you run the test?
luchuma Posted December 11, 2018 #44 (edited)

I run a disk benchmark on a Windows VM and directly from DSM over SSH.

vol1 - btrfs RAID1:

sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync
Password:
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.71714 s, 98.8 MB/s

vol2 - btrfs RAID0:

sudo dd bs=1M count=256 if=/dev/zero of=/volume2/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.15798 s, 124 MB/s

and CPU:

dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.56942 s, 418 MB/s

Edited December 11, 2018 by luchuma
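A matching read test only means something if the page cache is dropped first, otherwise DSM serves the file straight from RAM. A sketch, reusing the test file written above:

# drop the Linux page cache so the read really hits the disks
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# read the 256 MB test file back and let dd report the throughput
sudo dd if=/volume1/testx of=/dev/null bs=1M

# remove the test files when done
sudo rm /volume1/testx /volume2/testx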
codedmind Posted December 13, 2018 #45

On 12/11/2018 at 8:12 PM, luchuma said:
I run a disk benchmark on a Windows VM and directly from DSM over SSH. [...]

I have RDM; in the HP, all the disks are RAID0. In DSM:
volume1 is Basic (ESXi disk on the SSD) -> 195 MB/s
volume2 is Synology Hybrid RAID (RDM drive 2 in the Gen8) -> 79 MB/s
volume3 is RAID 1 (RDM drives 3 and 4 in the Gen8) -> 117 MB/s
CPU: 233 MB/s (gt1060)
hpk Posted December 21, 2018 #46

As mentioned in several threads, the E1000e adapter is mandatory to run 6.2 on ESXi 6.7. However, this limits the speed. In my MicroServer I have a 10GBase-T card; with a virtual PC I get a 10 Gb connection (VMXNET3), but with the XPE Synology (E1000e) it stays at 1 Gb. Is there any hope that this might change someday with a new loader? I'd really love to back up from server to server over a 10 Gb link.
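For context, the adapter type is set per virtual NIC in the VM's .vmx file (a sketch; ethernet0 is assumed to be the NIC in question):

# what DSM 6.2 with the current loader needs on ESXi 6.7:
ethernet0.virtualDev = "e1000e"

# what a 10 Gb-capable guest would use instead, if a loader supported it:
# ethernet0.virtualDev = "vmxnet3"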
hpk Posted December 22, 2018 #47

I guess I'd rather go back to 6.1.7... the last release supporting higher network speeds... But this seems not to be as simple as I thought. Lots of trouble so far... but I keep trying.
hpk Posted December 24, 2018 #48 (edited)

Just a short bit of feedback. After installing loader 1.02b and DSM 6.1.4 as the initial installation, all went fine, including updating to 6.1.7 Update 2. Now the 10GBase-T card is recognised and shown in the virtual DS3615xs. As soon as the second one is up and the 10G network is ready, I'm going to do some speed tests from HP server to HP server...

Edited December 24, 2018 by hpk (typo)
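A simple way to verify the raw 10G link before real file transfers is iperf3; it is not part of stock DSM, so assume it comes from a SynoCommunity package or a Docker container. A sketch (10.0.0.10 is a placeholder address):

# on the first server:
iperf3 -s

# on the second server:
iperf3 -c 10.0.0.10 -t 30
# a healthy 10GBase-T link should land somewhere near 9.4 Gbits/sec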
flyride Posted December 24, 2018 #49

This is one of the reasons I am still using 6.1.7 on my system with 10GbE.
JJJL Posted December 29, 2018 #50

On 11/17/2018 at 4:50 PM, jadehawk said:
To those interested... DSM 6.2.1-23824 Update 1 did NOT work on ESXi using DS3617xs (changing the network vNIC to E1000e did not help). DSM 6.2.1-23824 Update 1 works 100% on ESXi using DS3615xs (changing the network vNIC to E1000e). Migrating the already set-up volume from DS3617xs to DS3615xs was as simple as reassigning vmdk disks 1-3 to the new VM and repairing the partitions. Thank you @haydibe. Now running the latest DSM version on my HP Microserver Gen8!!!

Dear @jadehawk, your video tutorial has been very helpful. Thank you very much; I seem to have followed your instructions in detail. Yet I get the error below when trying to update to DSM 6.2.1-23824 Update 1 (from DSM 6.2-23739 Update 2 on a DS3615xs with E1000e). Would you have any idea what I could be doing wrong?

[screenshot: update error message]

Thanks, J