flyride Posted January 22, 2021 #101

8 hours ago, Orphée said:
Can I ask why we need to add/create a dedicated disk (32 GB in the video)? It is not initialised in the XPEnology VM... Is it possible to not add it and keep only synoboot.vmdk and the RDM disks?

It is not required, but it is often helpful for troubleshooting, because there is very little that can go wrong in its configuration. If you haven't run XPEnology on ESXi before, configure the VM with a virtual disk, don't provision a Storage Pool on it, and delete it once your RDM drives are up and running correctly.
Orphée Posted January 23, 2021 #102 (edited)

On 1/18/2021 at 9:17 PM, Orphée said:
I ordered an LSI 9211-8i IT, I will try it. I hope it is plug & play for passthrough on ESXi (unplug the mini-SAS from the motherboard and plug it into the LSI?)

My write speeds are not that bad:

$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.80159 s, 149 MB/s
$ sudo dd bs=1M count=512 if=/dev/zero of=/volume1/testx conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.90437 s, 185 MB/s
$ sudo dd bs=1M count=1024 if=/dev/zero of=/volume1/testx conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.54903 s, 194 MB/s
$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 8.00585 s, 268 MB/s
$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.0355 s, 329 MB/s
$ dd if=/dev/zero bs=1M count=8192 | md5sum
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 24.0138 s, 358 MB/s

Forget the graph about write speed; I was just referring to the latency. The disk activity came from my Surveillance Station camera.

Edit: graph with count=50000, run twice:

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 153.624 s, 341 MB/s
$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 155.17 s, 338 MB/s

Hello again! I wanted to try the latest ESXi 7.0U1c on the Gen8, so I stopped my working 6.7U3 USB key and installed a new one instead.
I booted the 7.0U1c ISO to make a fresh install, but after installation it failed to boot from the USB key. So I tried again, but this time I installed the HPE-customized 7.0 ISO for Gen10. It worked and I was able to boot from it. Then I directly upgraded with the latest 7.0U1c ISO and it worked! I'm on the latest ESXi 7.0U1c now.

Then I added the existing XPEnology VM from the SSD datastore, and it works flawlessly.

And last but not least: no more latency issue on the disks! Or is that info no longer accurate?

Disk write speed seems a bit lower than on 6.7U3:

$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.95899 s, 137 MB/s
$ sudo dd bs=1M count=512 if=/dev/zero of=/volume1/testx conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 3.34841 s, 160 MB/s
$ sudo dd bs=1M count=1024 if=/dev/zero of=/volume1/testx conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.48982 s, 239 MB/s
$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 9.36259 s, 229 MB/s
$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 16.3967 s, 262 MB/s
$ sudo dd bs=1M count=8192 if=/dev/zero of=/volume1/testx conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 27.519 s, 312 MB/s
$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 162.903 s, 322 MB/s
$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 161.099 s, 325 MB/s
$ dd if=/dev/zero bs=1M count=8192 | md5sum
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 22.1442 s, 388 MB/s

Edit: Hmm, at the host monitoring level it seems latency is still not very good. Maybe the values reported inside the VMs are no longer accurate.

Edited January 23, 2021 by Orphée
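For anyone wanting to reproduce these numbers, here is a minimal sketch of the benchmark pattern used in the posts above. The target path `/tmp/ddtest` is an assumption; point it at the volume you actually want to measure (e.g. `/volume1/testx` on the DSM side).

```shell
#!/bin/sh
# Sequential write benchmark, same pattern as the posts above.
# conv=fdatasync forces dd to flush to disk before reporting, so the
# page cache does not inflate the MB/s figure.
TESTFILE=/tmp/ddtest   # assumption: replace with a path on the volume under test

for COUNT in 256 1024 4096; do
    # tail -n 1 keeps only the "bytes ... copied, ... s, ... MB/s" summary line
    dd bs=1M count=$COUNT if=/dev/zero of=$TESTFILE conv=fdatasync 2>&1 | tail -n 1
done

rm -f "$TESTFILE"
```

Note that small counts mostly measure cache and burst behaviour; the long 50 GB runs above are closer to sustained throughput.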
Orphée Posted January 25, 2021 #103 (edited)

Hello, I just received my PCIe P222 SATA RAID card. I flashed it to the latest firmware available, 8.32, then enabled HBA mode on it with HP SSA.

I'm still waiting for the Xeon CPU to replace my current i3, so I still don't have VT-d (passthrough) available. But still: after installing the card, flashing it, and enabling HBA mode, I plugged the SAS cable from the motherboard bay into the P222 card. Then in the XPEnology VM settings I removed the RDM-mapped disks and was able to add a "New raw disk" instead.

Currently no real benefit except performance:

$ sudo dd bs=1M count=50000 if=/dev/zero of=/volume1/testx conv=fdatasync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 133.144 s, 394 MB/s
$ dd if=/dev/zero bs=1M count=8192 | md5sum
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 21.8785 s, 393 MB/s

Once I receive the CPU I'll be able to try passthrough to get SMART data inside XPEnology.

Edited January 25, 2021 by Orphée
pocopico Posted January 25, 2021 #104 (edited)

SMART data is visible via ESXi RDM... I think it's safer to use RDM than passthrough: the HBA will be supported by ESXi, but that might not be the case for XPEnology. Also, if you ever want to attach the disks to a Linux VM, or even to Windows with the Windows btrfs driver, for data recovery, RDM will still make that easier.

Edited January 25, 2021 by pocopico
Orphée Posted January 25, 2021 #105

6 minutes ago, pocopico said:
SMART data is visible via ESXi RDM... I think it's safer to use RDM than passthrough: the HBA will be supported by ESXi, but that might not be the case for XPEnology.

How do you make SMART data available in XPEnology 6.2.3 with RDM?
pocopico Posted January 25, 2021 Share #106 Posted January 25, 2021 if you attach the RDM disks attached on a supported HBA then the smart data will be visible. If you attach unsupported disks (SATA attached on mainboard controller) using command line RDM files then you wont have SMART data. Quote Link to comment Share on other sites More sharing options...
Orphée Posted January 25, 2021 #107 (edited)

1 hour ago, pocopico said:
If you attach RDM disks that sit on a supported HBA, the SMART data will be visible. If you attach unsupported disks (SATA attached to the mainboard controller) using command-line RDM files, you won't have SMART data.

Well, I just explained that I attached my 4 disks to the P222 set in HBA mode, added the disks through the menu (not command-line RDM), and I have no SMART data. These disks were in a bare-metal N54L before, with SMART data working.

ESXi is on a USB key
The XPEnology VM is on an SSD plugged into the ODD SATA port
4 x 4 TB disks on the P222 HBA controller... no SMART data.

Should I change something in the VM SATA/SCSI configuration? Currently the 4 drives are set on controller SATA1.

Data from the ESXi host: [screenshot]

Edited January 25, 2021 by Orphée
pocopico Posted January 25, 2021 #108 (edited)

Well, first of all you shouldn't be seeing the disks as virtual SATA drives, but rather something like the list below, where I have set the disks to RDM. Anyway, I'm using the onboard HBA, so I had to create the SATA RDMs using the CLI. Like I said, this never worked on my Gen8 MicroServer with ESXi. Maybe SMART data for SATA RDM disks simply doesn't work the way it does on a SAS HBA with SAS disks, which is what I have, and where I do get SMART data in my VM.

This might help you a bit more: [link]

Edited January 25, 2021 by pocopico
Orphée Posted January 26, 2021 #109

@pocopico I have already seen that thread; it ends with no solution.
conhulio2000 Posted January 26, 2021 #110 (edited)

To be honest, I have the same issue, and I just don't bother with the SMART details showing up in XPEnology. If I want to know the SMART values, I just run a script on the ESXi host to read them; at least you can do that. If a drive is going to fail, it will fail. It's very unlikely that two drives will fail at the same time, so I'm not that bothered (I have two drives mirrored in a volume).

Regards

Edited January 26, 2021 by conhulio2000
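The ESXi-host-side check mentioned here can be done with esxcli. A sketch, assuming shell access to the host; the device identifier is a placeholder, and the exact SMART output columns vary between ESXi releases:

```shell
# List storage devices to find the identifier of each physical disk
esxcli storage core device list | grep -i "Display Name"

# Read the SMART attributes for one device (placeholder identifier);
# prints a table of health, reallocated sector count, temperature, etc.
esxcli storage core device smart get -d t10.ATA_____EXAMPLE_DISK
```

This reads SMART from the hypervisor regardless of whether the guest can see it, so it works even when RDM hides the data from DSM.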
Orphée Posted January 28, 2021 #111

Hi, so I tried to pass the P222 through to XPEnology: the Syno fails to boot correctly and is not reachable from the network. The issue seems to be addressed here: [link]
Orphée Posted January 31, 2021 #112 (edited)

Hello again! So, as stated earlier, passthrough did not work with the P222 set in HBA mode... I just received my LSI 9211-8i with IT mode enabled. I swapped the PCIe cards:

- First thing to notice: the case fan noise was quite annoying with the P222, always running above 20%... with the LSI card the case fan sits at 11%, really better.
- With the LSI card, I can't see the disks inside ESXi... so there is no way to add them as RDM.
- I enabled passthrough on the LSI card.

Then I removed the old mapped RDM disks (no longer visible anyway) and added the new PCI device inside the VM. Then I booted the VM with the serial port enabled and saw this over telnet:

Oh crap, did it work?! Then I was able to log in to my Syno, so the network was working: Tadaaaa!

So passthrough works with this LSI card! I'm very happy.

Edit: and performance is very good:

$ sudo dd bs=1M count=128 if=/dev/zero of=/volume1/testx conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.928178 s, 145 MB/s
$ sudo dd bs=1M count=512 if=/dev/zero of=/volume1/testx conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.09148 s, 257 MB/s
$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 7.45988 s, 288 MB/s
$ sudo dd bs=1M count=100000 if=/dev/zero of=/volume1/testx conv=fdatasync
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 258.977 s, 405 MB/s

Edited January 31, 2021 by Orphée
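On ESXi 7.x the passthrough toggle used above can also be inspected from the host shell. A sketch, assuming the 7.0-era `esxcli hardware pci pcipassthru` namespace is available on your build; the PCI address is a placeholder for the LSI card's actual address:

```shell
# Show PCI devices and whether passthrough is enabled for each (ESXi 7.x)
esxcli hardware pci pcipassthru list

# Enable passthrough for a device by its address (placeholder);
# depending on the device, a host reboot may be needed before
# the card can be attached to a VM.
esxcli hardware pci pcipassthru set -d 0000:07:00.0 -e true
```

After that, the card is added to the VM as a PCI device, exactly as in the screenshots above.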
xllbenllx Posted February 20, 2021 #113

Hi, can anyone point me in the right direction for installing 918+ on ESXi, and tell me what software I should be using? I can't figure it out. I downloaded loader 1.04b and adapted it with an old tutorial I used to get my (now ancient) 5.x server up, but when I boot the VM I just get a Linux splash page (here). I'm on an E3-1245 v3 chip (Haswell) with 32 GB RAM and an LSI flashed card. I'd like to take advantage of hardware transcoding if possible; my previous build, however, was on the 3615xs.