flyride Posted January 11, 2020 (#76)
1 hour ago, Captainfingerbang said:
Speaking of brilliant, I think I've forgotten all of my Linux commands. It's been so long! It appears the system detects the drive, but it's also giving me an error. I DID manage to get the .sh script moved to the proper directory, though!
root@ahern2:~# cp /volume1/test/libNVMEpatch.sh /usr/local/etc/rc.d/
Yeah, the only thing in that whole long terminal session that did anything was the file copy above. You're lucky the random shell activity didn't do any damage as root. Now you need to make the script executable:
On 11/29/2019 at 10:38 AM, The Chief said:
chmod 0755 /usr/local/etc/rc.d/libNVMEpatch.sh
Then reboot and your drive should show up in Storage Manager. Please do yourself a favor and only use R/O cache mode, not R/W ...
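For anyone following along, here is the whole sequence in one place (a minimal sketch; /volume1/test/ is just where the script happened to be uploaded in this case, so adjust the source path to wherever you put yours):

# copy the patch script into DSM's boot-time script directory
cp /volume1/test/libNVMEpatch.sh /usr/local/etc/rc.d/
# make it executable so DSM runs it on every boot
chmod 0755 /usr/local/etc/rc.d/libNVMEpatch.sh
# reboot; the NVMe drive should then appear in Storage Manager
reboot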
nadiva Posted January 18, 2020 (#77)
On 12/12/2019 at 6:50 AM, The Chief said:
One can also use NVMe SSDs as storage. Here are some details (written in Russian, but the screenshots are self-explanatory).
I overlooked this post; I did indeed do this in the meantime and am happily running an NVMe drive. Now I'm searching for a way to turn on full-disk encryption and idle the spinning drives, like it was possible years ago when the procedure was easy: install DSM without the spinning drives, add them later. Nowadays the drives resume immediately after hdparm -Y <drive>.
mervincm Posted January 20, 2020 (#78)
Thanks for the work. With this I was able to mount a spare Intel SSD 750 400GB as a read-only SSD cache. Not sure yet how helpful it will be, but time will tell.
gocaleb Posted February 2, 2020 (#79)
ls -l /dev/mapper/cache*
ls: cannot access /dev/mapper/cache*: No such file or directory
root@DS918:~# !/bin/ash
-ash: !/bin/ash: event not found
root@DS918:~# /bin/ash
ash-4.3# patchnvme for DSM 6.2.x
ash: patchnvme: command not found
ash-4.3#
ash-4.3# TARGFILE="/usr/lib/libsynonvme.so.1"
ash-4.3# root
ash: root: command not found
ash-4.3# exit
exit
root@DS918:~# synonvme --get-location /dev/nvme0n1
Can't get the location of /dev/nvme0n1
root@DS918:~# lspci -k
pcilib: Cannot open /sys/bus/pci/devices/0000:04:00.0/config
Captainfingerbang Posted February 28, 2020 (#80)
Thanks again, flyride. Definitely took your advice on the R/O cache vs R/W; I was only testing. I've personally experienced R/W cache barfing and then killing my Volume 1 in the past. Thank goodness for backups.
One quick question for you, flyride. Though I've already gotten NVMe cache working, I saw you mention something in your DSM 6.2.2-24922 Update 5 reporting thread: an "NVMe patch", as in the image below. What "patch" are you referring to, and where does one find it? I don't recall using a "patch" per se. Gracias!
flyride Posted February 28, 2020 (#81)
1 hour ago, Captainfingerbang said:
In your DSM 6.2.2-24922 Update 5 reporting thread you mention an "NVMe patch", as in the image below. What "patch" are you referring to and where does one find it? I don't recall using a "patch" per se.
The scripts referred to in this thread implement the patch.
IST QIAN Posted March 9, 2020 (#82)
Hi guys. I got two NVMe disks working on my DS918+ build. Thank you very much.
DerMoeJoe Posted April 5, 2020 (#83)
I upgraded my DSM to Update 6 three days ago, and before the update I removed the SSD cache. After the upgrade was done I recreated the cache. Then last night one of my 2 NVMe cache SSDs got corrupted (so DSM said). On the CLI I can see both NVMe SSDs, but inside Synology DSM I can only see 1.
root@NAS:~# nvme list
Node             SN                   Model    Namespace  Usage                  Format       FW Rev
---------------- -------------------- -------- ---------- ---------------------- ------------ --------
/dev/nvme0n1     3QLUS7RSFP174FJTPDR5 INTENSO  1          128.04 GB / 128.04 GB  512 B + 0 B  R1115A0
/dev/nvme0n1p1   3QLUS7RSFP174FJTPDR5 INTENSO  1          128.04 GB / 128.04 GB  512 B + 0 B  R1115A0
root@NAS:~#
So I removed the damaged SSD cache drive, did a proper reboot of DSM, re-ran the script from "The Chief", and rebooted again... and after all this I still only see 1 NVMe drive inside DSM but 2 drives on the CLI. Does anyone have a hint for me?
DerMoeJoe Posted April 13, 2020 (#84)
Short update... I swapped the ports of the 2 NVMe SSDs... and now I've got the following:
root@FSK-NAS:~# nvme list
Node             SN                   Model    Namespace  Usage                  Format       FW Rev
---------------- -------------------- -------- ---------- ---------------------- ------------ --------
/dev/nvme0n1     3QLUS7RSFP174FJTPDR5 INTENSO  1          128.04 GB / 128.04 GB  512 B + 0 B  R1115A0
/dev/nvme1n1     OWNUZUMYDV8D9WHNPMAA INTENSO  1          128.04 GB / 128.04 GB  512 B + 0 B  R1115A0
/dev/nvme1n1p1   OWNUZUMYDV8D9WHNPMAA INTENSO  1          128.04 GB / 128.04 GB  512 B + 0 B  R1115A0
root@FSK-NAS:~#
Does anyone have an idea why I now see 3 NVMe disks in the shell, while in the GUI again only 1 is usable?
IG-88 Posted April 13, 2020 (#85)
1 hour ago, DerMoeJoe said:
Short update... I swapped the ports of the 2 NVMe SSDs... and now nvme list shows /dev/nvme0n1, /dev/nvme1n1 and /dev/nvme1n1p1. Does anyone have an idea why I now see 3 NVMe disks in the shell, while in the GUI again only 1 is usable?
Look closer at the naming, it's just 2 disks. The NVMe naming standard describes:
nvme0: first registered device's controller
nvme0n1: first registered device's first namespace
nvme0n1p1: first registered device's first namespace's first partition
/dev/nvme0n1 3QLUS7RSFP174FJTPDR5 INTENSO -> nvme0, your 1st disk
/dev/nvme1n1 OWNUZUMYDV8D9WHNPMAA INTENSO -> nvme1, your 2nd disk
/dev/nvme1n1p1 OWNUZUMYDV8D9WHNPMAA INTENSO -> nvme1, still your 2nd disk, but "p1" tells you it's the 1st partition on that disk
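If you want to double-check that mapping on your own box, these are plain Linux commands (nothing DSM-specific, just a quick sketch):

# controllers, namespaces and partitions all show up as separate device nodes
ls -l /dev/nvme*
# namespaces and their partitions also appear in the kernel's partition list
grep nvme /proc/partitions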
DerMoeJoe Posted April 15, 2020 (#86)
Update: after re-running the script for enabling the NVMe cache and doing a full reboot, I was able to set up the NVMe cache with the 2 drives again.
flyride Posted April 17, 2020 (#87)
NOTE: the 6.2.3 upgrade overwrites the NVMe lib file. I strongly recommend deleting your R/W cache prior to installing 6.2.3, or you risk corrupting your volume on reboot. The patch installs (meaning the search strings are found), but it is not working completely. There is an additional PCI port check in the new library that is not addressed by this patch.
Selfmade RuLeZ Posted April 17, 2020 (#88)
Yeah, I think the dev guys from Synology are also reading this topic. Can't wait to see what they build next to prevent NVMe usage.
DerMoeJoe Posted April 17, 2020 (#89)
Before the upgrade to 6.2.3 I removed the cache (as always before an update). After the update you can still see the hardware from the console, but running the script and performing a reboot hasn't helped so far to get the cache up again.
The Chief Posted April 17, 2020 (#90)
The "is supported NVMe device" check logic has changed somewhat. I'm inspecting the reversed code for details now…
The Chief Posted April 17, 2020 (#91)
Minor changes. The dev guys now do some stricter PHYSDEV parsing. Nothing special. Attached: libNVMEpatch.sh
DerMoeJoe Posted April 18, 2020 (#92)
@The Chief thanks for your work, but the new NVMe patch doesn't work for me with 6.2.3. I've replaced the old patch, modified the rights with chmod, started the script and rebooted the NAS, but I'm unable to see the NVMe drives within Storage Manager.
The Chief Posted April 18, 2020 (#93)
11 minutes ago, DerMoeJoe said:
@The Chief thanks for your work, but the new NVMe patch doesn't work for me with 6.2.3. I've replaced the old patch, modified the rights with chmod, started the script and rebooted the NAS, but I'm unable to see the NVMe drives within Storage Manager.
Here is a patched .so.1; try replacing it directly in /lib64. Attached: libsynonvme.so.1
flyride Posted April 18, 2020 (#94)
On 4/18/2020 at 3:22 AM, DerMoeJoe said:
@The Chief thanks for your work, but the new NVMe patch doesn't work for me with 6.2.3. I've replaced the old patch, modified the rights with chmod, started the script and rebooted the NAS, but I'm unable to see the NVMe drives within Storage Manager.
If you tried to patch with the old patch, the file won't be in the correct state for the new patch. Restore the old file first (or just use the one @The Chief gave you).
DerMoeJoe Posted April 20, 2020 (#95)
@The Chief @flyride thanks for your help. With the modded library file it works again (y). Important: the library must be placed, as you said, in /lib64/. On my first test I searched for the original file, found it in /usr/lib/, and replaced that one, but after that the web GUI wasn't working. It would probably be a good idea to point out the library path in the starter post (or make another bookmarked topic or something for this).
indiandave Posted April 28, 2020 (#96)
Thanks @flyride. Confirmed that it works on DSM 6.2.3-25423. The important thing to note here is to copy the provided "libsynonvme.so.1" file into the /usr/lib64 directory. And if you were using the previous script-based solution ("libNVMEpatch.sh"), don't forget to remove it from the /usr/local/etc/rc.d folder. After you place the file into the /usr/lib64 directory the DSM UI will stop working, so you have to hard-restart the machine. After the reboot, the NVMe cache is identified and shows up in Storage Manager.
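For reference, the replacement procedure from the last few posts as a single sketch (not authoritative: the posts disagree on the path, with The Chief saying /lib64 and this post saying /usr/lib64, so check where libsynonvme.so.1 actually lives on your install; the /volume1/homes/admin/ source path below is only a placeholder for wherever you uploaded the patched file):

# locate the stock library on your build
find / -name libsynonvme.so.1 2>/dev/null
# keep a backup of the original before overwriting it
cp /usr/lib64/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1.orig
# drop in the patched library posted by The Chief
cp /volume1/homes/admin/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1
# if you previously installed the boot-time patch script, remove it
rm -f /usr/local/etc/rc.d/libNVMEpatch.sh
# the DSM web UI may stop responding at this point; a (hard) reboot brings it back
reboot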
dragosmp Posted April 29, 2020 (#97)
Hi, after the update to 6.2.3 I added the .so.1 and the patch; everything is visible in Storage Manager, but I get the following errors:
[  486.646943] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
[  486.655445] pcieport 0000:00:1d.0: [ 0] Receiver Error (First)
[  488.015090] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
[  488.015097] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
[  488.025522] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
[  488.034230] pcieport 0000:00:1d.0: [ 0] Receiver Error
[  491.950039] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
[  491.950045] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
[  491.960446] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
[  491.968918] pcieport 0000:00:1d.0: [ 0] Receiver Error (First)
The Chief Posted May 14, 2020 (#98)
On 4/29/2020 at 3:50 PM, dragosmp said:
AER: Corrected error received:
Add "pcie_aspm=off" to the common_args_XXXXX parameter in grub.cfg (common_args_ds918 in the DS918 loader, and so on).
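As an illustration only (the existing argument string differs between loader versions, so append pcie_aspm=off to whatever your grub.cfg already contains rather than copying this line; the other arguments shown here are placeholders):

# in grub.cfg on the loader USB stick, extend the existing line, e.g.:
set common_args_ds918='syno_hw_version=DS918+ console=ttyS0,115200n8 root=/dev/md0 pcie_aspm=off'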
mervincm Posted May 14, 2020 (#99)
Has anyone tested on DSM 6.2.3-25426?
The Chief Posted May 14, 2020 (#100)
2 hours ago, mervincm said:
Has anyone tested on DSM 6.2.3-25426?
It works on -25426 without any glitch. Feel free to update.