XPEnology Community

NVMe cache support


advin

Recommended Posts

1 hour ago, Captainfingerbang said:

Speaking of brilliant, I think I've forgotten all of my Linux commands. It's been so long!

It appears the system detects the drive, but it's also giving me an error.

I DID manage to get the .sh script moved to the proper directory, though!


root@ahern2:~# cp /volume1/test/libNVMEpatch.sh /usr/local/etc/rc.d/

 

 

Yeah, the only thing in that whole long terminal session that did anything was the file copy above. You're lucky the random shell activity didn't do any damage as root.

Now you need to make the script executable:

 

On 11/29/2019 at 10:38 AM, The Chief said:

 


chmod 0755  /usr/local/etc/rc.d/libNVMEpatch.sh

 

 

Then reboot and your drive should show up in Storage Manager.  Please do yourself a favor and only use R/O cache mode, not R/W ...
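
For anyone following along, the whole sequence boils down to this minimal sketch (run as root; the paths and the script name are the ones already posted in this thread):

cp /volume1/test/libNVMEpatch.sh /usr/local/etc/rc.d/    # copy The Chief's patch script into the boot script directory
chmod 0755 /usr/local/etc/rc.d/libNVMEpatch.sh           # make it executable so DSM runs it at every boot
reboot                                                   # the patch is applied on the next startup
# after the reboot, confirm the kernel still sees the SSD:
ls /dev/nvme*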


On 12/12/2019 at 6:50 AM, The Chief said:

One can also use NVMe SSDs as storage. Here are some details (written in Russian, but the screenshots are self-explanatory).

I overlooked this post; I did this in the meantime and am running an NVMe drive happily. Now I'm searching for a solution to turn on full-disk encryption and idle the spinning drives, like it was possible years ago, when the procedure was easy: install DSM without spinning drives and add them later. Nowadays the drives are resumed immediately after hdparm -Y <drive>.

 

NVME as drive 2.png
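
On the spin-down question above: a minimal sketch of the manual hdparm approach (the device name is only an example, substitute your own):

hdparm -Y /dev/sdb     # put the spinning drive to sleep immediately
hdparm -C /dev/sdb     # report the power state; "standby" or "sleeping" means it stuck
# If the drive wakes up again right away, some DSM service is still accessing it;
# the hdparm call itself is not the problem.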


  • 2 weeks later...

 ls -l /dev/mapper/cache*
ls: cannot access /dev/mapper/cache*: No such file or directory
root@DS918:~# !/bin/ash
-ash: !/bin/ash: event not found
root@DS918:~# /bin/ash
ash-4.3# patchnvme for DSM 6.2.x
ash: patchnvme: command not found
ash-4.3#
ash-4.3# TARGFILE="/usr/lib/libsynonvme.so.1"
ash-4.3# root
ash: root: command not found
ash-4.3# exit
exit
root@DS918:~# synonvme --get-location /dev/nvme0n1
Can't get the location of /dev/nvme0n1
root@DS918:~# lspci -k
pcilib: Cannot open /sys/bus/pci/devices/0000:04:00.0/config
 


  • 4 weeks later...

Thanks again, Flyride.

Definitely took your advice on the R/O cache vs. R/W; I was only testing. I've personally experienced R/W cache barfing and then killing my Volume 1 in the past. Thank goodness for backups.

 

 

 

One quick question for you, Flyride. Though I've already gotten the NVMe cache working, I saw you mention something in the thread.

In your DSM 6.2.2-24922 Update 5 reporting thread you mention an "NVMe patch", as in the image below.

What "patch" are you referring to, and where does one find it?

I don't recall using a "patch" per se.

 

 

Thanks!

 

 

31865229_nvmepatch.thumb.png.c80387cba5445e1da33449847db8c269.png


 

 

Edited by Captainfingerbang

1 hour ago, Captainfingerbang said:

In your DSM 6.2.2-24922 Update 5 reporting thread you mention an "NVMe patch", as in the image below.

What "patch" are you referring to, and where does one find it?

I don't recall using a "patch" per se.

 

31865229_nvmepatch.thumb.png.c80387cba5445e1da33449847db8c269.png

 

The scripts referred to in this thread implement the patch.
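
A quick sanity check that the script-based patch is actually in place and will run at boot (using the path already given above):

ls -l /usr/local/etc/rc.d/libNVMEpatch.sh
# it should exist and be executable, e.g. -rwxr-xr-x root root ... libNVMEpatch.sh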


  • 2 weeks later...
  • 4 weeks later...

I upgraded my DSM to Update 6 three days ago, and before the update I removed the SSD cache.

After the upgrade was done, I recreated the cache.

Then yesterday night, one of my two NVMe cache SSDs got corrupted (according to DSM).

On the CLI I can see both NVMe SSDs,

but inside Synology DSM I can only see one.

 

root@NAS:~# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     3QLUS7RSFP174FJTPDR5 INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
/dev/nvme0n1p1   3QLUS7RSFP174FJTPDR5 INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
root@NAS:~#

 

So I removed the damaged SSD cache drive, did a proper reboot of DSM, re-ran the script from The Chief, and rebooted again...

After this, I can again only see one NVMe drive inside DSM, but two drives on the CLI.

Does anyone have a hint for me?
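
As a hedged troubleshooting idea (not from this thread): the nvme-cli tool already used above can show whether the suspect SSD reports media errors; adjust the device name to whatever your nvme list shows:

nvme list                      # which controllers/namespaces the kernel currently sees
nvme smart-log /dev/nvme0n1    # wear, temperature and media error counters for that SSD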


  • 2 weeks later...

Short update...

I've swapped the two NVMe SSDs' ports... and now I've got the following:


root@FSK-NAS:~# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     3QLUS7RSFP174FJTPDR5 INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
/dev/nvme1n1     OWNUZUMYDV8D9WHNPMAA INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
/dev/nvme1n1p1   OWNUZUMYDV8D9WHNPMAA INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
root@FSK-NAS:~#

 

grafik.png.9dbf24d03b17323da7ff11d3cd6b840b.png

 

Does someone have any idea why I can now see three NVMe disks in the shell?

And in the GUI, again only one is usable.


1 hour ago, DerMoeJoe said:

Short update...

I've swapped the two NVMe SSDs' ports... and now I've got the following:


root@FSK-NAS:~# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     3QLUS7RSFP174FJTPDR5 INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
/dev/nvme1n1     OWNUZUMYDV8D9WHNPMAA INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
/dev/nvme1n1p1   OWNUZUMYDV8D9WHNPMAA INTENSO                                  1         128.04  GB / 128.04  GB    512   B +  0 B   R1115A0
root@FSK-NAS:~#

Does someone have any idea why I can now see three NVMe disks in the shell?

 

Look closer at the naming; it's just two disks.

 

The NVMe naming standard describes:

nvme0: first registered device's device controller
nvme0n1: first registered device's first namespace
nvme0n1p1: first registered device's first namespace's first partition

 

/dev/nvme0n1     3QLUS7RSFP174FJTPDR5 INTENSO

-> nvme0, your 1st disk

 

/dev/nvme1n1     OWNUZUMYDV8D9WHNPMAA INTENSO

-> nvme1, your 2nd disk


/dev/nvme1n1p1   OWNUZUMYDV8D9WHNPMAA INTENSO  

-> nvme1, still your 2nd disk, but "p1" tells you it's the 1st partition on that disk
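
You can see the same hierarchy on any box with a quick listing (generic example, not tied to this thread's hardware):

ls -l /dev/nvme*
# /dev/nvme0      -> the controller itself (character device)
# /dev/nvme0n1    -> namespace 1 on that controller (block device)
# /dev/nvme0n1p1  -> partition 1 inside that namespace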

Edited by IG-88

NOTE: The 6.2.3 upgrade overwrites the NVMe lib file.

I strongly recommend deleting your R/W cache prior to installing 6.2.3, or you risk corrupting your volume on reboot.

The patch installs (meaning the search strings are found), but it is not working completely. There is an additional PCI port check in the new library that is not addressed by this patch.
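
For context, the script-based patch is essentially a search-and-replace inside the shared library, which is why the "search strings" matter. A conceptual sketch only, not The Chief's actual script; the search and replacement strings below are placeholders:

TARGFILE="/usr/lib/libsynonvme.so.1"       # the path that appears earlier in this thread
cp "$TARGFILE" "$TARGFILE.bak"             # keep a backup before touching the binary
# swap the hardcoded string the library expects for one matching your hardware;
# the replacement should be the same length so the library's layout is preserved
sed -i 's/OLD_EXPECTED_STRING/NEW_MATCHING_STRING/' "$TARGFILE"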


11 minutes ago, DerMoeJoe said:

@The Chief

 

Thanks for your work, but the new NVMe patch doesn't work for me with 6.2.3.

I've replaced the old patch, modified the permissions with chmod, started the script, and rebooted the NAS, but I'm unable to see the NVMe drives within Storage Manager.

 

Here is the patched .so.1; try replacing it directly in /lib64.

libsynonvme.so.1


On 4/18/2020 at 3:22 AM, DerMoeJoe said:

@The Chief

 

Thanks for your work, but the new NVMe patch doesn't work for me with 6.2.3.

I've replaced the old patch, modified the permissions with chmod, started the script, and rebooted the NAS, but I'm unable to see the NVMe drives within Storage Manager.

 

If you tried to patch with the old patch, the file won't be in the correct state for the new patch.  Restore the old file first (or just use the one @The Chief gave you).
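
A minimal sketch of that restore, assuming you kept a backup of the untouched library (the .bak name is hypothetical):

cp /usr/lib/libsynonvme.so.1.bak /usr/lib/libsynonvme.so.1   # put the original library back
reboot                                                       # then apply the new patch cleanly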

Edited by flyride

@The Chief @flyride

 

Thanks for your help.

With the modded library file it works again. (y)

The important thing is that the library must be placed, as you said, in /lib64/.

On my first test I searched for the original file, found it in /usr/lib/, and replaced that file, but after that the web GUI wasn't working.

It would probably be a good idea to point out the library path in the starter post (or make another bookmarked topic or something for it).
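
Before replacing anything, it is worth checking which of the locations mentioned in this thread actually exist on your build, e.g.:

ls -l /usr/lib/libsynonvme.so.1 /lib64/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1 2>/dev/null
# replace the copy your DSM build really uses; overwriting the wrong one can break the web GUI, as described above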


Thanks @flyride

Confirmed that it works on DSM 6.2.3-25423. 

The important thing to note here is to copy the provided "libsynonvme.so.1" file into the /usr/lib64 directory.

And if you were using the previous script solution ("libNVMEpatch.sh"), don't forget to remove it from the /usr/local/etc/rc.d folder.

After you place the file into the /usr/lib64 directory, the DSM UI will stop working, so you have to hard-restart the machine.

After the reboot, the NVMe cache is identified and shows up in Storage Manager.
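
The same steps as a minimal shell sketch (the source path /volume1/test/ is only an example of where you uploaded the patched file):

rm /usr/local/etc/rc.d/libNVMEpatch.sh                          # drop the old boot-time patch script
cp /volume1/test/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1   # install the patched library
chown root:root /usr/lib64/libsynonvme.so.1
chmod 0644 /usr/lib64/libsynonvme.so.1
# the DSM UI may stop responding at this point, as noted above -- hard-restart the machine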


Hi,

 

After patching on 6.2.3 (adding the .so.1 and the patch), everything is visible in Storage Manager, but I get the following errors:

[  486.646943] pcieport 0000:00:1d.0:   device [8086:a298] error status/mask=00000001/00002000
[  486.655445] pcieport 0000:00:1d.0:    [ 0] Receiver Error         (First)
[  488.015090] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
[  488.015097] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
[  488.025522] pcieport 0000:00:1d.0:   device [8086:a298] error status/mask=00000001/00002000
[  488.034230] pcieport 0000:00:1d.0:    [ 0] Receiver Error
[  491.950039] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
[  491.950045] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
[  491.960446] pcieport 0000:00:1d.0:   device [8086:a298] error status/mask=00000001/00002000
[  491.968918] pcieport 0000:00:1d.0:    [ 0] Receiver Error         (First)
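
All of the messages above are severity=Corrected, i.e. the link recovered on its own. A couple of hedged checks to keep an eye on it:

dmesg | grep -c "AER: Corrected error received"    # how often the corrected errors occur
dmesg | grep -i "severity=Uncorrected"             # anything worse than corrected ever logged?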

 


  • 3 weeks later...
