XPEnology Community

NVMe cache support


advin


On 1/29/2021 at 11:02 PM, meatball said:

Hi, I've patched libsynonvme.so.1, but I found that only the NVMe SSD in the M2A slot is recognized. How can I use the M2Q slot?

My hardware info:

CPU: Intel i3-9100

Motherboard: GIGABYTE B365M AORUS ELITE

SSD: WD SN750 1TB

I checked the motherboard manual; it says the M2Q slot connects directly to the CPU, while the M2A slot shares bandwidth with the SATA 3.0 ports.

I have a 9900 on a Z390 board. It's been a while since I tried M.2 in XPEnology, but when I tested, both slots were working. Heck, I use an NVMe on a PCIe card now and it works fine.


On 1/29/2021 at 10:02 PM, meatball said:

Hi, I've patched libsynonvme.so.1, but I found that only the NVMe SSD in the M2A slot is recognized. How can I use the M2Q slot?

My hardware info:

CPU: Intel i3-9100

Motherboard: GIGABYTE B365M AORUS ELITE

SSD: WD SN750 1TB

I checked the motherboard manual; it says the M2Q slot connects directly to the CPU, while the M2A slot shares bandwidth with the SATA 3.0 ports.

 

Your motherboard has some limitations between NVMe and SATA SSDs and some overlap in SATA addressing. Not sure if any of this applies to your situation.

[Attached image from the motherboard manual]


On 2/2/2021 at 4:29 PM, flyride said:

I'm not sure there is anything to do about it.  The nvme binary is standard Linux accessing the kernel driver and is unaffected by the patch.  I would guess that you would see the same behavior with a real DS918+.

@flyride so it's safe to use them? ... I've also tested TWO Samsung 970 EVOs (instead of the Plus version) and I got the same result in SSH and the same in DSM (they work like a charm) ... but isn't it very strange? Can the standard Linux binary be so firmware-version dependent?


  • 2 weeks later...

Hi all,

 

I am looking to use an NVMe drive as a separate volume (for installing a certain application to) rather than using as a cache.  So far I have installed the drive and verified that it is seen in the BIOS and also confirmed that DSM is not showing it as available for use.  This led me to this thread.

 

I have read most of the pages / posts and understand that the common use of an NVMe is for cache.  I also saw @Hackaro's useful summary on page 6:

 

Quote

The only thing that really works at the moment is copying libsynonvme.so.1 to the right path. So put this file in a public area of your volume (as in my case) or wherever you like, and then, with root privileges (sudo -i), put the lib in the right place:

 

cp /volume1/public/libsynonvme.so.1 /usr/lib64
cd /usr/lib64
chmod 777 libsynonvme.so.1
shutdown -r now

 

and that's it. Storage Manager should then correctly recognise your NVMe drives and let you use them as cache.

 

2 questions if I may:

  1. Should the above process work for me to be able to see the drive in DSM and configure it as a new volume?
  2. I am a novice when it comes to Linux. Do I perform the above task using SSH? Can anyone offer some further guidance (or point me to a resource) that lays it out in baby steps for me, please?

Thank you guys for your help and support.


7 hours ago, RobPulsar said:

2 questions if I may:

  1. Should the above process work for me to be able to see the drive in DSM and configure it as a new volume?

No. This is a limitation of DSM, not the patch. Right now the only "safe" way to do it is to embed DSM as a VM in ESXi, attach the NVMe disk to ESXi, and present it as an emulated SATA device to the VM.

 

7 hours ago, RobPulsar said:
  2. I am a novice when it comes to Linux. Do I perform the above task using SSH? Can anyone offer some further guidance (or point me to a resource) that lays it out in baby steps for me, please?

Yes, you would need to turn on Telnet and/or SSH via a checkbox in Control Panel. The client apps are usually free, such as PuTTY (on Windows).

 

You do need to learn a little about Linux command line, and there are thousands of Internet resources to help you there.
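To make that concrete, the whole dance from a Windows or Linux machine looks roughly like this once SSH is ticked in Control Panel (the IP address and account name below are placeholders for your own):

ssh admin@192.168.1.50        # log in with a DSM administrator account
sudo -i                       # become root; you will be asked for the same password again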


Hi @flyride

 

Thanks for answering above.  It is a shame that I can't use the NVMe as a separate volume as that was the intended purpose.  I think running VMs for DSM is beyond my technical capability (and sounds risky regarding the volume of data I already have and would not want to lose).  Perhaps I'll take a look at ESXi before completely dismissing it though... in for a penny and all that.

 

I did a little SSH last night actually (just following a guide for something else) so perhaps I can give it a go and at least make use of the NVMe as a cache?

 

Thanks again for helping me out.


On 4/28/2020 at 4:32 PM, indiandave said:

Thanks @flyride

Confirmed that it works on DSM 6.2.3-25423. 

The important thing to note here is to copy the provided "libsynonvme.so.1" file into the /usr/lib64 directory.

And if you were using the previous solution, the "libNVMEpatch.sh" script, don't forget to remove it from the /usr/local/etc/rc.d folder.

 

After you place the file into the /usr/lib64 directory, the DSM UI will stop working, so you have to hard-restart the machine.

After the reboot, the NVMe cache is identified and shows up in Storage Manager.

 

This is the solution that finally worked for me. (page 4)

Note to self: don't forget to chmod.
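Putting those notes together, and assuming the library was uploaded to a shared folder named public, the sequence as root looks roughly like this (the backup step is an extra precaution, not something from the thread):

# keep a copy of the stock library in case you ever need to roll back
cp -a /usr/lib64/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1.orig

# remove the old startup-script approach if it was ever used
rm -f /usr/local/etc/rc.d/libNVMEpatch.sh

# install the patched library and set its permissions
cp /volume1/public/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1
chmod 777 /usr/lib64/libsynonvme.so.1

# the UI may hang at this point; a reboot brings it back
shutdown -r now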

 

Huge shout-out to @The Chief and @flyride for getting us pointed in the right direction to start with.

Edited by Bad Wolf

  • 3 weeks later...
On 11/24/2020 at 11:14 AM, Hackaro said:

 

Just a recap for those who want to try NVMe cache, because the whole thread is quite messy IMHO.

 

The above shell script no longer works with DSM 6.2.3-25426 Update 2 (on DS918+, that is). At least in my experience it leads to an incorrect state where the two NVMe drives are not recognised as identical and therefore cannot be used for the RAID 1 required by a read/write cache.

 

The only thing that really works at the moment is copying libsynonvme.so.1 to the right path. So put this file in a public area of your volume (as in my case) or wherever you like, and then, with root privileges (sudo -i), put the lib in the right place:

 




cp /volume1/public/libsynonvme.so.1 /usr/lib64
cd /usr/lib64
chmod 777  libsynonvme.so.1
shutdown -r now

 

and that's it. Storage Manager should then correctly recognise your NVMe drives and let you use them as cache.

 

This was a quick and easy solution for bare metal (6.2.3-25426 Update 2) using a Crucial P5 M.2 2280 NVMe 1TB SSD. Thank you @Hackaro, and thanks to @The Chief, @flyride and anyone else who contributed to the great work.

 

Just a few questions: should we still expect DSM upgrades to wipe this? Should the cache always be removed before upgrades? Can the patch be re-applied on startup by putting the above commands into a new "libNVMEpatch.sh" placed in /usr/local/etc/rc.d?
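Nobody in the thread confirms the last idea for this particular file, but a sketch of what such an rc.d script could look like is below; the staging path /volume1/public is an assumption and this is untested:

#!/bin/sh
# /usr/local/etc/rc.d/libNVMEpatch.sh -- re-apply the patched NVMe library at boot (make it executable with chmod 755)
case "$1" in
    start)
        cp -f /volume1/public/libsynonvme.so.1 /usr/lib64/libsynonvme.so.1
        chmod 777 /usr/lib64/libsynonvme.so.1
        ;;
    stop)
        # nothing to undo on shutdown
        ;;
esac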

Edited by MrGarak

  • 2 weeks later...

Hi all, I have run into difficulties getting this running. I installed the patch (details below) and although I still get Jun's Loader screen, I can't ping, SSH or browse into DSM.

  • Using Jun's Loader v1.04b and DSM 6.2.2 (can't remember the rest, unfortunately)
  • ASRock MB E3C236D2I
  • WD PC SN520 NVMe (128GB)

 

Prior to issue, I had copied "libsynonvme.so.1" to /volume1/backups/

and ran the following commands:

 

sudo -i

cp /volume1/backups/libsynonvme.so.1 /usr/lib64
cd /usr/lib64
chmod 777  libsynonvme.so.1
shutdown -r now

 

Would appreciate any suggestions on how I could get things back to normal. I've tried a number of reboots and even removed the NVMe drive. Would anyone know how I could get to the CLI so I could at least try deleting the "libsynonvme.so.1" file?


On 5/3/2021 at 9:57 PM, rossi said:

Hi all, I have run into difficulties getting this running. I installed the patch (details below) and although I still get Jun's Loader screen, I can't ping, SSH or browse into DSM.

  • Using Jun's Loader v1.04b and DSM 6.2.2 (can't remember the rest, unfortunately)
  • ASRock MB E3C236D2I
  • WD PC SN520 NVMe (128GB)

 

Prior to issue, I had copied "libsynonvme.so.1" to /volume1/backups/

and ran the following commands:

 


sudo -i

cp /volume1/backups/libsynonvme.so.1 /usr/lib64
cd /usr/lib64
chmod 777  libsynonvme.so.1
shutdown -r now

 

Would appreciate any suggestions on how I could get things back to normal. I've tried a number of reboots and even removed the NVMe drive. Would anyone know how I could get to the CLI so I could at least try deleting the "libsynonvme.so.1" file?

Just found out about the serial port output. Below is the output. I couldn't get my logins to work at the prompt though :-(

Would really appreciate anyone's feedback on this.

 

      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, `e' to edit the commands
      before booting or `c' for a command-line.
[    2.539592] ata5: No present pin info for SATA link down event
[    2.853157] ata7: send port disabled event
[    2.853158] ata7: No present pin info for send port disabled event
[    2.853179] ata8: send port disabled event
[    2.853180] ata8: No present pin info for send port disabled event
patching file etc/rc
Hunk #1 succeeded at 182 (offset 11 lines).
patching file etc/synoinfo.conf
Hunk #2 FAILED at 263.
Hunk #3 FAILED at 291.
Hunk #4 FAILED at 304.
Hunk #5 FAILED at 312.
Hunk #6 FAILED at 328.
5 out of 6 hunks FAILED -- saving rejects to file etc/synoinfo.conf.rej
patching file linuxrc.syno
Hunk #1 succeeded at 40 with fuzz 2 (offset 1 line).
Hunk #2 succeeded at 207 (offset 72 lines).
Hunk #3 succeeded at 645 (offset 93 lines).
patching file usr/sbin/init.post
START /linuxrc.syno
Insert basic USB modules...
:: Loading module usb-common ... [  OK  ]
:: Loading module usbcore ... [  OK  ]
:: Loading module xhci-hcd ... [  OK  ]
:: Loading module xhci-pci ... [  OK  ]
:: Loading module usb-storage ... [  OK  ]
:: Loading module BusLogic ... [  OK  ]
:: Loading module vmw_pvscsi ... [  OK  ]
:: Loading module megaraid_mm ... [  OK  ]
:: Loading module megaraid_mbox ... [  OK  ]
:: Loading module scsi_transport_spi ... [  OK  ]
:: Loading module mptbase ... [  OK  ]
:: Loading module mptscsih ... [  OK  ]
:: Loading module mptspi ... [  OK  ]
:: Loading module mptctl ... [  OK  ]
:: Loading module megaraid ... [  OK  ]
:: Loading module megaraid_sas ... [  OK  ]
:: Loading module scsi_transport_sas ... [  OK  ]
:: Loading module raid_class ... [  OK  ]
:: Loading module mpt3sas ... [  OK  ]
:: Loading module mdio ... [  OK  ]
:: Loading module rtc-cmos ... [  OK  ]
Insert net driver(Mindspeed only)...
Starting /usr/syno/bin/synocfgen...
/usr/syno/bin/synocfgen returns 0
[    4.309420] md: invalid raid superblock magic on sda5
[    4.316554] md: invalid raid superblock magic on sdb5
[    4.368515] md: invalid raid superblock magic on sdc5
[    4.519038] md: invalid raid superblock magic on sdd5
[    4.580943] md: invalid raid superblock magic on sdf3
Partition Version=8
 /sbin/e2fsck exists, checking /dev/md0...
/sbin/e2fsck -pvf returns 0
Mounting /dev/md0 /tmpRoot
------------upgrade
Begin upgrade procedure
No upgrade file exists
End upgrade procedure
============upgrade
Exit on error [2] .noroot exists...
[    5.943437] sd 8:0:0:0: [synoboot] No Caching mode page found
[    5.949193] sd 8:0:0:0: [synoboot] Assuming drive cache: write through
Tue May  4 21:38:07 UTC 2021
/dev/md0 /tmpRoot ext4 rw,relatime,data=ordered 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
sys /sys sysfs rw,relatime 0 0
none /dev devtmpfs rw,relatime,size=8170616k,nr_inodes=2042654,mode=755 0 0
proc /proc proc rw,relatime 0 0
linuxrc.syno failed on 2
starting pid 4839, tty '': '/etc/rc'
:: Starting /etc/rc
:: Mounting procfs ... [  OK  ]
:: Mounting tmpfs ... [  OK  ]
:: Mounting devtmpfs ... [  OK  ]
:: Mounting devpts ... [  OK  ]
:: Mounting sysfs ... [  OK  ]
rc: Use all internal disk as swap.
/etc/rc: line 117: /usr/syno/bin/synodiskpathparse: not found
/etc/rc: line 117: awk: not found

rc: Failed to parse partition sdf2
:: Loading module fat ... [  OK  ]
:: Loading module vfat ... [  OK  ]
:: Loading module udp_tunnel ... [  OK  ]
:: Loading module ip6_udp_tunnel ... [  OK  ]
:: Loading module vxlan ... [  OK  ]
:: Loading module e1000e ... [  OK  ]
:: Loading module i2c-algo-bit ... [  OK  ]
:: Loading module igb ... [  OK  ]
:: Loading module ixgbe ... [  OK  ]
:: Loading module r8168 ... [  OK  ]
:: Loading module mii ... [  OK  ]
:: Loading module libphy ... [  OK  ]
:: Loading module atl1 ... [  OK  ]
:: Loading module atl1e ... [  OK  ]
:: Loading module atl1c ... [  OK  ]
:: Loading module alx ... [  OK  ]
:: Loading module uio ... [  OK  ]
:: Loading module jme ... [  OK  ]
:: Loading module skge ... [  OK  ]
:: Loading module sky2 ... [  OK  ]
:: Loading module qla3xxx ... [  OK  ]
:: Loading module qlcnic ... [  OK  ]
:: Loading module qlge ... [  OK  ]
:: Loading module netxen_nic ... [  OK  ]
:: Loading module sfc ... [  OK  ]
:: Loading module e1000 ... [  OK  ]
:: Loading module pcnet32 ... [  OK  ]
:: Loading module vmxnet3 ... [  OK  ]
:: Loading module bnx2 ... [  OK  ]
:: Loading module cnic ... [FAILED]
:: Loading module tg3 ... [  OK  ]
:: Loading module usbnet ... [  OK  ]
:: Loading module ax88179_178a ... [  OK  ]
:: Loading module button ... [  OK  ]
:: Loading module leds-lp3943 ... [  OK  ]
:: Loading module synobios ... [  OK  ]
udhcpc (v1.16.1) started
udhcpc (v1.16.1) started
[   10.460458] usb 1-9: unable to read config index 0 descriptor/all
[   10.466547] usb 1-9: can't read configurations, error -110
eth0      Link encap:Ethernet  HWaddr D0:50:99:C2:9E:B7
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:16 Memory:df200000-df220000

eth1      Link encap:Ethernet  HWaddr D0:50:99:C2:9E:B8
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Memory:df100000-df17ffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

:: Starting syslogd ... [  OK  ]
:: Starting scemd
:: Starting services in background
Starting findhostd in flash_rd...
Starting services in flash_rd...
Running /usr/syno/etc/rc.d/J01httpd.sh...
Starting httpd:80 in flash_rd...
Starting httpd:5000 in flash_rd...
Running /usr/syno/etc/rc.d/J03ssdpd.sh...
/usr/bin/minissdpd -i eth0 -i eth1
eth0 not RUNNING
(15): upnp:rootdevice
(51): uuid:upnp_SynologyNAS-d05099c29eb8::upnp:rootdevice
(58): Synology/synology_apollolake_918+/6.2-24922/169.254.137.22
(47): http://169.254.137.22:5000/description-eth1.xml
Connected.
done.
/usr/syno/bin/reg_ssdp_service 169.254.137.22 d05099c29eb8 6.2-24922 synology_apollolake_918+ eth1
Running /usr/syno/etc/rc.d/J04synoagentregisterd.sh...
Starting synoagentregisterd...
Running /usr/syno/etc/rc.d/J30DisableNCQ.sh...
Running /usr/syno/etc/rc.d/J80ADTFanControl.sh...
Running /usr/syno/etc/rc.d/J98nbnsd.sh...
Starting nbnsd...
Running /usr/syno/etc/rc.d/J99avahi.sh...
Starting Avahi mDNS/DNS-SD Daemon
cname_load_conf failed:/var/tmp/nginx/avahi-aliases.conf
:: Loading module hid ... [  OK  ]
:: Loading module usbhid ... [  OK  ]
:: Loading module syno_hddmon ... [FAILED]
============ Dat
Tue May  4 21:38:19 2021

DiskStation login:

 


On 5/4/2021 at 10:43 PM, rossi said:

Just found out about the serial port output. Below is the output. I couldn't get my logins to work at the prompt though :-(

Would really appreciate anyone's feedback on this.

 


[full serial console log quoted from the previous post omitted]

 

... success... I created a new USB boot device but couldn't connect... the serial connection was a godsend!

I had no success with it initially. I noticed the two NICs weren't getting an IP address; they were aggregated on the switch on a VLAN. To cut out the complexity, I plugged NIC1 into a VLAN port and assigned a static address of 192.168.1.100 (using the ifconfig command, logged in as 'root' via a telnet daily-password generator). I could finally ping the NAS, so I logged into the web UI and saw that I needed to upgrade and migrate my disks, which had remained connected. I upgraded using "DSM_DS918+_25426.pat" as recommended and, after a while (and a few disk errors while upgrading to the newer file system version), I was back up and running! Hope this is of use to anyone in a similar situation. For me, the breakthrough was the serial port, which I hadn't been aware of, since the console screen doesn't show any log information that I'm aware of.
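For anyone in the same boat, the recovery step described above boils down to something like the following from the serial or telnet console; the interface name and gateway are assumptions, and the address is simply the one rossi used:

# give the first NIC a temporary static address so the web UI becomes reachable
ifconfig eth0 192.168.1.100 netmask 255.255.255.0 up

# optional: add a default gateway if you need to reach the NAS from another subnet
route add default gw 192.168.1.1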


  • 2 months later...
On 2/17/2021 at 9:10 AM, RobPulsar said:

Hi @flyride

 

Thanks for answering above.  It is a shame that I can't use the NVMe as a separate volume as that was the intended purpose.

My intended purpose too, but this was solved earlier in the thread. I've been running the NVMe as a standalone drive for a second year without issue (not even re-patching after updates). The cache isn't a great idea for me, same for a VM; I believe it gets reset after a restart and only allocates around 400 GB of a multi-TB drive. I prefer serving all hot data from the NVMe, using the RAID as a backup/replication target and cold-data storage, and I strongly prefer max speeds over a 10 Gbit NIC, where the RAID seriously lags. Once you do the patch, you've already won; then you just publish the drive as an MD array to make it visible to DSM, and do whatever you like in the UI, e.g. move shares to it.


1 hour ago, nadiva said:

publish the drive as an MD array to make it visible to DSM, and do whatever you like in the UI, e.g. move shares to it.

 

Be careful with this.  Any MD event initiated by the UI will probably damage the integrity of an array with an NVMe member.


12 hours ago, flyride said:

 

Be careful with this.  Any MD event initiated by the UI will probably damage the integrity of an array with an NVMe member.

Since it was formatted with synopartition, the NVMe acts exactly like other arrays: it has the same small Syno partitions, and every UI/CLI disk-related command works on it, from creating volumes, monitoring, trimming and replicating to share transfers back and forth. It has had plenty of time to prove itself and became the most reliable drive with the highest availability, along with the SSD (even the HDD RAID had to be rebuilt once for no reason, just an internal controller hiccup). Not bad for a cheap external PCIe 3.0 x1-x4 adapter. Once NVMe drives are cheap, I will build big arrays from PCIe 5.0 NVMe drives to utilize multiple 40+ Gbit NICs :) In the future, LAN if not WAN connection speeds should exceed local high-end PC speeds. Not with official Syno boxes though; their hardware is good for archaic 100 Mbit Samba only. I think they even use PCIe 2.0, which is why their UI pretends not to see NVMe for volume creation, officially claiming lack of support "because of heat problems" :)


3 minutes ago, nadiva said:

Since it was formatted with synopartition, the NVMe acts exactly like other arrays: it has the same small Syno partitions, and every UI/CLI disk-related command works on it [...]

 

So let me understand: you are manually creating partitions on /dev/nvmeXn1, they have proper nvme nomenclature (i.e. /dev/nvme0n1p1), and they behave as described above?

 

15 hours ago, nadiva said:

Once you do the patch, you've already won; then you just publish the drive as an MD array to make it visible to DSM

 

Why do you even need the patch then? NVMe I/O support already exists without it; the patch only matters for the cache utilities.


1 minute ago, flyride said:

 

So let me understand: you are manually creating partitions on /dev/nvmeXn1, they have proper nvme nomenclature (i.e. /dev/nvme0n1p1), and they behave as described above?

 

 

Why do you even need the patch then? NVMe I/O support already exists without it; the patch only matters for the cache utilities.

Yes, they have that nomenclature, but only after the patch (the stock setup is supposed to hide the NVMe drives via the drivers). I think this will vary per model. My patched file wasn't overwritten by updates and is still patched. The second limitation is in the UI: Syno isn't confident about NVMe arrays on their cheap hardware, so they don't offer volume creation. We just do exactly what DSM would do during volume creation, using the same tools to create the volume, and from then on it's treated equally.


All the patch does is allow Synology's own nvme tools to recognize nvme devices that don't exactly conform to the PCI slots of a DS918+.

The base nvme support is already built into DS918+ DSM and is functional.  So I do not think the patch has any impact on what you are doing.

 

IMHO Syno does not offer NVMe array capable systems because they do not want the cheap systems competing with their expensive ones.

 

If you don't mind, post some Disk Manager screenshots and a cat /proc/mdstat of a healthy running system with your NVMe devices.

Edited by flyride

2 hours ago, flyride said:

All the patch does is allow Synology's own nvme tools to recognize nvme devices that don't exactly conform to the PCI slots of a DS918+.

The base nvme support is already built into DS918+ DSM and is functional.  So I do not think the patch has any impact on what you are doing.

 

IMHO Syno does not offer NVMe array capable systems because they do not want the cheap systems competing with their expensive ones.

 

If you don't mind, post some Disk Manager screenshots and a cat /proc/mdstat of a healthy running system with your NVMe devices.

 

This is a bare-metal setup with the NVMe set up as a standalone drive, same as an SSD:

Device         Boot   Start        End    Sectors  Size Id Type
/dev/nvme0n1p1         2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/nvme0n1p2      4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/nvme0n1p3      9437184 4000795469 3991358286  1.9T fd Linux raid autodetect


md4 : active raid1 nvme0n1p3[0]
      1995678080 blocks super 1.2 [1/1] [U]


ls /volume3
@eaDir  Share1  @Share1@  homes  @homes@  @quarantine  @sharesnap  @sharesnap_restoring  @SnapshotReplication  @synologydrive  @tmp

When placed in a VM, speeds were horrible even with optimized drivers. This way, PCIe 3.0 is utilized to the max.
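For reference, a partition and array layout like the one above is roughly what the following sequence (run as root) would produce; the synopartition layout code (12) and the md device number are assumptions based on this thread, so treat it as a sketch rather than a recipe, and expect to lose any data on the drive:

# write the standard Synology system/swap/data partition layout onto the NVMe
synopartition --part /dev/nvme0n1 12

# build a single-member RAID1 array on the data partition (it shows up as md4 here)
mdadm --create /dev/md4 --level=1 --raid-devices=1 --force /dev/nvme0n1p3

# put a filesystem on it; DSM can then assemble it into a volume
mkfs.btrfs -f /dev/md4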

[Attached screenshots: nvme2.png, nvme1.png]


Thanks.  In my own testing, I've manually created a partition structure similar to what you have done, as has @The Chief who authored the NVMe patch.  You have created a simple, single-element array so there is no possibility of array maintenance.

 

What I have also found in testing is that if there is an NVMe member in a complex (multiple member RAID1, RAID5, etc) array or SHR, an array change often causes the NVMe disk(s) to be dropped.  Do you have more complex arrays with NVMe working as described?

Edited by flyride

2 minutes ago, flyride said:

Do you have more complex arrays with NVMe working as described?

Not really; I didn't fill the second slot (one NVMe cost me half the price of the NAS microserver), but I plan to upgrade to RAID 0 as it fills up.

If what you suspect is true (and RAID 0 is also at risk), I'd consider an adapter like the ASUS Hyper M.2, which I guess has a controller that presents the server with a bootable standalone RAID drive. Hopefully!

If not, some shares will move back to spinning drives. Still, I reckon the "intended purpose" for most people is the single-member array setup; the speed and extra redundancy are worth it IMO.


NVMe is just a PCIe interface - there is no controller involved.  So the ASUS Hyper M.2 is nothing more than a PCIe form factor translation (PCIe slot to M.2)... it doesn't do anything to enable RAID or boot or anything else.

 

Some of the multi-NVMe cards do have some logic - a PCIe switch to enable use of more drives while economizing on PCIe lanes.

