XPEnology Community

How to have DS3622xs recognize an NVMe SSD cache drive (may work on other models).



So I migrated my xpenology server from the DS918+ model to DS3622xs, and the NVMe cache no longer worked, since the model number no longer exists in libsynonvme.so.1. I dug into libsynonvme.so.1 and found that it checks your PCIe location to make the NVMe drive work properly. After inspecting the file, I found it just reads /etc.defaults/extensionPorts, so we only need to modify that.
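For anyone who wants to double-check this before editing anything, the reference to that file can be spotted inside the library itself. A small sketch (the library path is an assumption, and strings may need to come from entware/binutils):

# Reproduce the inspection above: list the extensionPorts reference embedded in the library.
# (adjust the path to wherever libsynonvme.so.1 lives on your install)
strings /usr/lib/libsynonvme.so.1 | grep -i extensionports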

 

Here are the steps:

1. Check your NVMe PCI location (in my case it's 0000:00:01.0):

udevadm info /dev/nvme0n1

P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/nvme/nvme0/nvme0n1

 

2. Modify /etc.defaults/extensionPorts so that the port entry matches your NVMe location.

cat /etc.defaults/extensionPorts

[pci]

pci1="0000:00:01.0"

3. I did not even have to restart, and the NVMe cache drive already appears. Hope this helps anyone who is looking to solve this.
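If you would rather not pick the address out of the udevadm output by eye, the root-port value that goes into the file can be extracted with a couple of standard tools. A rough sketch, assuming the device path layout shown in step 1:

# Print the PCI root-port address for nvme0n1 (the 4th path component, e.g. 0000:00:01.0)
udevadm info /dev/nvme0n1 | sed -n 's/^P: //p' | cut -d/ -f4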


 

 

Update:

 

If you worry that a system update will revert this modification, just add a startup script run as root:

sed -i 's/03.2/[your_pci_last_three_digs]/g' /etc.defaults/extensionPorts

 

This way, no matter which version you upgrade to, the change will always stay.
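If you have more than one drive, or you don't want to hard-code the digits for sed, an alternative is a boot task (Control Panel > Task Scheduler, triggered at boot-up, run as root) that simply rebuilds the file from whatever NVMe devices are present. A rough, untested sketch along those lines (the pci1/pci2 key naming follows the examples later in this thread):

#!/bin/sh
# Rebuild /etc.defaults/extensionPorts at boot so a DSM update cannot revert it.
OUT=/etc.defaults/extensionPorts
echo "[pci]" > "$OUT"
i=1
for dev in /dev/nvme?n1; do
    [ -e "$dev" ] || continue
    # PCI root-port component of the device path, e.g. 0000:00:01.0
    port=$(udevadm info "$dev" | sed -n 's/^P: //p' | cut -d/ -f4)
    echo "pci${i}=\"${port}\"" >> "$OUT"
    i=$((i + 1))
done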

Edited by ryancc

Thanks ryancc

 

I have done mine with 2 NVMe PCIe drives.

 

 udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/nvme/nvme0/nvme0n1

 udevadm info /dev/nvme1n1
P: /devices/pci0000:00/0000:00:01.1/0000:02:00.0/nvme/nvme1/nvme1n1

 

Edited /etc.defaults/extensionPorts and added my two PCI entries:

pci1="0000:00:01.0"
pci1="0000:00:01.1"

 

Thanks, all, for your good work.


4 hours ago, manepape said:


Nice! Good to hear that both of yours are working.


Hey, I will build a tinycore VM on unraid. The question I have at the moment: is it only possible to get an SSD cache working if you pass through a native SSD to the DSM 7 VM, or can I pass through a 200 GB virtual disk carved out of a native 2 TB NVMe and set this "virtual SSD" as a cache in DSM 7?


Hi,

 

Using DSM 7.0 with tinycore redpill running in ESXi 7, I can attach a VIRTUAL NVMe device and it is detected after the simple edit of the "/etc.defaults/extensionPorts" file. Thank you for the tip. However, I can't use it as a cache (the final objective, as I put the VMDK on SSD storage) because SMART presents incorrect data:

 

root@DSM:~# smartctl -a -d nvme /dev/nvme0
smartctl 6.5 (build date Feb 20 2021) [x86_64-linux-4.4.180+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       VMware Virtual NVMe Disk
Serial Number:                      VMWare NVME_0000
Firmware Version:                   1.0
PCI Vendor ID:                      0x15ad:0x15ad
IEEE OUI Identifier:                0x005056
Total NVM Capacity:                 0 [0 B]
Unallocated NVM Capacity:           0 [0 B]
Maximum Data Transfer Size:         8
Number of Namespaces:               1
Controller ID:                      0
Warning  Comp. Temp. Threshold:     -
Critical Comp. Temp. Threshold:     -
Local Time is:                      Sat Mar 12 22:13:16 2022 CET

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning:                   0x00
Temperature:                        11759 Celsius
Available Spare:                    1%
Available Spare Threshold:          100%
Percentage Used:                    0%
Data Units Read:                    ~110680464442257309696
Data Units Written:                 ~92233720368547758080
Host Read Commands:                 ~110680464442257309696000
Host Write Commands:                ~92233720368547758080000
Controller Busy Time:               ~92233720368547758080
Power Cycles:                       ~184467440737095516160
Power On Hours:                     ~1106804644422573096960
Unsafe Shutdowns:                   0
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Error Information (NVMe Log 0x01, max 4 entries)
No Errors Logged

 

Regards.

 


On 3/12/2022 at 10:30 PM, alienman said:

Hi, the main problem with the virtual NVMe is that the "Available Spare" of 1% implies a critical state of the device, and then DSM refuses to initialize and use it. Any idea how to solve this?

I can confirm this. I tried on unraid to pass through a virtual disk that sits on an NVMe... no chance.


Hi,

 

I want to share some more technical information regarding virtual NVMe support:

 

- I've compiled the redpill module with the DBG_SMART_PRINT_ALL_IOCTL flag and installed it in my virtual machine (ESXi) with DSM 7.0.1 to see the IO shim calls (using the dmesg command). The results are:

  1. When executing commands like "synodisk --smart_info_get /dev/nvme0n1", "smartctl -a -d nvme /dev/nvme0n1" and "nvme smart-log /dev/nvme0n1", you see the same results, and in the KERNEL LOG you can check that the "smart_shim" hook is called. However, the values are incorrect because the syscall returns data formatted for a SATA disk and not for an NVMe device. The Synology system then concludes that the device, even though it works, has problems. In this case you can't use it for CACHE, but you can manually create a logical volume on it and continue to use the UI with the "degraded" group. However, this is useless, as you can add a simple virtual SATA disk for a regular volume and it will work without trouble. So the problem here is how to add a virtual NVMe for caching.
  2. After installing the "strace" tool from the "entware" package and executing the commands that request the NVMe status (synodisk, smartctl and nvme), I discovered that in all cases the same ioctl is issued: NVME_IOCTL_ADMIN_CMD instead of the regular HDIO_DRIVE_CMD. So I thought of implementing a simple change in the "sd_ioctl_smart_shim()" function to return different data (formatted for NVMe) if the name of the requested device (block_device *bdev) contains the string "nvme". But this doesn't work! The function receives the device "/dev/sda" as a parameter EVERY TIME an NVMe device is requested. With two virtual SATA devices installed inside xpenology, they're detected as "/dev/sda" and "/dev/sdl", and the virtual NVMe as "/dev/nvme0n1". For the SATA devices the smart_shim function receives the correct block device descriptor, but for the NVMe device it receives the data of sda. That doesn't make much sense, but it is true. I suspect the syno kernel is doing something ugly: when the disk type is not detected, it falls back to device 0, which in the kernel's list of disks corresponds to sda. Perhaps that is why the "sd_ioctl_smart_shim()" function is called at all, even though NOT from the HDIO_DRIVE_CMD ioctl.
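For anyone who wants to reproduce the trace from point 2, something along these lines should work (a sketch, with strace installed from entware as described above; how far the ioctl names get decoded depends on the strace build):

# Trace only the ioctl syscalls smartctl issues against the NVMe device and
# look for the NVMe admin ioctl instead of the classic SATA HDIO ioctls.
strace -f -e trace=ioctl smartctl -a -d nvme /dev/nvme0n1 2>&1 | grep -Ei 'nvme|hdio'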

So, my results at this point suggest that a solution could be achieved if someone developed a new shim for NVMe devices. This new shim would need to intercept the calls to the NVME_IOCTL_ADMIN_CMD ioctl and then return the SMART values in the correct format for NVMe devices. The base could be the current "smart_shim.c" code. But this is beyond my competence, so I can't help with it. Still, I feel this is the correct way to reach a state where XPEnology can work with virtual devices for volumes *and* caching, which IMHO is a must-have for serious use of this project.

 

Therefore, I hope one of the redpill developers will want to target this.
Regards.

 


Hi to all,

 

Finally, I've configured SSD cache in DSM 7.0.1 using virtual disks configured in ESXi. Here is the disk configuration of the virtual machine (that hosts DSM):

- redpill bootdevice: SATA disk.

- volume disks: configured as SATA disks.

- cache disks: configured as SCSI over a paravirtual SCSI controller, with the VMX config file edited to add scsi0:0.virtualSSD = "TRUE" for each cache disk.

 

Then I installed redpill tinycore for DS3622xs+ with DSM 7.0.1-42218. With the "volume" disks configured as SATA and the "cache" disks as SCSI over the pvscsi driver (you need to configure redpill tinycore to add this driver and vmxnet3), everything in the UI works. The volume disks are detected in the range sda to sdl (disks 1-12) and the cache disks from sdm onward, and all of them present correct (fake) SMART values. Therefore, you can create all the volumes you want and attach to them (using the UI) whichever cache disks you want.
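A quick way to double-check from a shell inside DSM that the pvscsi cache disks really get treated as SSDs (a small sketch, assuming the first cache disk lands on sdm as described above):

# 0 means non-rotational (SSD), 1 means rotational (HDD);
# with scsi0:0.virtualSSD = "TRUE" in the VMX this should print 0.
cat /sys/block/sdm/queue/rotational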

 

Note that I've configured the cache disks for READ only. If you want to enable READ-WRITE, I recommend putting the host VMDK files on different devices. Be aware!!


2 hours ago, Indio said:

Hi all,

Can anyone help me find what I am doing wrong?

I can't make the NVMe appear in DSM.

 

Thank you.


But what is the problem? Your NVMe disk is not seen by DSM after the "extensionPorts" change?

Have you rebooted your machine? 

 


10 minutes ago, Indio said:

Yes, it does not show any information in DSM.

I have rebooted.

 

Have you double-checked that the file has actually changed?
I am sorry for asking such obvious things, but recently I was 100% sure that I had changed this file, yet unfortunately the file had not changed.

 


Edited by MajkelP

1 hour ago, MajkelP said:


Hi, yes I did; this is the screenshot I get after the reboot.


I tried the DS918+ image (7.0.1-42218) running on VMware 6.7.0 Update 3 and edited the file as instructed, but the NVMe doesn't appear in DSM.

 

It does appear when querying via SSH

 

P: /devices/pci0000:00/0000:00:18.0/0000:1b:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:18.0/0000:1b:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:18.0/0000:1b:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S9EWJA0N134157E
E: SYNO_DEV_DISKPORTTYPE=UNKNOWN
E: SYNO_INFO_PLATFORM_NAME=apollolake
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=59453

 

cat /etc.defaults/extensionPorts

 

[pci]
pci1="0000:00:18.0"

 

** EDIT **

 

I created a DS3622xs image, migrated DSM from DS918+, edited the file and rebooted; the NVMe now shows in DSM.

 

Edited by irishj

Quote

 

If you worry a system update will revert this modification, just add a startup script with root:

sed -i 's/03.2/[your_pci_last_three_digs]/g' /etc.defaults/extensionPorts

 

 

How would you execute this command when you have two SSDs?

 

Here are my PCI IDs:

 

[pci]
pci1="0000:00:1b.4"
pci2="0000:00:1d.0"

 

Thanks !


I can confirm that this works very easily on DSM 7.1 final. Following the guide, I got a couple of NVMe drives and an inexpensive dual NVMe card, set up PCIe bifurcation on my motherboard, and modified the file accordingly (pci1 for the first device, pci2 for the second, and so on), and in my case it came up after a reboot!


On 3/4/2022 at 5:41 PM, ryancc said:


 

Do you ever have those times when Murphy's Law is like your BEST friend?! 🤣

 

Today I did a fresh, from-the-ground-up baremetal install of DS918+, first to apollolake-7.0.1-42218; then, following @Peter Suh's steps, I surprised even myself and got updated to 7.1-42661U1.

 

At no point was there ever an /etc.defaults/extensionPorts file, so I just used vi to create it and chmod'ed it 644/root to match the surrounding files.

 

Inside I put:

[pci]

pci1=“0000:00:1b.0”

pci2=“0000:00:1d.0”

 

root@TestNAS:~# udevadm info /dev/nvme0n1

P: /devices/pci0000:00/0000:00:1b.0/0000:02:00.0/nvme/nvme0/nvme0n1

N: nvme0n1

E: DEVNAME=/dev/nvme0n1

E: DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:02:00.0/nvme/nvme0/nvme0n1

E: DEVTYPE=disk

E: MAJOR=259

E: MINOR=0

E: PHYSDEVBUS=pci

E: PHYSDEVDRIVER=nvme

E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:02:00.0

E: SUBSYSTEM=block

E: SYNO_ATTR_SERIAL=S27WNX0HA14614

E: SYNO_DEV_DISKPORTTYPE=UNKNOWN

E: SYNO_INFO_PLATFORM_NAME=apollolake

E: SYNO_KERNEL_VERSION=4.4

E: SYNO_SUPPORT_XA=no

E: TAGS=:systemd:

E: USEC_INITIALIZED=675653

 

root@TestNAS:~# udevadm info /dev/nvme1n1

P: /devices/pci0000:00/0000:00:1d.0/0000:05:00.0/nvme/nvme1/nvme1n1

N: nvme1n1

E: DEVNAME=/dev/nvme1n1

E: DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:05:00.0/nvme/nvme1/nvme1n1

E: DEVTYPE=disk

E: MAJOR=259

E: MINOR=5

E: PHYSDEVBUS=pci

E: PHYSDEVDRIVER=nvme

E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:05:00.0

E: SUBSYSTEM=block

E: SYNO_ATTR_SERIAL=S27WNX1HA03950

E: SYNO_DEV_DISKPORTTYPE=UNKNOWN

E: SYNO_INFO_PLATFORM_NAME=apollolake

E: SYNO_KERNEL_VERSION=4.4

E: SYNO_SUPPORT_XA=no

E: TAGS=:systemd:

E: USEC_INITIALIZED=676035

 

Clearly the 2 Samsung NVMe cards are being seen, but despite two reboots, they fail to show in Storage Manager...

 

Any ideas, helpful thoughts/suggestions?  Could it be I left out an EXT when I built the loader? 🤓

 

Thanks


Switched out the Samsung PMxxx NVMe drives for Toshiba ones as an additional test:

 

root@TestNAS:~# udevadm info /dev/nvme0n1

 

P: /devices/pci0000:00/0000:00:1b.0/0000:02:00.0/nvme/nvme0/nvme0n1

N: nvme0n1

E: DEVNAME=/dev/nvme0n1

E: DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:02:00.0/nvme/nvme0/nvme0n1

E: DEVTYPE=disk

E: MAJOR=259

E: MINOR=0

E: PHYSDEVBUS=pci

E: PHYSDEVDRIVER=nvme

E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:02:00.0

E: SUBSYSTEM=block

E: SYNO_ATTR_SERIAL=Y6TS11PRT18T

E: SYNO_DEV_DISKPORTTYPE=UNKNOWN

E: SYNO_INFO_PLATFORM_NAME=apollolake

E: SYNO_KERNEL_VERSION=4.4

E: SYNO_SUPPORT_XA=no

E: TAGS=:systemd:

E: USEC_INITIALIZED=927815

 

root@TestNAS:~# udevadm info /dev/nvme1n1

 

P: /devices/pci0000:00/0000:00:1d.0/0000:05:00.0/nvme/nvme1/nvme1n1

N: nvme1n1

E: DEVNAME=/dev/nvme1n1

E: DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:05:00.0/nvme/nvme1/nvme1n1

E: DEVTYPE=disk

E: MAJOR=259

E: MINOR=1

E: PHYSDEVBUS=pci

E: PHYSDEVDRIVER=nvme

E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:05:00.0

E: SUBSYSTEM=block

E: SYNO_ATTR_SERIAL=17FB7054KSGU

E: SYNO_DEV_DISKPORTTYPE=UNKNOWN

E: SYNO_INFO_PLATFORM_NAME=apollolake

E: SYNO_KERNEL_VERSION=4.4

E: SYNO_SUPPORT_XA=no

E: TAGS=:systemd:

E: USEC_INITIALIZED=928065

 

 

root@TestNAS:~# nvme list

Node             SN                   Model                                    Namespace Usage                      Format           FW Rev

---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------

/dev/nvme0n1     Y6TS11PRT18T         THNSN5256GPUK NVMe TOSHIBA 256GB         1         256.06  GB / 256.06  GB    512   B +  0 B   5KDA4101

/dev/nvme1n1     17FB7054KSGU         THNSN5256GPUK NVMe TOSHIBA 256GB         1         256.06  GB / 256.06  GB    512   B +  0 B   5KDA4103

 

Also, if I run for example "syno_hdd_util --ssd_detect", no SSD drive is listed at all.

 

root@TestNAS:~# lspci -k

0000:00:00.0 Class 0600: Device 8086:191f (rev 07)

Subsystem: Device 1028:06b9

0000:00:01.0 Class 0604: Device 8086:1901 (rev 07)

Kernel driver in use: pcieport

0000:00:02.0 Class 0300: Device 8086:1912 (rev 06)

DeviceName: Intel HD Graphics

Subsystem: Device 1028:06b9

Kernel driver in use: i915

0000:00:14.0 Class 0c03: Device 8086:a12f (rev 31)

Subsystem: Device 1028:06b9

Kernel driver in use: xhci_hcd

0000:00:14.2 Class 1180: Device 8086:a131 (rev 31)

Subsystem: Device 1028:06b9

0000:00:16.0 Class 0780: Device 8086:a13a (rev 31)

Subsystem: Device 1028:06b9

0000:00:16.3 Class 0700: Device 8086:a13d (rev 31)

Subsystem: Device 1028:06b9

Kernel driver in use: serial

0000:00:17.0 Class 0106: Device 8086:a102 (rev 31)

Subsystem: Device 1028:06b9

Kernel driver in use: ahci

0000:00:1b.0 Class 0604: Device 8086:a167 (rev f1)

DeviceName: Intel HD Audio

Kernel driver in use: pcieport

0000:00:1c.0 Class 0604: Device 8086:a110 (rev f1)

Kernel driver in use: pcieport

0000:00:1d.0 Class 0604: Device 8086:a118 (rev f1)

Kernel driver in use: pcieport

0000:00:1f.0 Class 0601: Device 8086:a146 (rev 31)

Subsystem: Device 1028:06b9

0000:00:1f.2 Class 0580: Device 8086:a121 (rev 31)

DeviceName: Onboard SATA #1

Subsystem: Device 1028:06b9

0000:00:1f.3 Class 0403: Device 8086:a170 (rev 31)

Subsystem: Device 1028:06b9

0000:00:1f.4 Class 0c05: Device 8086:a123 (rev 31)

Subsystem: Device 1028:06b9

Kernel driver in use: i801_smbus

0000:00:1f.6 Class 0200: Device 8086:15b7 (rev 31)

Subsystem: Device 1028:06b9

Kernel driver in use: e1000e

0000:02:00.0 Class 0108: Device 1179:0115 (rev 01)

Subsystem: Device 1179:0001

Kernel driver in use: nvme

0000:03:00.0 Class 0604: Device 104c:8240

0000:05:00.0 Class 0108: Device 1179:0115 (rev 01)

Subsystem: Device 1179:0001

Kernel driver in use: nvme

0001:00:12.0 Class 0106: Device 8086:5ae3

0001:00:13.0 Class 0000: Device 8086:5ad8

DeviceName: Intel LOM

0001:00:14.0 Class 0000: Device 8086:5ad6

0001:00:15.0 Class 0c03: Device 8086:5aa8

0001:00:16.0 Class 1180: Device 8086:5aac

0001:00:18.0 Class 1180: Device 8086:5abc

0001:00:19.0 Class 1180: Device 8086:5ac6

0001:00:19.2 Class 1180: Device 8086:5ac6

0001:00:1f.0 Class 0c05: Device 8086:5ad4

0001:00:1f.1 Class 0c05: Device 8086:5ad4

0001:01:00.0 Class 0106: Device 1b4b:9215 (rev 11)

0001:02:00.0 Class 0200: Device 8086:1539 (rev 03)

0001:03:00.0 Class 0200: Device 8086:1539 (rev 03)

 

root@TestNAS:~# ls /dev/nvm*

/dev/nvme0  /dev/nvme0n1  /dev/nvme1  /dev/nvme1n1

 


 

root@TestNAS:~# synonvme --get-location /dev/nvme0

Can't get the location of /dev/nvme0

 

root@TestNAS:~# synonvme --port-type-get /dev/nvme0

Unknown.

 

root@TestNAS:~# ls /sys/block

dm-0  dm-2   loop1  loop3  loop5  loop7  md1  nvme0n1  ram0  ram10  ram12  ram14  ram2  ram4  ram6  ram8  sda  synoboot  zram1  zram3

dm-1  loop0  loop2  loop4  loop6  md0    md2  nvme1n1  ram1  ram11  ram13  ram15  ram3  ram5  ram7  ram9  sdb  zram0     zram2

 

root@TestNAS:~# ls /run/synostorage/disks

sda  sdb

 

root@TestNAS:~# synonvme --m2-card-model-get /dev/nvme0

Not M.2 adapter card

 

root@TestNAS:~# synodiskport -cache

{BLANK NO DATA}

root@TestNAS:~#

 

root@TestNAS:~# fdisk -l /dev/nvme0n1

Disk /dev/nvme0n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors

Disk model: THNSN5256GPUK NVMe TOSHIBA 256GB        

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: gpt

Disk identifier: 5C7696AC-29A2-4505-9B33-C1875991B3D1

 

root@TestNAS:~# fdisk -l /dev/nvme1n1

Disk /dev/nvme1n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors

Disk model: THNSN5256GPUK NVMe TOSHIBA 256GB        

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: gpt

Disk identifier: DC7CE400-4205-03AE-0039-1F778190EB00

 

 

Any ideas, helpful thoughts/suggestions?  So close, yet so far! 

 

Thanks


Are you guys who want and use NVMe SSD cache running 10Gb LAN? I have a couple of real Synology boxes, and the consensus was that unless you are running >1Gb LAN, the SSD/NVMe cache is completely worthless and actually increases the chance of data corruption. I am only running standard gigabit networking in my environment, so I never pursued getting or installing an NVMe SSD cache in any of my boxes; all of them do have the slots, but I was told I would see zero improvement in speed or any other advantage whatsoever. I guess if you are using 2.5Gb LAN or faster, that would be a reason to want the cache.

Edited by phone guy


Yeah, I have 10Gb LAN to the box, but even in that case the speed benefit is kinda marginal.



2 minutes ago, cferra said:



 

Yeah, that's kind of what I have seen in the genuine synology forums as well... I hate to say not worth it, but that has been said by others. 🤐

 

As I recall, the only time the cache was of any benefit was if you were hosting a DB of some kind? I can't remember exactly. In an xpenology build I say why not, go for it... but with my puny 1-gigabit LAN I am sure I would never even be able to tell a difference.

 

Thanks for the quick reply.

