XPEnology Community

How to have DS3622xs recognize an NVMe SSD cache drive (may also work on other models).



 
Yeah, that's kind of what I have seen in the genuine Synology forums as well... I hate to say it's not worth it, but that has been said by others.
 
As I recall, the only time the cache was of any benefit was if you were hosting a DB of some kind? I can't remember exactly. In an XPEnology build I say why not, go for it... but with my puny 1-gigabit LAN I am sure I would never even be able to tell a difference.
 
Thanks for the quick reply.

Yeah - I was hoping for some more benefit too. I think QNAP is better in this regard, or maybe if using standard RAID as opposed to SHR we might see some more tangible benefits.



12 minutes ago, phone guy said:

Are you guys who are wanting and using SSD/NVMe cache running 10Gb LAN? I have a couple of real Synology boxes, and the consensus was that unless you are running >1Gb LAN, the SSD/NVMe cache was completely worthless and actually increased the chance of data corruption. I am only running standard gigabit networking in my environment, so I never pursued getting or installing an NVMe SSD cache in any of my boxes. All of them do have the slots, but I was told I would see zero improvement in speed or any other advantage whatsoever. I guess if you are using 2.5Gb LAN or faster, that would be a reason to want the cache.

 

I have a 10GbE LAN and an all-SSD array. Four SATA SSDs will max out the 10GbE link, so it makes no sense to use cache in that case.

 

Additionally, all DSMs use whatever RAM is free to cache recently used files, so if you have a heavy multi-user workload that all refers to a small dataset, it is already cached.

 

Personally I think it's a bit of a gimmick/product differentiation.  Not too much value if you ask me, other than to stress test the cache device.


I never understood why it was said that having the NVMe cache could increase the potential for data corruption. Maybe it was from using the wrong type of NVMe? I can't remember exactly.

BTW: Not trying to rain on anyone's cache parade, I just wanted to ask the question for my own curiosity.


4 minutes ago, flyride said:

 

I have a 10GbE LAN and an all-SSD array. Four SATA SSDs will max out the 10GbE link, so it makes no sense to use cache in that case.

 

Additionally, all DSMs use whatever RAM is free to cache recently used files, so if you have a heavy multi-user workload that all refers to a small dataset, it is already cached.

 

Personally I think it's a bit of a gimmick/product differentiation.  Not too much value if you ask me, other than to stress test the cache device.

Yeah, my box has 64GB of RAM and it's just me at home with a lot of spare time and Plex media to store/watch, so in my use case... it's definitely marginal.
 

But I got the 2x NVMe card cheap, my motherboard supports bifurcation, and the MSI drives were extremely cheap, so it was worth the test.


Just now, phone guy said:

I never understood why it was said that having the NVMe cache could increase the potential for data corruption. Maybe it was from using the wrong type of NVMe? I can't remember exactly.

BTW: Not trying to rain on anyone's cache parade, I just wanted to ask the question for my own curiosity.

If the cache drives are set as a read/write cache and they fail, it will take down the entire array.

Synology warns about it in the UI, but I think people can override it.

Even in a RAID, if enough drives go down it will still take the array down too.



For any Proxmox VE users who may be struggling with direct passthrough of an NVMe PCI controller (not para-virtualized via QEMU) - which means real SMART values, etc. - I was able to accomplish this by doing the following:

 

1. Create a Q35 machine with OVMF firmware. Uncheck creation of an EFI disk (note there'll be a warning when booting the VM due to the missing EFI disk, but it doesn't matter).

2. Add a PCI device, choose your NVMe controller, and make sure to check the 'PCI Express' checkbox. This checkbox is only available in a Q35 machine, and at least in my case was the key to making this work. Here's what mine looks like - this is my NVMe drive:

 

[Screenshot: Proxmox PCI device settings for the passed-through NVMe controller]

 

3. Add whatever other devices (e.g. SATA controllers or virtual disks) to your VM. You can also add a serial console if you find it useful, but remember you'll have to choose the noVNC option to see the BIOS screen.

4. Download your bootloader of choice - TCRP or arpl (I tested both successfully) - and extract the .img file to somewhere on your PVE host. If using TCRP, make sure to grab the UEFI img file. I extracted to /var/lib/vz/template/iso/<vmid>-arpl.img so that it displays in the PVE GUI, but it really doesn't matter.

 

5. Add the following line to /etc/pve/qemu-server/<vmid>.conf, making sure to update the path to the TCRP/arpl .img file:

args: -device 'nec-usb-xhci,id=usb-bus0,multifunction=on' -drive 'file=<path-to-arpl-img-file>,media=disk,format=raw,if=none,id=drive-disk-bootloader' -device 'usb-storage,bus=usb-bus0.0,port=1,drive=drive-disk-bootloader,id=usb-disk-bootloader,bootindex=999,removable=on'

 

6. You can now boot the VM and walk through the TCRP/arpl configuration, then boot and install DSM as normal (make sure to select USB boot if using TCRP), and follow the steps earlier in this thread to update /etc.defaults/extensionPorts. I needed to reboot XPEnology after updating the file in order for the NVMe drive to show up.
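
For reference, here is roughly what steps 1-3 look like from the Proxmox CLI instead of the GUI. This is only a sketch based on my setup - the VM ID (100), storage (local-zfs), bridge (vmbr1) and PCI address (0000:02:00.0) are examples and will differ on your system:

# Create a Q35 VM with OVMF firmware (no EFI disk), a virtio NIC and a 32G virtual disk
qm create 100 --name testxpen --machine q35 --bios ovmf --ostype l26 --cores 2 --memory 8192 --net0 virtio,bridge=vmbr1 --scsihw virtio-scsi-single --scsi0 local-zfs:32

# Pass through the NVMe controller with the 'PCI Express' flag (pcie=1)
qm set 100 --hostpci0 0000:02:00.0,pcie=1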

 

And here is the drive in DSM, with SMART data and everything! (yes, it is a little too warm..)

[Screenshot: the NVMe drive visible in DSM, including SMART data]

 

Here are screenshots/configurations of my VM:

[Screenshot: Proxmox VM hardware overview]

 

VM conf file:

# cat /etc/pve/qemu-server/100.conf
agent: 1
args: -device 'nec-usb-xhci,id=usb-bus0,multifunction=on' -drive 'file=/var/lib/vz/template/iso/100-arpl.img,media=disk,format=raw,if=none,id=drive-disk-bootloader' -device 'usb-storage,bus=usb-bus0.0,port=1,drive=drive-disk-bootloader,id=usb-disk-bootloader,bootindex=999,removable=on'
bios: ovmf
boot: order=ide2
cores: 2
hostpci0: 0000:02:00.0,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=7.1.0,ctime=1672872380
name: testxpen
net0: virtio=CE:42:B4:E5:97:37,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-100-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=44f382c2-7279-44a9-b06e-af44c19d713d
sockets: 1
vmgenid: 89bdbe7c-9d16-4405-ad60-cab2cf605383

 

udevadm output from within xpenology:

# udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1c.0/0000:01:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=20332A42BE77
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=broadwellnk
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=916119

 

/etc.defaults/extensionPorts:

# cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:1c.0"

 

Note that the versions I used to test this method are:

Proxmox VE 7.3-3

TinyCore RedPill 0.9.3.0

arpl 1.0 beta 9

DSM 7.1.1


A couple of other notes on this:

 

When using the q35 machine type, assigning more than 2 cores results in a kernel panic during boot. I tried using both kvm64 and host CPU types, but both resulted in the same error. Switching back to i440fx fixes this, but then the NVMe is no longer detected by DSM. This may be specific to my system's hardware - i7-8700 on a Q370 chipset.

A few corrections - I no longer believe q35 vs. i440fx impacts the kernel panics at boot. In my observations, this seems to occur only with a PCI device assigned, and over some threshold of CPU cores plus memory assigned to the VM - in my case, both 2 cores with 8GB and 4 cores with 4GB booted without panic, but any increase to either resource resulted in a kernel panic at boot.

I believe I've found a solution: enabling direct boot in the arpl Advanced menu. I had stumbled across this thread mentioning the fix: https://github.com/fbelavenuto/arpl-modules/issues/94. I now have the following configuration booting reliably:

  • Machine type: q35
  • BIOS: OVMF (I'm using this because with my specific PCI SATA controllers, SeaBIOS hangs on POST for >1 minute - an Option ROM issue?)
  • Cores: 4
  • Memory: 8GB
  • Direct boot enabled in arpl

 

As an alternative to adding the 'args' line to the VM config to attach a USB device for arpl, you can attach the image as a SATA device:

 

qm set <vmid> --sata0 local-lvm:0,import-from=/path/to/arpl.img

 

Make sure to replace the placeholders with the specifics for your environment, and set the boot order in your VM config as well. arpl correctly detects the drive as a SATA DOM; I haven't tested this with TCRP.
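
As an illustration (not tested on every setup), with the VM ID and image path used earlier in this post, the two commands might look something like:

# Import the arpl image as a SATA disk on VM 100
qm set 100 --sata0 local-zfs:0,import-from=/var/lib/vz/template/iso/100-arpl.img

# Boot from that SATA disk
qm set 100 --boot order=sata0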


On 3/4/2022 at 3:41 PM, ryancc said:

So I migrated my XPEnology server from the DS918+ model to DS3622xs, and the NVMe cache no longer worked since the model number no longer exists in libsynonvme.so.1. I dug into libsynonvme.so.1 and found it might check your PCIe location for the NVMe drive to work properly. After inspecting the file, I found it just checks /etc.defaults/extensionPorts, and we just need to modify that.

 

Here are the steps:

1. Check your NVMe PCI location (in my case it's 0000:00:01.0):

udevadm info /dev/nvme0n1

P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/nvme/nvme0/nvme0n1

 

2. Modify /etc.defaults/extensionPorts so the port number matches your NVMe location.

cat /etc.defaults/extensionPorts

[pci]

pci1="0000:00:01.0"

3. I did not even need to restart, and the NVMe cache drive already appears. Hope this helps anyone who is looking to solve this.

[Screenshots: the NVMe cache drive now available in DSM]

 

 

Update:

 

If you worry that a system update will revert this modification, just add a startup script that runs as root:

sed -i 's/03.2/[your_pci_last_three_digs]/g' /etc.defaults/extensionPorts

 

This way, no matter what version you upgrade to, the change will always stay.
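
If you would rather not hard-code the PCI digits, a small startup script along these lines (an untested sketch; it assumes a single NVMe drive at /dev/nvme0n1 and that extensionPorts contains only the [pci] section) could derive the address with udevadm, as in step 1, and rewrite the file:

#!/bin/sh
# Rewrite /etc.defaults/extensionPorts with the NVMe drive's root-port address.
# DEVPATH looks like /devices/pci0000:00/0000:00:01.0/0000:01:00.0/nvme/nvme0/nvme0n1
DEVPATH=$(udevadm info /dev/nvme0n1 | sed -n 's/^E: DEVPATH=//p')
# extensionPorts wants the element right after pci0000:00 (e.g. 0000:00:01.0)
PORT=$(echo "$DEVPATH" | cut -d/ -f4)

cat > /etc.defaults/extensionPorts <<EOF
[pci]
pci1="$PORT"
EOF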

Can you have a cache drive in DS918+?


On 1/4/2023 at 7:02 PM, rinseaid said:

Add a PCI device, choose your NVMe controller, and make sure to check the 'PCI Express' checkbox. This checkbox is only available in a Q35 machine, and at least in my case was the key to making this work. Here's what mine looks like - this is my NVMe drive:

 

[Screenshot: Proxmox PCI device settings for the passed-through NVMe controller]

 

 

Thanks so much for the Q35 and PCIe option hint. I was struggling with this for the last few hours, and now everything works perfectly!

