cferra Posted May 15, 2022 #26 Posted May 15, 2022

Quote:
Yeah, that's kind of what I have seen in the genuine Synology forums as well... I hate to say not worth it, but that has been said by others. As I recall, the only time the cache was of any benefit was if you were hosting a DB of some kind? I can't remember exactly. In an XPEnology build I say why not, go for it... but with my puny 1 gigabit LAN I am sure I would never even be able to tell a difference. Thanks for the quick reply.

Yeah - I was hoping for some more benefit too. I think QNAP is better in this regard, or maybe if using standard RAID as opposed to SHR we might see some more tangible benefits.

Sent from my iPhone using Tapatalk Quote
flyride Posted May 15, 2022 #27 Posted May 15, 2022

12 minutes ago, phone guy said:
Are you guys who are wanting and using SSD/NVMe cache running 10Gb LAN? I have a couple of real Synology boxes, and the consensus was that unless you are running >1Gb LAN, having the SSD/NVMe cache was completely worthless and actually increased the chance of data corruption. I am only running standard gigabit networking in my environment, so I never pursued getting or installing the NVMe SSD cache in any of my boxes. All of them do have the slots, but I was told I would see zero improvement in speed or any other advantage whatsoever. I guess if you are using 2.5Gb LAN or faster, that would be a reason to want the cache.

I have 10GbE LAN and an all-SSD array. Four SATA SSDs will max out the 10GbE link, so it makes no sense to use cache in that case. Additionally, all DSMs use whatever RAM is free to cache recently used files, so a heavy multi-user workload that keeps hitting a small dataset is already cached. Personally I think it's a bit of a gimmick/product differentiation. Not too much value if you ask me, other than to stress test the cache device. Quote
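A quick sanity check on that throughput claim, as shell arithmetic. The per-drive figure is a nominal SATA III spec-sheet number, assumed rather than measured:

# ~550 MB/s sequential per SATA III SSD; 10GbE = 10000 Mbit/s
echo $(( 4 * 550 ))    # aggregate array throughput: 2200 MB/s
echo $(( 10000 / 8 ))  # 10GbE line-rate ceiling:    1250 MB/s

The array alone already outruns the wire, so a cache tier has no network headroom left to fill.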
phone guy Posted May 15, 2022 #28 Posted May 15, 2022 I never understood why it was said having the NVMe cache could increase the potential for data corruption? Maybe it was using the wrong type of NVMe? I can't remember exactly. BTW: Not trying to rain on anyone's cache parade, I just wanted to ask the question for my own curiosity. Quote
cferra Posted May 15, 2022 #29 Posted May 15, 2022

4 minutes ago, flyride said:
I have 10GbE LAN and an all-SSD array. Four SATA SSDs will max out the 10GbE link, so it makes no sense to use cache in that case. Additionally, all DSMs use whatever RAM is free to cache recently used files, so a heavy multi-user workload that keeps hitting a small dataset is already cached. Personally I think it's a bit of a gimmick/product differentiation. Not too much value if you ask me, other than to stress test the cache device.

Yeah, my box has 64GB RAM and it's just me at home with a lot of spare time and Plex media to store/watch, so in my use case it's definitely marginal. But I got the 2x NVMe card cheap, the mobo supports bifurcation, and the MSI drives were extremely cheap, so it was worth the test. 1 Quote
cferra Posted May 15, 2022 #30 Posted May 15, 2022

Just now, phone guy said:
I never understood why it was said having the NVMe cache could increase the potential for data corruption? Maybe it was using the wrong type of NVMe? I can't remember exactly. BTW: Not trying to rain on anyone's cache parade, I just wanted to ask the question for my own curiosity.

If the cache drives are set up as a read/write cache and they fail, it will take down the entire array - writes acknowledged to the cache may not have reached the array yet. Synology warns about it in the UI, but I think people can override it. Even with the cache drives in a RAID, if enough of them go down it will still take the array down too. Quote
phone guy Posted May 15, 2022 #31 Posted May 15, 2022 5 minutes ago, cferra said: it’s just me at home with a lot of spare time and - plex media to store/watch so in my use case… Sounds familiar! Except there are a bunch of kids in my house, also watching Emby. 🤣 1 1 Quote
goodone007 Posted August 8, 2022 #32 Posted August 8, 2022 For me this file does not exist. Do I just create a new file? Also, is this path accessed as the NAS user or the root user? Please help. Quote
flyride Posted August 8, 2022 #33 Posted August 8, 2022 This doesn't work on DS918+, patch instead Quote
edinghi Posted August 18, 2022 #34 Posted August 18, 2022 Hi all, newbie question: how can I edit the extensionPorts file? Quote
kiwimonk Posted August 19, 2022 #35 Posted August 19, 2022 On 8/8/2022 at 4:45 AM, flyride said: This doesn't work on DS918+, patch instead What do you mean by patch instead? Is there a patch for NVMe support in the DS918+? Is it the same one as Jun's Loader? Quote
flyride Posted August 19, 2022 #36 Posted August 19, 2022 https://xpenology.com/forum/topic/13342-nvme-cache-support/ Quote
kiwimonk Posted August 19, 2022 #37 Posted August 19, 2022 2 hours ago, flyride said: https://xpenology.com/forum/topic/13342-nvme-cache-support/ Thanks for all the hard work you do 👍 I'm thinking I may have to build out a wiki or something. Piecing Xpenology together by sifting through forums is a royal pain. Quote
rinseaid Posted January 5, 2023 #38 Posted January 5, 2023

For any Proxmox VE users that may be struggling with direct passthrough of an NVMe PCI controller (not para-virtualized via QEMU) - which means real SMART values, etc. - I was able to accomplish this by doing the following:

1. Create a Q35 machine with OVMF firmware. Uncheck creation of an EFI disk (note there'll be a warning when booting the VM due to the missing EFI disk, but it doesn't matter).
2. Add a PCI device, choose your NVMe controller, and make sure to check the 'PCI Express' checkbox. This checkbox is only available in a Q35 machine, and at least in my case was the key to making this work. Here's what mine looks like - this is my NVMe drive:
3. Add whatever other devices (e.g. SATA controllers or virtual disks) to your VM. You can also add a serial console if you find it useful, but remember you'll have to choose the noVNC option to see the BIOS screen.
4. Download your bootloader of choice - TCRP or arpl (I tested both successfully) - and extract the .img file to somewhere on your PVE host. If using TCRP, make sure to grab the UEFI img file. I extracted to /var/lib/vz/template/iso/<vmid>-arpl.img so that it displays in the PVE GUI, but it really doesn't matter.
5. Add the following line to /etc/pve/qemu-server/<vmid>.conf, making sure to update the path to the TCRP/arpl .img file:

args: -device 'nec-usb-xhci,id=usb-bus0,multifunction=on' -drive 'file=<path-to-arpl-img-file>,media=disk,format=raw,if=none,id=drive-disk-bootloader' -device 'usb-storage,bus=usb-bus0.0,port=1,drive=drive-disk-bootloader,id=usb-disk-bootloader,bootindex=999,removable=on'

6. You can now boot the VM and walk through the TCRP/arpl configuration, then boot and install DSM as normal (make sure to select USB boot if using TCRP), and follow the steps earlier in this thread to update /etc.defaults/extensionPorts. I needed to reboot xpenology after updating the file in order for the NVMe drive to show up.

And here is the drive in DSM, with SMART data and everything! (yes, it is a little too warm..)
Here are screenshots/configurations of my VM:

VM conf file:

# cat /etc/pve/qemu-server/100.conf
agent: 1
args: -device 'nec-usb-xhci,id=usb-bus0,multifunction=on' -drive 'file=/var/lib/vz/template/iso/100-arpl.img,media=disk,format=raw,if=none,id=drive-disk-bootloader' -device 'usb-storage,bus=usb-bus0.0,port=1,drive=drive-disk-bootloader,id=usb-disk-bootloader,bootindex=999,removable=on'
bios: ovmf
boot: order=ide2
cores: 2
hostpci0: 0000:02:00.0,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=7.1.0,ctime=1672872380
name: testxpen
net0: virtio=CE:42:B4:E5:97:37,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-100-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=44f382c2-7279-44a9-b06e-af44c19d713d
sockets: 1
vmgenid: 89bdbe7c-9d16-4405-ad60-cab2cf605383

udevadm output from within xpenology:

# udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1c.0/0000:01:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=20332A42BE77
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=broadwellnk
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=916119

/etc.defaults/extensionPorts:

# cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:1c.0"

Note that the versions I used to test this method are:
Proxmox VE 7.3-3
TinyCore RedPill 0.9.3.0
arpl 1.0 beta 9
DSM 7.1.1

2 Quote
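One prerequisite worth spelling out for anyone following rinseaid's steps: PCI passthrough requires IOMMU support enabled on the Proxmox host. A minimal sketch, assuming an Intel CPU and a GRUB-booted host (AMD hosts would use amd_iommu=on instead):

# /etc/default/grub on the PVE host
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# apply and reboot
update-grub
reboot

# afterwards, confirm the NVMe controller is visible and note its address
lspci -nn | grep -i -e nvme -e 'Non-Volatile'

The 0000:02:00.0 in the hostpci0 line above is simply whatever address lspci reports on your own host.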
rinseaid Posted January 8, 2023 #39 Posted January 8, 2023 (edited)

A couple of other notes on this:

When using the q35 machine type, assigning more than 2 cores resulted in a kernel panic during boot. I tried using both kvm64 and host CPU types, but both resulted in the same error. Switching back to i440fx fixes this, but then the NVMe is no longer detected by DSM. This may be specific to my system's hardware - i7-8700 on a Q370 chipset.

A few corrections - I no longer believe q35 vs. i440fx impacts the kernel panics at boot. In my observations, this seems to occur only with a PCI device assigned, and over some threshold of CPU cores plus memory assigned to the VM - in my case, both 2 cores/8GB and 4 cores/4GB booted without panic, but any increase to either resource resulted in a kernel panic at boot. I believe I've found a solution: enabling direct boot in the arpl Advanced menu - I had stumbled across this thread: https://github.com/fbelavenuto/arpl-modules/issues/94 mentioning this fix. I now have the following configuration booting reliably:

Machine type: q35
BIOS: OVMF (I'm using this because with my specific PCI SATA controllers, SeaBIOS hangs on POST for >1 minute (Option ROM issue?))
Cores: 4
Memory: 8GB
Direct boot enabled in arpl

As an alternative to adding the 'args' line to the VM config to add a USB device for arpl, you can add a SATA device of the image:

qm set <vmid> --sata0 local-lvm:0,import-from=/path/to/arpl.img

Make sure to replace the placeholders (vmid, storage, image path) with specifics for your environment, and set the boot order in your VM config as well. arpl correctly detects the drive as a SATA DOM; I haven't tested with TCRP.

Edited January 8, 2023 by rinseaid Corrections 1 Quote
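If you take the --sata0 route, the boot-order half can be done from the CLI as well - a hedged example, assuming the image landed on sata0:

qm set <vmid> --boot order=sata0

This is equivalent to editing the boot: line in the VM conf file shown earlier in the thread.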
alirz1 Posted February 5, 2023 #40 Posted February 5, 2023

On 3/4/2022 at 3:41 PM, ryancc said:
So I migrated my xpenology server from the DS918+ model to DS3622xs, and the NVMe cache no longer works since the model number no longer exists in libsynonvme.so.1. I dug into libsynonvme.so.1 and found it might check your PCIe location to have the NVMe drive work properly. After inspecting the file, I found it just checks /etc.defaults/extensionPorts and we just need to modify that. Here are the steps:

1. Check your NVMe PCI location (in my case it's 0000:00:01.0):

udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/nvme/nvme0/nvme0n1

2. Modify /etc.defaults/extensionPorts to have the port number match your NVMe location:

cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:01.0"

3. I did not even restart, and the NVMe cache drive already appears. Hope this helps anyone who is looking to solve this.

Update: If you worry that a system update will revert this modification, just add a startup script run as root:

sed -i 's/03.2/[your_pci_last_three_digs]/g' /etc.defaults/extensionPorts

In this way, no matter what version you go to, this will always stay.

Can you have a cache drive in DS918+? Quote
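A more defensive take on ryancc's startup script: rather than hard-coding the last three digits for sed, derive the port from udevadm at boot. This is a sketch only - it assumes a single NVMe drive at /dev/nvme0n1 that hangs directly off a root port, as in ryancc's and rinseaid's outputs above; behind a PCIe switch (such as the PEX8M2E2 discussed later in the thread), the correct value is less obvious.

#!/bin/bash
# Re-apply the extensionPorts fix after DSM updates; run as root at boot.
# Assumes one NVMe drive at /dev/nvme0n1, attached directly to a root port.
PHYS=$(udevadm info /dev/nvme0n1 | sed -n 's/^E: PHYSDEVPATH=//p')
# e.g. /devices/pci0000:00/0000:00:01.0/0000:01:00.0 -> keep the parent port
PORT=$(basename "$(dirname "$PHYS")")
if [ -n "$PORT" ]; then
    printf '[pci]\npci1="%s"\n' "$PORT" > /etc.defaults/extensionPorts
fi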
aportnov Posted February 8, 2023 #41 Posted February 8, 2023 (edited) Hello everyone. Please help me: I'm trying to set up an NVMe SSD cache, but the system doesn't see the drive. What's wrong? Model DVA3219. Edited February 8, 2023 by aportnov Quote
jeffestewart Posted March 21, 2023 #42 Posted March 21, 2023 Thank you all for all of the information in this post. I have been able to confirm that my system sees the NVMe drive. My question is: how do I edit the extensionPorts file? Sorry for the NOOB question. I have looked everywhere and haven't found the process. Thank you. Quote
gokeeper Posted January 17, 2024 #43 Posted January 17, 2024

On 1/4/2023 at 7:02 PM, rinseaid said:
Add a PCI device, choose your NVMe controller, and make sure to check the 'PCI Express' checkbox. This checkbox is only available in a Q35 machine, and at least in my case was the key to making this work. Here's what mine looks like - this is my NVMe drive:

Thanks so much for the Q35 and PCIe option hint - I was struggling with this for the last few hours; now everything works perfectly! Quote
Mikael Juber Posted January 7 #44 Posted January 7

How about this...? Using a StarTech.com (PEX8M2E2) PCIe to dual NVMe card... both NVMe disks on the same card:

V1902@Synology:/$ udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:01.0/0000:07:00.0/0000:08:00.0/0000:09:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:07:00.0/0000:08:00.0/0000:09:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=1
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:01.0/0000:07:00.0/0000:08:00.0/0000:09:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S4EVNMFN938003H
E: SYNO_DEV_DISKPORTTYPE=INVALID
E: SYNO_INFO_PLATFORM_NAME=broadwellnk
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_SAS=no
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=92789

V1902@Synology:/$ udevadm info /dev/nvme1n1
P: /devices/pci0000:00/0000:00:01.0/0000:07:00.0/0000:08:08.0/0000:0b:00.0/nvme/nvme1/nvme1n1
N: nvme1n1
E: DEVNAME=/dev/nvme1n1
E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:07:00.0/0000:08:08.0/0000:0b:00.0/nvme/nvme1/nvme1n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:01.0/0000:07:00.0/0000:08:08.0/0000:0b:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S4EVNMFN938010Z
E: SYNO_DEV_DISKPORTTYPE=INVALID
E: SYNO_INFO_PLATFORM_NAME=broadwellnk
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_SAS=no
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=93037

V1902@Synology:/$ cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:01.0"
V1902@Synology:/$

Anyone can help...? Quote
cferra Posted January 7 #45 Posted January 7 (edited)

1 hour ago, Mikael Juber said:
How about this...? Using a StarTech.com (PEX8M2E2) PCIe to dual NVMe card... both NVMe disks on the same card [...] Anyone can help...?

You have to edit the file using vi:

sudo -s
(enter password)
vi /etc.defaults/extensionPorts

Edit it to the following:

[pci]
pci1="0000:09:00.0"
pci2="0000:0b:00.0"

Save / reboot.

Edited January 7 by cferra Quote
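For anyone not comfortable with vi, an equivalent non-interactive sketch - same file, same addresses cferra gave, and the same caveat from earlier in the thread that a DSM update may overwrite it:

sudo -s
printf '[pci]\npci1="0000:09:00.0"\npci2="0000:0b:00.0"\n' > /etc.defaults/extensionPorts
cat /etc.defaults/extensionPorts    # verify before rebooting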
Peter Suh Posted January 7 #46 Posted January 7

1 hour ago, Mikael Juber said:
How about this...? Using a StarTech.com (PEX8M2E2) PCIe to dual NVMe card... both NVMe disks on the same card [...] Anyone can help...?

The answer depends on whether you are using a genuine DS3622xs+ or the XPE bootloader. In the former case, you will need to edit /etc.defaults/extensionPorts directly as cferra suggested; in the latter case, the bootloader will automatically handle these settings to recognize the cache. Quote
Mikael Juber Posted January 8 #47 Posted January 8

@cferra thanks bro for the help, but after I edit it... save... reboot... the value goes back to the original:

[pci]
pci1="0000:00:01.0"

Can you help...?

@Peter Suh I'm using the XPE bootloader... RedPill Arc Loader on an HP MicroServer Gen10 Plus... StarTech.com (PEX8M2E2) PCIe to dual NVMe card... Thinking about using your newest loader 1.1.0.0, will arrange time to do it... but I'm new to Linux... is there a tutorial on how to do it...? Quote
Peter Suh Posted January 8 #48 Posted January 8

2 hours ago, Mikael Juber said:
@cferra thanks bro for the help, but after I edit it... save... reboot... the value goes back to the original [...]

If you decide to use my mshell, just try it out. It comes with an addon (disks, nvme-cache) that activates the cache. And, as you can see in the capture, the menus are simple enough that you don't need a guide. Just go through the menus in order. Quote
Mikael Juber Posted January 8 #49 Posted January 8 @Peter Suh I get "not supported loader bus type, program exit!!!" Quote
Peter Suh Posted January 8 #50 Posted January 8 2 hours ago, Mikael Juber said: @Peter Suh not supported loader bus type, program exit!!! What type of media did you record the img file to? A USB stick? Do you have any additional external storage on your GEN10? Quote