advin

NVMe cache support


Hi,

 

I have a H370 AORUS GAMING 3 WIFI (rev. 1.0) (H370 chipset) and I am trying to use a Samsung 970 EVO 250GB NVMe PCIe M.2 2280 SSD (MZ-V7E250BW) in the M.2 slot (x4), but my system does not recognize it (no nvme device in /dev). Any advice on how to make it work?

 

Loader  Jun 1.04b DS918+

Intel Core i3-8100t

GIGA-BYTE  H370 AORUS GAMING 3 WIFI

Corsair Vengeance 8GB DDR4 LPX 2400MHz

 


Unsure.  The DS918+ image (from Synology) is currently the only PAT file supported on XPenology that has the Synology utilities to configure the cache.

 

However, I was able to see /dev/nvme0n1 and use basic Linux commands to check an NVMe drive on the 3615/3617/916 PAT files.  I just never could get it to work without the DSM tools.

https://xpenology.com/forum/topic/6235-setup-unsupported-nvme-cache/?tab=comments#comment-54018
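For anyone wanting to repeat that check, a few generic Linux commands are enough. This is a sketch, not XPEnology-specific; `nvme list` assumes the nvme-cli tool is present:

```shell
# Basic checks for whether the kernel has found an NVMe drive.
# Nothing here modifies the disk; device names are examples.
if ls /dev/nvme* >/dev/null 2>&1; then
    ls -l /dev/nvme*               # e.g. /dev/nvme0, /dev/nvme0n1
    grep nvme /proc/partitions     # partitions the kernel knows about
    dmesg | grep -i nvme           # driver initialization messages
    nvme list                      # needs the nvme-cli tool
else
    echo "no NVMe devices visible to the kernel"
fi
```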

 

Does nothing come back if you execute "nvme list"?


Executing "nvme list":

 

root@DSM:~# nvme list
Node             SN          Model                      Namespace Usage                  Format       FW Rev
---------------- ----------- -------------------------- --------- ---------------------- ------------ --------
/dev/nvme0n1     seddddddd   Samsung SSD 970 EVO 250GB  1         14.35 MB / 250.06 GB   512 B + 0 B  1B2QEXE7
/dev/nvme0n1p1   seddddddd   Samsung SSD 970 EVO 250GB  1         14.35 MB / 250.06 GB   512 B + 0 B  1B2QEXE7

 

 

 

I was able to compile and load the NVMe module for the 1.04b loader, so I can see and use my NVMe SSD, but Synology doesn't.  The NVMe SSD is not listed at all.
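The module-loading step presumably looks something like this (a sketch; the module path is an assumption and depends on where your build landed):

```shell
# Sketch: manually load a self-compiled NVMe driver on the DSM console.
# The module path is an example - adjust to wherever your build put nvme.ko.
if [ -f /lib/modules/nvme.ko ]; then
    insmod /lib/modules/nvme.ko    # load the driver into the running kernel
    lsmod | grep nvme              # confirm it is loaded
    ls /dev/nvme*                  # device nodes should now appear
else
    echo "nvme.ko not found - adjust the path to your build"
fi
```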

 

Any ideas how to get Synology to load it?


Hi there

 

I just built a new server based on an H370 board with two M.2 drives that I would like to use as cache drives. I am not very proficient in Linux, but I can follow directions. I'm very interested in how this pans out.

 

Thanx

Craig

 

 


hi

I am using Loader Jun 1.04b DS918+, which should support the Samsung SM961 M.2 (22x80) PCIe 3.0 (x4) NVMe SSD, but ...

 

I was able to compile and load the NVMe module for my Samsung 970 EVO on the 1.04b loader, so I can see and use my NVMe SSD, but Synology doesn't.

 

Also, if I run for example "syno_hdd_util --ssd_detect", the SSD is not listed at all.


all the NVMe drives that are officially supported by Synology
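One way to narrow this down is to compare what the kernel sees with what DSM's own tooling reports. A sketch, assuming you are on the DSM console as root (`syno_hdd_util` is DSM-specific):

```shell
# Compare the kernel's view of the drive with DSM's view.
ls /dev/nvme* 2>/dev/null || echo "kernel: no NVMe device nodes"
if command -v syno_hdd_util >/dev/null 2>&1; then
    syno_hdd_util --ssd_detect     # DSM's view: only SSDs usable in disk groups
else
    echo "syno_hdd_util not found - run this on the DSM box"
fi
```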


It's been a while since I tested this with baremetal DSM, but I don't think you can see NVMe drives at all until you go into "add cache to volume," where they will be listed.

 

syno_hdd_util --ssd_detect will only return SSDs that can be used for disk groups - i.e. SATA SSDs.

 


Hi,

 

I'm trying DSM 6.2.x with the 1.04b bootloader on a baremetal Supermicro X10SDV-4C-TLN2F which has an M.2 NVMe 960 EVO. I am able to see the disk when logging in with SSH, and I am able to create a volume using this tutorial:

 

https://www.reddit.com/r/synology/comments/a7o44l/guide_use_nvme_ssd_as_storage_volume_instead_of


I am even able to see it in mdadm by running:

mdadm --assemble --scan

 

Unfortunately the volume is not visible in DSM, and if I reboot it disappears. Does anybody have a clue how DSM initializes volumes at boot? Where could I add some config to initialize it automatically at boot?

P.S. I tried searching for an mdadm.conf file or anything similar, but I can't find anything useful...
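For context, the linked guide boils down to roughly the following steps. This is a sketch; treat the exact device names and flags as assumptions, and note that it destroys data on the target drive:

```shell
# Sketch of the linked Reddit guide - DESTRUCTIVE to the target drive.
# Device names (/dev/nvme0n1, /dev/md3) are examples; run on DSM as root.
if [ -e /dev/nvme0n1 ] && command -v synopartition >/dev/null 2>&1; then
    synopartition --part /dev/nvme0n1 12     # lay down the Synology partition scheme
    mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/nvme0n1p3
    mkfs.btrfs -f /dev/md3                   # or mkfs.ext4, to match your other volumes
else
    echo "not on a DSM box with an NVMe drive - nothing done"
fi
```

As noted above, even when this works, the resulting volume is not shown in the DSM UI and does not survive a reboot.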


After my earlier posts on this thread, I attempted to hack the NVMe vs. SATA logic present in DSM and was not successful.  The device configuration is set up by a completely customized udev which deliberately separates NVMe from SATA, and all their tools and utilities work quite hard to enforce this.

 

Similar to the reddit poster, I was able to manually create arrays with NVMe members, but any change to the array using Syno tools (as I recall, even a resync) caused it to break.

 

The only solution I have found to reliably use NVMe drives is to use ESXi, set up physical RDM pointers to the NVMe devices and present them to DSM with a virtual controller.  They appear to DSM as SAS/SATA, but the hardware interface is still NVMe with ESXi providing translation services in between.  Performance is essentially unaffected (RAID 1 of enterprise-class NVMe drives configured in this way resulted in well over 1 gigabyte per second throughput).

Edited by flyride


OK, I understand. Yes, I already tried ESXi and was fairly successful, but I wanted to avoid using a hypervisor, since DSM offers Docker image management and virtualization. I wanted to use the single NVMe drive as fast storage for applications running there, but if it's so complex I guess I will go back to ESXi.

 

Thanks for the heads up.


What you describe is exactly what I personally run: DSM wrapped by ESXi only.  Docker on DSM is very useful because it gets direct, fast I/O to the NVMe drives.


Wait a minute... Doesn't NVMe cache work with XPEnology?

I just put a Samsung 970 EVO in the M.2 slot to use as a read cache. But it is not detected in DSM.

I have the latest loader, the latest DSM version and 918+.

 

I thought this would work all along... because it does work on a real 918+, right?

I also put in an SSD that I wanted to use as a single volume. However, that one pops up when I want to create a cache drive. I thought you couldn't create cache drives with normal SATA SSDs?

Edited by Jamzor

On 29.01.2019 at 01:12, Ivan GO said:

Where could I add some config to automatically initialize it at boot?

Hidden text: (screenshots showing where you can add scripts or commands)

 

One idea: can you try creating a folder on your NVMe volume and mounting it, for example, to /volume1/Video/test_folder?
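Assuming the manually created volume is mounted somewhere like /mnt/nvme (a hypothetical path), that suggestion would look like:

```shell
# Hypothetical: expose a folder from the manually created NVMe volume
# inside an existing DSM share via a bind mount (paths are examples).
if [ -d /mnt/nvme ] && [ -d /volume1/Video ]; then
    mkdir -p /mnt/nvme/test_folder /volume1/Video/test_folder
    mount --bind /mnt/nvme/test_folder /volume1/Video/test_folder
else
    echo "example paths not present - adjust to your system"
fi
```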

13 hours ago, Olegin said:
  You can add scripts or commands here... (hidden screenshots)

 

One idea: can you try creating a folder on your NVMe volume and mounting it, for example, to /volume1/Video/test_folder?

 

If any of you experts know, please comment for me... I installed my 500GB Samsung 970 EVO in my motherboard thinking it would work.

It's such a pain, because I had to unscrew my entire m-ITX box and the motherboard to get to the slot underneath the motherboard. And now I've put the M.2 disk there...

Now it won't work as a cache drive? Can I do anything with it? Or do I have to either leave it there and hope for future compatibility, or unscrew everything again, remove it and sell it on the second-hand market?

I'm running barebone 918+, latest loader, latest firmware updates from Synology.

 

Thank you!

 

 

@flyride

13 hours ago, Olegin said:
  You can add scripts or commands here... (hidden screenshots)

 

One idea: can you try creating a folder on your NVMe volume and mounting it, for example, to /volume1/Video/test_folder?

 

I tried that already: I created the md RAID and mounted it, but I could not see it in the web UI as a volume. Anyway, now I have switched back to ESXi, so I cannot try that anymore.

On 1/29/2019 at 1:29 PM, Jamzor said:

Wait a minute... Doesn't NVMe cache work with XPEnology?

I just put a Samsung 970 EVO in the M.2 slot to use as a read cache. But it is not detected in DSM.

I thought this would work all along... because it does work on a real 918+, right?

I also put in an SSD that I wanted to use as a single volume. However, that one pops up when I want to create a cache drive. I thought you couldn't create cache drives with normal SATA SSDs?

 

  • DS918 DSM has the software hooks to use NVMe as cache.  DS3615/17 does not.
  • NVMe is NOT SUPPORTED by DSM as a regular disk device - therefore you cannot use it in a Storage Pool on a barebones install
  • Your hardware must also support NVMe and DSM must recognize your SSD. If DSM can use an NVMe drive, it will only show it when you are ready to add cache - it does not (ever) appear in the disk list in Disk Manager (see https://www.youtube.com/watch?v=oN5poqyuN54)
  • A regular SATA SSD can definitely be used as cache, and it will show up as a device in Disk Manager and when creating a cache

 

2 hours ago, Jamzor said:

 

If any of you experts know, please comment for me... I installed my 500GB Samsung 970 EVO in my motherboard thinking it would work.

It's such a pain, because I had to unscrew my entire m-ITX box and the motherboard to get to the slot underneath the motherboard. And now I've put the M.2 disk there...

Now it won't work as a cache drive? Can I do anything with it? Or do I have to either leave it there and hope for future compatibility, or unscrew everything again, remove it and sell it on the second-hand market?

I'm running barebone 918+, latest loader, latest firmware updates from Synology.

 

Restating the above in another way:  Running a barebone DS918 image, you might be able to use NVMe cache. I don't think anyone has tracked which motherboards and NVMe SSDs work and which don't. 

 

You can definitely use your NVMe drive if you run DSM as an ESXi virtual machine.  In that circumstance, you have several options:

  • Configure your NVMe as an ESXi datastore, and build a large virtual disk out of the free space.  Attach that to a virtual SATA controller.  The device will appear to DSM as a SATA disk but at NVMe speed.  The virtual disk can be used as a regular disk (you can create a Storage Pool with it) or as cache.  There is some minor overhead for ESXi to fully virtualize the disk, and you lose a little bit of storage because of the VMFS filesystem and whatever else you might be keeping on the datastore.
  • Configure your NVMe with a physical Raw Device Mapping (physical RDM, or pRDM).  This is a pointer to the NVMe drive that you can then attach to the virtual SATA controller.  ESXi will then translate SATA/SCSI to NVMe but the disk in its entirety appears to DSM.  Again, this will work as a regular disk or cache, and is theoretically faster than a virtual disk.
  • The NVMe SSD could be passed through directly to the VM for native access.  This wouldn't be any better than baremetal, but since we don't understand exactly why it may or may not work, it could be worth a try.  However, it would be limited to NVMe cache only just like baremetal.

My signature links to a tutorial that talks about how to set up the first two options above.
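For the second option, creating the pRDM pointer is essentially a one-liner in the ESXi shell (the device identifier and datastore path below are made-up examples; list /vmfs/devices/disks/ to find yours):

```shell
# ESXi shell: create a physical RDM pointer file for an NVMe disk.
# The device identifier and datastore path are made-up examples.
if command -v vmkfstools >/dev/null 2>&1; then
    ls /vmfs/devices/disks/ | grep -i nvme   # find your device identifier
    vmkfstools -z /vmfs/devices/disks/t10.NVMe____Samsung_SSD_970_EVO_250GB \
        /vmfs/volumes/datastore1/xpenology/nvme-rdm.vmdk   # -z = physical RDM
else
    echo "vmkfstools not found - run this in the ESXi shell"
fi
```

The resulting .vmdk pointer is then attached to the VM's virtual SATA controller like any other disk.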

8 hours ago, flyride said:

 


 

 

Thank you mate!

I'm not that knowledgeable about these things, so I'm not 100% sure what it means to run ESXi, tbh. Maybe I can find out more about this option if the barebone setup doesn't work.

But about this barebone setup: the BIOS detects that I have the NVMe drive connected to the motherboard. But when I go into DSM, go to the cache drive section and click "Add", it only shows me the regular SATA SSD and not the NVMe SSD. This is my problem at the moment, as my plan was to use the NVMe disk as cache and the SATA SSD as a storage disk.

5 hours ago, bearcat said:

@Jamzor Your 970 EVO is on the "compatible list" for the original 918+ so there might be hope...

It might be a driver issue related to your "unknown" mainboard?

Hi.

It is compatible for sure, as I've read about many people on Reddit who use it.

I don't know about a driver issue... I'm pretty sure I have the latest BIOS for my motherboard: MSI B250I Pro.

What else could I do? It shows up in the BIOS, so it's recognized by the motherboard at least. Beyond that I have no idea what to do...?

On 1/30/2019 at 11:15 PM, flyride said:

 


 

I have now tried everything I can, I believe, on baremetal. My motherboard (MSI B250I Pro) has the latest BIOS. The NVMe drive is recognized in the BIOS but nowhere to be seen in DSM, not even when trying to add cache drives.

So I believe my only option is to use ESXi to get the M.2 disk in use.
I have no knowledge of ESXi myself; right now I just run baremetal on my own-built hardware. How long is the learning curve to get started with ESXi, and what exactly will the difference be compared to running baremetal? Will I notice any difference at all once it's set up? Are there any limitations or disadvantages compared to baremetal?

Will my storage pool carry over, or do I need to start everything from scratch?

Thank you!


Do you have another PC you can use to experiment with ESXi first?  Yes, there is a learning curve, but it's not awful.  It would be wise to do some practice installations using both ESXi and XPEnology on ESXi before trying to convert your main hardware and storage.  And, as always, when you do your "real" installation, your data should be backed up somewhere else in case of catastrophe.

 

Performance-wise, any difference between ESXi and baremetal is negligible. You will be running a hypervisor in addition to XPEnology, but you can give the entire machine (RAM and CPU) to the XPEnology VM. Booting from power-up will take a little bit longer, as ESXi has to boot first, then launch the XPE VM, then DSM must boot.

 

Once you have an ESXi system in place, it's easy to create another test XPEnology system for new code testing, etc, without disturbing your "production" system.  Once you have this option available to you, you'll wonder how you did without it.

 

Some points to consider:

  • Getting ESXi can be a little confusing; VMware has a lot of different products.  You only need to download the ESXi 6.7 installer, then create an account to obtain a free license key.  You will also want to patch to the latest bugfix/security code before putting up a production environment.  You don't need vSphere, enterprise licensing, etc.
  • ESXi needs its own OS/boot environment.  This is usually installed to a USB key and is largely read-only.
  • XPEnology on ESXi won't need the USB key anymore; it boots from a virtual disk containing the same data as would be written to the USB key.

  • ESXi has its own filesystems, called "datastores."  You can create virtual storage from datastores and offer that storage as disks to VM operating systems like DSM, or you can "pass through" controllers with drives attached to them, or "RDM" individual drives to attach exclusively to a specific VM (ESXi won't touch the disk or create a filesystem on it).  Datastores can be shared among multiple VMs; passthrough/RDM devices cannot.

  • Controller passthrough requires VT-d support to be enabled in the BIOS (this isn't something you would normally care about).

  • ESXi also requires at least one disk-based datastore, identified as "scratch," for temporary files, etc.  It cannot be on the boot device (one of the reasons a USB key is used).
  • If you don't have other datastores (i.e. you pass through all your other physical drives), both your VM definition and the bootloader can live on the scratch datastore.
  • If you use your NVMe disk as scratch, you can't pass it through to XPEnology.  But you could create a virtual disk on it with most of the available space and use that for cache or a small volume.  This can also share the scratch datastore (i.e. use your NVMe disk for scratch, the VM definition, the bootloader and a virtual disk for DSM cache).
  • You can retain your current Storage Pool intact, but you must correctly pass through the controller or RDM the drives to your XPEnology VM.  This will result in a "migration" installation, as long as you don't inadvertently overwrite your drives.
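To sanity-check the passthrough path, you can list the PCI devices the ESXi host sees from its shell (standard esxcli; the exact output varies by host):

```shell
# ESXi shell: check that the host sees the NVMe controller as a PCI device
# (controller passthrough additionally needs VT-d enabled in the BIOS).
if command -v esxcli >/dev/null 2>&1; then
    esxcli hardware pci list | grep -i -B 2 -A 4 nvme
else
    echo "esxcli not found - run this on the ESXi host"
fi
```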
Edited by flyride


Actually it's not exactly the same: if you want to pass through the controller, you have to preallocate RAM, and you cannot preallocate the full amount. I was able to give the DSM VM just 11 GB of the 16 available while passing through the HBA.


You are correct, passthrough will limit the amount of RAM.  RAM can remain dynamic if you use RDM, however, which is equally functional.

