pcdtox02

Is it possible to use a Silverstone PCIe expansion card with an NVMe drive as cache?


I bought this card and a 960 EVO NVMe to use as cache. I'm not finding any info on compatibility or on NVMe caching via expansion cards... Can anyone point me in the right direction?


NVMe is a completely new way of attaching a device and it's not supported in the models we can use for XPEnology. I did a driver for NVMe in my extra.lzma, but using it as cache for DSM needs much more than that, so it's not supported. You can read here and experiment if you want to push things further.

Edited by IG-88

3 hours ago, IG-88 said:

NVMe is a completely new way of attaching a device and it's not supported in the models we can use for XPEnology,

That's disappointing :/ Thanks for the help though.

This is probably a silly question, but could running XPEnology via VMware bypass this? Since the 3.0 kernel recognizes NVMe, perhaps XPEnology will just see it as a drive if loaded this way?

3 hours ago, pcdtox02 said:

This is probably a silly question, but could running XPEnology via VMware bypass this? Since the 3.0 kernel recognizes NVMe, perhaps XPEnology will just see it as a drive if loaded this way?

 

you could format the NVMe as VMFS and use it as a fast virtual disk

in theory you can mark a virtual disk as SSD to make it look like an SSD in the VM, don't know how DSM reacts to this
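For ESXi specifically, the "mark a virtual disk as SSD" trick is a one-line VM configuration option. A sketch only; `scsi0:0` is a placeholder for whichever disk node the virtual disk is actually attached to in your VM:

```ini
# Hypothetical .vmx fragment: present the virtual disk at scsi0:0
# to the guest OS as an SSD. Adjust the disk node to match your VM.
scsi0:0.virtualSSD = "1"
```

This only changes what the guest sees; whether DSM then accepts the disk for cache is a separate question.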

also you can read this

 

3 hours ago, IG-88 said:

 

you could format the NVMe as VMFS and use it as a fast virtual disk

in theory you can mark a virtual disk as SSD to make it look like an SSD in the VM,

 

 

I can't find much info on the VMFS format; I'll have to keep hunting.

 

I'm also trying a basic SATA SSD (850 EVO) as cache until I find a way to use the NVMe. But I can't even get it to show up as an SSD. It just shows up as an HDD and the Cache Advisor is greyed out... any thoughts?

3 minutes ago, pcdtox02 said:

 

I can't find much info on the VMFS format; I'll have to keep hunting.

 

that's the file format VMware (ESXi) uses when formatting a data partition (multiple VMs and the host can access it at the same time)

 

 

Quote

I'm also trying a basic SATA SSD (850 EVO) as cache until I find a way to use the NVMe. But I can't even get it to show up as an SSD. It just shows up as an HDD and the Cache Advisor is greyed out... any thoughts?

 

that's documented in Synology's FAQ; afaik it needs two drives (will run in RAID1, I guess) for using it as cache

 

 

15 minutes ago, IG-88 said:

 

 

that's documented in Synology's FAQ; afaik it needs two drives (will run in RAID1, I guess) for using it as cache

 

 

Right, I can RAID together my NVMe and SSD as a volume... I'm just saying neither shows up as an SSD, either before or after. Both just show up as HDDs. I must be missing a step somewhere in VMware...


no, when used as cache I guess the RAID part will be done under the hood by DSM; you just have to select the drive(s)

the KB mentions that only specific drives can be used

https://www.synology.com/en-global/knowledgebase/DSM/help/DSM/StorageManager/genericssdcache

 

I know there is a text file as a database of SSDs (support_ssd.db) and I'm pretty sure a virtual SSD created with VMware will not be in there, so selecting virtual disks marked as SSD in an XPEnology VM might not work, or might only work when the disk is in that db

you might search the forum or the internet for further info about that SSD db
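Independent of the db file, you can check how the kernel itself classifies each disk. This is standard Linux sysfs, not anything DSM-specific, so it should also work from an SSH shell on DSM (a sketch; device names will vary per system):

```shell
#!/bin/sh
# Print each block device and how the kernel classifies it:
# rotational=1 means spinning HDD, rotational=0 means SSD.
for dev in /sys/block/*/queue/rotational; do
    # path is /sys/block/<disk>/queue/rotational, so strip two levels
    disk=$(basename "$(dirname "$(dirname "$dev")")")
    printf '%s: rotational=%s\n' "$disk" "$(cat "$dev")"
done
```

If a drive shows rotational=0 here but DSM still lists it as HDD, the problem is in DSM's detection layer rather than the kernel.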

1 hour ago, IG-88 said:

 

The model and SSD I'm using are on the list, and I'm not using virtual disks; I'm allowing VMware access to physical disks. There is so little info out there (and the search function is awful)

 

I'll keep hunting, but I'm starting to think it's not even possible to see an SSD through VMware on XPEnology. I appreciate the help, though! :)


not sure what you're referring to; VMware, or rather ESXi, is the OS that has access to the controller through built-in drivers, and through that to the disk(s)

you can't give "physical access" to a disk to a VM. The only options (afaik) are to RDM the disk, which is not direct access (it's just a sector mapping to the disk, not the same as accessing it through the controller, and I guess it will look different to the VM than a real disk), or to pass the controller through to the VM, which gives the VM full control with its own driver and full access to the disk. You don't get things like SMART with RDM inside the VM, but you do with controller passthrough.
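For anyone who wants to try the RDM route anyway, on ESXi an RDM mapping file is created with `vmkfstools`. A sketch only; the device identifier and datastore path are placeholders:

```shell
# List local devices first to find the real identifier:
ls /vmfs/devices/disks/

# Hypothetical example: create a physical-compatibility RDM mapping
# (-z = physical compatibility; use -r for virtual compatibility)
vmkfstools -z /vmfs/devices/disks/t10.EXAMPLE__DISK \
    /vmfs/volumes/datastore1/xpenology/xpenology_rdm.vmdk
```

The resulting .vmdk is then attached to the VM like any other disk; virtual-compatibility RDMs can be snapshotted, while physical compatibility passes more SCSI commands through to the device.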

 

14 hours ago, IG-88 said:

 

you can't give "physical access" to a disk to a VM,

 

Yeah, the option says "physical access" but I see what you mean; it's not truly direct access. I gave up on doing it through a VM and just used a USB loader, and the SSD shows up no problem. It's too bad about the NVMe though. Hopefully XPEnology can get that working sometime in the future. As far as I know, there is only one Synology model (DS918+) that has NVMe slots built in anyway, so I imagine it will be a while :/

 

Thanks again for the help. 

 

  

Edited by pcdtox02

On 3/25/2018 at 1:06 AM, IG-88 said:

in theory you can mark a virtual disk as SSD to make it look like an SSD in the VM, don't know how DSM reacts to this

also you can read this

By default, if ESXi is virtualizing any type of SSD, it will present it to the guest as an SSD. This can be overridden (SSD->HDD or HDD->SSD) as needed, and for some drives ESXi may not be able to determine SSD status.
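The override is normally applied with a storage claim rule from the ESXi shell. A sketch only; the device identifier below is a placeholder to be replaced with the real name from `esxcli storage core device list`:

```shell
# Hypothetical device name -- substitute your own identifier.
# Tag the local device as SSD via a SATP claim rule:
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=t10.EXAMPLE__DISK --option="enable_ssd"

# Reclaim the device so the new rule takes effect:
esxcli storage core claiming reclaim --device=t10.EXAMPLE__DISK
```

After the reclaim, the device should report `Is SSD: true` in the device list.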

 

From the ESXi console:

[root@esxi:/] esxcli storage core device list (results are trimmed for clarity)
t10.NVMe____INTEL_SSDPE2MX020T4_CVPD6114003E2P0TGN__00000001
   Display Name: Local NVMe Disk (t10.NVMe____INTEL_SSDPE2MX020T4_CVPD6114003E2P0TGN__00000001)
   Size: 1907729
   Device Type: Direct-Access
   Vendor: NVMe
   Model: INTEL SSDPE2MX02
   Is SSD: true

 

The above drive has a vmdk configured on it, and is presented to my Synology VM as /dev/sda:

synology:/run/synostorage/disks/sda$ cat vendor model type
VMware  Virtual disk            SSD

 

On 3/25/2018 at 4:58 AM, pcdtox02 said:

Right, I can RAID together my NVMe and SSD as a volume... I'm just saying neither shows up as an SSD, either before or after. Both just show up as HDDs.

I don't think there is any supportable way to get an NVMe drive to work natively as a volume within current versions of DSM, only through virtualization via ESXi.

 

On 3/25/2018 at 4:25 AM, pcdtox02 said:

I'm also trying a basic SATA SSD (850 EVO) as cache until I find a way to use the NVMe. But I can't even get it to show up as an SSD. It just shows up as an HDD and the Cache Advisor is greyed out... any thoughts?

This doesn't make sense to me. You state that you are allowing ESXi access to physical disks. If you used a full passthru of your SATA controller, DSM should have full hardware access to the SATA drive and recognize it as an SSD. I don't know exactly how RDM would behave, though. A vmdk shows up per the above. Passthru of your NVMe drive won't work at all. Maybe you are mixing up results from your SATA and NVMe drives?

 

On 3/24/2018 at 2:54 PM, pcdtox02 said:

I bought this card and a 960 EVO NVMe to use as cache. I'm not finding any info on compatibility or on NVMe caching via expansion cards... Can anyone point me in the right direction?

I never tried to run NVMe as a cache drive, since what I was trying to accelerate was an entire volume, and I think the DSM implementation of SSD cache is a poor value for the cost of an NVMe SSD. But I took some scratch space on an NVMe drive (in this case a Samsung PM961, which is the closest thing I have available to OP's 960 EVO) and created another virtualized SCSI drive as a test:

 

xp1.jpg

 

And here is how it shows up in DSM (as disk #9, also note SATA SSD on passthru controller as disk #10)

xp4.jpg

 

And how both are recognized for SSD cache:

xp5.jpg

 

Also understand that any NVMe "controller" is nothing but a mechanism to map PCIe lanes to the NVMe drive. The driver in ESXi, Windows and Linux is standardized and works with all drives (although some manufacturers like Intel have their own to support extended features). So OP's original request to use NVMe for cache should be possible using ESXi.

 


thanks, that was my point (but I didn't have ESXi at hand to try it; it was just from memory, as I use ESXi/VMware at work and I won't do anything DSM-related at work)

vmfs datastore -> virtual disk marked as ssd -> ssd in dsm vm

so it does work that way without any tinkering with the ssd db file in DSM; it's recognised as an SSD and can be used both ways, as a data volume or as cache

 

 

 


I have never edited the ssd db to have SATA SSD recognized on Synology or XPEnology.  I don't think that the ssd db even matches the current compatibility list?  Honestly, I really don't know what the db is for.  For what it's worth, every Intel, Samsung and VM SSD I've tried has been recognized as SSD.  Obviously there are a lot of other SSD products on the market.

 

This is in /etc/rc:

if [ "$PLATFORM" != "kvmx64" -a -f /usr/syno/bin/syno_hdd_util ]; then
        syno_hdd_util --ssd_detect --without-id-log 2>/dev/null
fi

 

I can only guess that syno_hdd_util evaluates SSD status and updates something; otherwise why would Synology run it there. However, I can't find a reference to syno_hdd_util in /lib/udev, and a hotplug event somehow has to determine HDD/SSD status as well.

9 hours ago, flyride said:

This doesn't make sense to me. You state that you are allowing ESXi access to physical disks. If you used a full passthru of your SATA controller, DSM should have full hardware access to the SATA drive and recognize it as an SSD. I don't know exactly how RDM would behave, though. A vmdk shows up per the above. Passthru of your NVMe drive won't work at all. Maybe you are mixing up results from your SATA and NVMe drives?

Everything I mentioned was via VMware Workstation Pro (on Windows 10). Giving physical access to the drives made both the NVMe and the SSD show up as HDDs in the Synology OS. I experimented with treating both drives as SATA, NVMe, SCSI... the choice didn't matter. And I always chose "Advanced: Physical Access." I also tried each drive individually (literally removing the other from the machine) just to test whether the PCIe card had been causing the issue. No dice.

 

That said, booting the OS from USB works great and there are no issues with the SSD. But (as expected) the NVMe doesn't show up at all.

 

I've never tried ESXi and I'm honestly a little confused why you would want to run a VM off of bare metal... wouldn't it be better to run DSM on the machine and run VMs through DSM? I'll have to look into that more. Although... now that I type that... if ESXi recognized the PCIe card, it would be interesting to see if the NVMe works to its full potential (even if DSM labels it as an SSD)

 

 

Edited by pcdtox02


Ah, I didn't see that you were running VMware Workstation. I don't have any experience with that, as all my knowledge comes from ESXi. I'm guessing that the "physical hardware" option of VMware Workstation isn't a true passthrough, so XPEnology doesn't have unfettered access to the hardware. Maybe someone else who knows the product can comment.

 

Most of us build up a server expressly to run XPEnology. I started with DSM on a baremetal server, but switched to ESXi when I couldn't get NVMe functional. As far as running DSM in a VM (with other VMs side-by-side) versus VMs within DSM... the VMM in DSM has limitations that ESXi does not; for example, it only supports specific OSes and versions. ESXi is a little bit finicky about hardware, but then so is XPEnology.

 

Regarding NVMe performance under ESXi, I think IG-88 already quoted this post:

 

