XPEnology Community

Enabling SSD Cache Crashes System


bigsid


Hi,

 

I'm new to XPEnology and recently purchased hardware to build my own NAS. Everything seems to be OK, except when I try to enable SSD Cache the system crashes.

 

I bought two Plextor PX-128M5S 128GB SSDs, as they are listed as compatible for SSD cache on the Synology website (see http://puu.sh/9Bk6b/373666d309.png). The SSDs are detected correctly when I run syno_hdd_util --ssd_detect:

 

DiskStation> syno_hdd_util --ssd_detect
Model                Firmware     SN                   Dev        is SSD?
WD1002FAEX-00Y9A0    05.01D05     WD-WCAW30439621      /dev/sdg   no
ST3000DM001-1CH166   CC46         Z1F3JALX             /dev/sdf   no
PX-128M5S            1.05         P02411111937         /dev/sde   yes
PX-128M5S            1.05         P02411111938         /dev/sdd   yes
WD30EFRX-68EUZN0     80.00A80     WD-WMC4N1680036      /dev/sdc   no
WD30EFRX-68EUZN0     80.00A80     WD-WMC4N2035566      /dev/sdb   no
WD30EFRX-68EUZN0     80.00A80     WD-WMC4N2034714      /dev/sda   no
If this is not right, please kindly report this to us
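
For what it's worth, the kernel's own view can be cross-checked against DSM's detection via sysfs. A minimal sketch (the device names sdd/sde are taken from the output above; they may differ on another box):

```shell
# Cross-check DSM's SSD detection against the kernel's rotational flag.
# Device names sdd/sde come from the syno_hdd_util output above.
for dev in sdd sde; do
    flag="/sys/block/$dev/queue/rotational"
    if [ -r "$flag" ]; then
        # 0 = non-rotational (SSD), 1 = rotational (spinning disk)
        echo "$dev rotational=$(cat "$flag")"
    else
        echo "$dev: no such device on this system"
    fi
done
```

If the kernel reports rotational=0 for both drives, detection itself is not the problem.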

 

When I try to create an SSD cache, it lists the two disks but says they are not compatible: (http://puu.sh/9Bkf0/2b400d1da0.jpg).

 

If I go ahead and create the SSD cache anyway, it shows the following screen (http://puu.sh/9BksQ/6f89ea423b.png) and then the system crashes. It stops responding, SSH sessions time out, and the console fills with errors I don't understand, ending with [ end trace bff70cf8fdd00e94 ]. I can't scroll back up on the console to see anything more meaningful in the error.
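
Since the console scrollback is lost after the crash, one way to capture the full trace is netconsole, which streams kernel messages to another machine over UDP before the box dies. A sketch, assuming the DSM kernel ships the netconsole module; eth0 and 192.168.1.50 are placeholders for your own interface and receiving host:

```shell
# On the DiskStation: send kernel messages to a second host (placeholder IP).
modprobe netconsole netconsole=@/eth0,@192.168.1.50/

# On the receiving Linux host: listen on netconsole's default UDP port 6666
# and keep a copy of everything, including the trace leading up to the crash.
nc -u -l 6666 | tee ssd-cache-crash.log
```

That would at least preserve the lines above "[ end trace ... ]" for a bug report.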

 

System Information is as follows:

 

Model Name - DS3612xs

CPU - 3.39GHz

Cores - 1

Memory - 8192MB

DSM Version - DSM 4.3-3810

 

Thanks


I've been playing around with this a lot today. I upgraded from DSM 4.3 to DSM 5.0 with nanoboot to see if that made a difference, and was able to successfully configure an SSD cache (DSM 5.0 also appears to support write caching).

 

One thing I noticed, however, that is common to both DSM 4.3 and 5.0: if you try to enable SSD caching for a volume, it doesn't work. On 4.3 it crashes the system. On 5.0 it crashes the volume until you remove the SSD cache, after which the volume comes back online with a status of Normal.

 

The only time I was able to successfully activate SSD cache was against an iSCSI LUN (block, NOT file). The only problem then is that iSCSI block LUNs don't seem to work with ESXi 5.5 U1: when I try to configure the iSCSI adapter and target, ESXi hangs, and vmkwarning.log fills with iSCSI "wait abort" and "timeout" errors.
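
To isolate whether the target or ESXi is at fault, the block LUN could first be exercised from a plain Linux client using open-iscsi. A sketch; 192.168.1.20 is a placeholder for the DiskStation's address:

```shell
# Discover targets exported by the DiskStation (placeholder IP).
iscsiadm -m discovery -t sendtargets -p 192.168.1.20

# Log in to the discovered target; a new /dev/sdX should appear.
iscsiadm -m node -p 192.168.1.20 --login

# Exercise the LUN and watch the kernel log for resets or timeouts.
dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct
dmesg | tail
```

If the same "abort"/"timeout" behaviour shows up here, the problem is in the target stack rather than in ESXi.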


I have the same issue. When enabling SSD cache, the volume says it has crashed and everything starts running super weird. My build does not use VMware (it is just bare metal) and everything else seems to work perfectly. I have tried switching which SATA controller the SSDs are attached to, and AHCI vs IDE vs RAID mode, with no luck.

 

Model Name: DS3612xs (Nanoboot)

DSM Version: DSM 5.0-4493 Update 1


The only time I was able to successfully activate SSD cache was against an iSCSI LUN (Block NOT file). The only problem I have then, is that iSCSI Block LUNs don't seem to work with ESXi 5.5 U1. When I try to configure the iSCSI adapter and target on ESXi it hangs the ESXi system, and the vmkwarning.log is full of iSCSI "wait abort" and "timeout" errors.

Please, can you explain how you enabled this configuration? Perhaps we can debug the differences between SSD cache for iSCSI LUNs and SSD cache for volumes.

 

My opinion is that Synology's protection mechanism is blocking some functions, and the gnoboot/nanoboot developers need to patch them.

I hope iSCSI BlockIO and SSD cache will be fixed soon.


Is there any way to inform the Nanoboot/Synoboot developers about this issue and hear their thoughts on the possibility of a fix in future releases? Without this feature my VMware environment is hurting very badly for IOPS: 8 x 4TB HGST 7200RPM drives in RAID 10 only provide about 850 IOPS.
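
That figure is roughly what the arithmetic predicts. A back-of-envelope sketch, assuming about 100 random IOPS per 7200 RPM spindle (the exact per-drive number varies with seek pattern and queue depth):

```shell
# Rough RAID 10 IOPS estimate for 8 spindles.
per_drive=100   # assumed random IOPS for a 7200 RPM drive
drives=8

# In RAID 10, every spindle can service reads,
# but each write lands on both halves of a mirror pair.
read_iops=$((per_drive * drives))
write_iops=$((per_drive * drives / 2))

echo "estimated read IOPS:  $read_iops"    # 800
echo "estimated write IOPS: $write_iops"   # 400
```

So ~850 on a mixed workload is the array performing as expected; without SSD cache, only more spindles (or faster ones) move that number.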

 

Any recommendations to increase IO with my current setup (other than adding more spindle drives)?

