XPEnology Community

SSD cache on latest DSM with XPEnology


shomrighausen


I believe that I've read all of the threads regarding the non-functioning state of SSD cache. I also verified that I had issues (as described) when trying to create the cache - lock up the NAS/volume. Through further investigation, I read a thread regarding the modification of the /etc/support_ssd.db and /etc.defaults/support_ssd.db files. I've added the information for the SSD drives I had to test with, as follows:

[SSD 830 Series]
brand="Samsung"
size="256GB"

[SSD 830 Series]
brand="Samsung"
size="64GB"

 
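In case anyone wants to reproduce this, here is roughly how I added the entries over SSH (this assumes the simple section format above is all that DSM checks - adjust the model string in brackets and the size to match your drive):

cat >> /etc/support_ssd.db << 'EOF'
[SSD 830 Series]
brand="Samsung"
size="256GB"
EOF
# repeat the same for /etc.defaults/support_ssd.db (and for the 64GB entry)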

I recognize that these are older drives. I have a single 256GB and a pair of 64GB drives for testing. While I'm able to go through the process of creating the SSD cache (read cache with the single 256GB and read-write cache with the pair of 64GB), the cache creation goes through its phases and then returns to the screen that lets me click the 'create' button again. The NAS doesn't lock up (which is progress?), but it doesn't successfully create the cache.

 

Does anyone have drives that are on the compatibility list on Synology's site for the DS3615xs to test with? Alternatively, has anyone successfully added an SSD cache to their XPEnology NAS? When I go through the process, it appears that it may be testing the SSD(s) for performance or compatibility, and they are failing (maybe not enough throughput?). Does anyone know where I can start looking through logs?
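
For what it's worth, the only places I've thought to check so far are the usual ones (assuming DSM logs to the standard locations):

dmesg | tail -n 50              # recent kernel messages around the cache creation attempt
tail -n 100 /var/log/messages   # general system log

If anyone knows of a more specific log for Storage Manager, please point me at it.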

 

I guess I'm bored and looking for something to do, and this seems like a nice feature that could improve performance... Any insight is appreciated. Thanks.


I guess I'll ask another couple of questions... does anyone have SSD cache running on actual Synology hardware? I'm starting to wonder if there is a fundamental change when SSD cache is enabled - like a change to /dev/md0/1/2/etc. - or if there is an additional driver that is missing from XPEnology. I'm hoping to get some information before I take my DS1512+ offline and do some testing.


I'm trying to figure out why it doesn't work and possibly find a solution. It seems like it should be straightforward - it may be a matter of understanding what happens on Synology hardware/software. I'm trying to get more information that may help in that regard.

 

I guess ultimately it's not a showstopper that it's missing, but a 'nice to have' that I believe several folks want (not need). Since I have a few older SSDs collecting dust, it's a good use of my time. :smile: Any input is welcome.


I have a real Synology 1813+ with 4GB RAM. I had a single RAID volume of 6x3TB WD Reds and two SSDs acting as a read-write cache. My sequential read and write for large files was actually 10-15% worse with the two SSDs in cache mode than without. I didn't notice a lot of random access improvement, but I wouldn't be shocked if it was there - it is hard to measure easily. I instead created a two-SSD volume, and that works wonderfully.


I set up my DS1512+ with 3x1TB and added a single drive as an SSD cache. It created /dev/md3 for the SSD; I'm still digging to find out what else changes, to try to figure out which components are missing within XPEnology. I suspect that it is something simple. Creating the SSD cache was much faster than I expected, after watching the failed 'mounting' message within Storage Manager while trying to create the SSD cache in XPEnology.
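
For reference, this is roughly how I'm inspecting it on the real DS1512+ (the md3 name is just what my box assigned - yours may differ):

cat /proc/mdstat            # shows the new md3 array built on the SSD
mdadm --detail /dev/md3     # member disk, RAID level and state of the cache device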

 

I understand that I could just create a separate volume with a pair of SSDs. I'd like to see what kind of increase/decrease in performance I would get in my environment - with the SSD cache on the single volume that I use. I expect that it would slow down sequential data transfer, but there was an option to disable the cache for sequential data.

 

Some interesting numbers from 'hdparm -Tt /dev/md?' testing (average of 5 runs):

 

System                    cached (MB/s)       buffered (MB/s)
DS1512+ (3x1TB SHR)             908                 187.5
DS1512+ (SSD-256GB R0)          941                 249.8
DS214se (2x4TB R0)              373                 139.8
XPEN x5460 (5x6TB SHR)         6667                 424.3
XPEN 3800+ (5x4TB SHR)          830                 249.8

 

If I understand correctly, the cached number tests the throughput of the system without the disk in the mix. The buffered number is essentially sequential read speed with no RAM caching by the system - this should represent sustained sequential read speed.

 

It shows that the SSD cache performs at a higher level of sequential read than the 3x1TB array (both in the DS1512+). Given these results, my current SSD is limited by the hardware - it's a Samsung 830 and should perform at around 488 MB/s sequential (https://wiki.archlinux.org/index.php/SS ... _830_256GB). I believe the current 'flagship' SSDs push the number to around 525 MB/s (https://wiki.archlinux.org/index.php/SS ... _PRO_512GB).

 

Given that the sequential speed of my x5460-based XPEnology system is already over 400MB/s, the SSD cache would provide little (if any) increase in performance for sequential data transfer.

 

If we are talking about NON-sequential data, I hope that the SSD cache would hit its stride and help with performance. I'm not sure if the SSD cache would help with enumeration of directories with a high file count, MariaDB, inbound writes of smaller files, etc. I'd like to find out. :smile:
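
If I ever get a cache built on XPEnology, my rough plan for checking the non-sequential case is just to time a directory walk before and after enabling it (the path is a placeholder for one of my larger shares):

# drop the RAM page cache first so the SSD cache, not RAM, gets exercised
sync; echo 3 > /proc/sys/vm/drop_caches
time ls -lR /volume1/some_big_share > /dev/null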


Adding an SSD read/write cache dropped my sequential read/write from a solid 110-113 MB/s (the max of a gigabit link) down into the 90s.

 

I am not sure how you can get numbers like that. Even on my quad-core Haswell Xeon-based XPEnology box with 24GB RAM, 10-gigabit Ethernet, 5x 7200 RPM Hitachi enterprise drives, and jumbo frames, I max out at about 180 MB/s sequential.


I'm testing the speed of the disk locally on the NAS.

 

You can SSH into the NAS and run this command to test the volume/device in question:

 

'hdparm -Tt /dev/md2' <---- generally volume1 within DSM

 

Run it five separate times and manually average. It will output cached and buffered speeds in MB/s.
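
A quick loop saves the copy/paste (substitute whichever device you identify with the commands below):

for i in 1 2 3 4 5; do
    hdparm -Tt /dev/md2
done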

 

This will help you find the volumes on your NAS:

 

'cat /proc/mdstat' <----- will provide a list of the md RAID devices.

 

'df' <----- will provide disk usage information. This shows /dev/md0 as 2451064 1K blocks on each XPEnology system that I have. Additionally, you should be able to identify the device for the main volume via LVM - it may be something like /dev/vg1000/lv. You could run 'hdparm -Tt /dev/vg1000/lv' to get the speed of that volume.

 

You will be bound by the network speed. If you have a 10GbE card in the NAS, you should be able to get better-than-gigabit speeds, assuming that your client also has a 10GbE card. If you only have 10GbE on the NAS and 1GbE on each client, you'll only get gigabit speed during network-based tests. Additionally, there are some tweaks that you can make to /etc/sysctl.conf to provide additional network throughput on a gigabit-or-faster configuration.
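
As an example of the kind of tweaks I mean, these are the usual TCP buffer settings people raise in /etc/sysctl.conf (the values are only illustrative, not a recommendation for every setup):

# larger socket buffers for gigabit-or-faster links (example values)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

Apply them with 'sysctl -p' or a reboot.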

 

In my environment, I have four 1GbE NICs bonded together on the NAS and several clients with 2x1GbE NICs. Additionally, I run VirtualBox with some VMs locally on the NAS; these benefit greatly from additional disk speed, as they are 'local' to the NAS. I can easily saturate 2x1GbE of NIC bandwidth transferring large files, and I suspect that I could do the same with additional NICs paired up on both the NAS and the clients. Ultimately, the network is the funnel that you're trying to fill.


on my Xpenology NAS (5x2TB HDD)

 

hdparm -Tt /dev/md2 gets me

/dev/md2:

Timing cached reads: 30620 MB in 2.00 seconds = 15334.99 MB/sec

Timing buffered disk reads: 1478 MB in 3.00 seconds = 492.17 MB/sec

or something very close to it each time

 

Yet I am restricted to 180 MB/s over 10GbE (10GbE NIC in the PC and in the XPEnology system, directly connected over fiber)



And your PC has RAID0 SSDs and is capable of faster transfers on write?


  • 4 weeks later...

I've tried to move forward, but no luck.

 

I have a Kingston 120GB SSD; the controller in ESXi is LSI Logic SAS, so XPEnology recognizes it as an SSD.

I'm using the onboard SATA and then making an RDM… (maybe that's why I'm having problems?)

 

 

Load the modules:

insmod syno_flashcache_control.ko
insmod flashcache_syno.ko
insmod flashcache.ko

 

Then you can try to start the SSD cache creation from the web GUI… (it needs to kill services and unmount /mnt/md2 (the RAID)) in order to exec flashcache.

I believe it's executing something like:

 flashcache_create -p back cachedev /dev/sda /dev/md2

( link to documentation: https://github.com/facebook/flashcache/ ... -guide.txt )
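
For completeness, the teardown side would be roughly this, going by the upstream docs (I haven't verified it on DSM, and the device names are just the ones from my setup):

dmsetup status cachedev        # check the device-mapper mapping flashcache created
dmsetup remove cachedev        # remove the mapping
flashcache_destroy /dev/sda    # wipe the flashcache superblock from the SSD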

 

So, the logs:

Creating the cache:

[37447.272311] md: bind
[37447.300659] md/raid0:md3: md_size is 234432384 sectors.
[37447.300660] md: RAID0 configuration for md3 - 1 zone
[37447.300661] md: zone0=[sda1]
[37447.300663]       zone-offset=         0KB, device-offset=         0KB, size= 117216192KB
[37447.300664] 
[37447.300667] md3: detected capacity change from 0 to 120029380608
[37447.300920]  md3: unknown partition table
[37455.410960] iSCSI: EXT_INFO: RemoveRelation EPIO(35643233613735322d306162382d3300)
[37455.420114] iSCSI: EXT_INFO: RemoveRelation EPIO(32383033356339382d643034302d3300)
[37455.420351] iSCSI: EXT_INFO: Unload ROD-EPIO(1)
[37455.420369] iSCSI: EXT_INFO: RemoveRelation EPIO(64313538336534372d323261392d3300)
[37455.420429] iSCSI: EXT_INFO: Unload UNMAP Buffer (1)
[37455.420556] iSCSI: EXT_INFO: ReleaseItem(EP_1)
[37456.050026] iSCSI: EXT_INFO: EXIT Extent-Pool
[37456.052023] iSCSI: RODSP_INFO: RODSPS STOP
[37457.344387] nfsd: last server has exited, flushing export cache
[37460.774742] init: crond main process (10155) killed by TERM signal
[37461.415311] init: ftpd main process (15640) killed by TERM signal
[37464.860957] init: smbd main process (15875) killed by TERM signal
[37466.788549] flashache: /dev/md2 exclude check success
[37466.788555] flashache: /dev/md3 exclude check success
[37466.788559] device-mapper: flashcache: Handle create failed
[37466.788561] device-mapper: flashcache: Allocate 457872KB (16B per) mem for 29303808-entry cache(capacity:114468MB, associativity:512, block size:8 sectors(4KB))

 

But then there is a problem with udevd; we are hitting that on VMware:

[37467.128362] ------------[ cut here ]------------
[37467.128364] kernel BUG at include/linux/scatterlist.h:63!
[37467.128365] invalid opcode: 0000 [#1] SMP 
[37467.128367] CPU 0 
[37467.128367] Modules linked in: flashcache(O) flashcache_syno(O) syno_flashcache_control(O) cifs udf isofs loop usbhid hid usblp usb_storage bromolow_synobios(P) adt7475 i2c_i801 btrfs synoacl_vfs(P) zlib_deflate libcrc32c hfsplus md4 hmac tn40xx(O) be2net igb i2c_algo_bit dca fuse vfat fat crc32c_intel aesni_intel cryptd ecryptfs sha512_generic sha256_generic sha1_generic ecb aes_x86_64 authenc chainiv des_generic crc32c eseqiv krng ansi_cprng cts rng aes_generic md5 cbc cryptomgr pcompress aead crypto_hash crypto_blkcipher crypto_wq crypto_algapi cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ipv6 zram(C) etxhci_hcd xhci_hcd ehci_hcd uhci_hcd ohci_hcd usbcore usb_common container thermal_sys compat(O) vmw_balloon vmxnet3 vmw_pvscsi e1000e e1000 3w_sas 3w_9xxx mvsas arcmsr megaraid_sas megaraid_mbox megaraid_mm mpt2sas mptsas mptspi mptscsih mptbase scsi_transport_spi scsi_wait_scan sg sata_uli sata_svw sata_qstor sata_sis pata_sis stex sata_sx4 sata_promise sata_nv sata_via sata_sil [last unloaded: rpcsec_gss_krb5]
[37467.128408] 
[37467.128409] Pid: 4477, comm: udevd Tainted: P         C O 3.2.40 #1 VMware, Inc. Synoden/440BX Desktop Reference Platform
[37467.128412] RIP: 0010:[]  [] sg_set_page.part.14+0x4/0x6
[37467.128417] RSP: 0018:ffff88012690b948  EFLAGS: 00010002
[37467.128418] RAX: ffff88012a0efe80 RBX: ffff880137ab13c8 RCX: 30ec8348e5894855
[37467.128419] RDX: 0000000000000002 RSI: ffff88012a145480 RDI: 00000000e86d894c
[37467.128420] RBP: ffff88012690b948 R08: ffffffff812d8ba0 R09: ffff88012a0efe80
[37467.128421] R10: ffff88012a0efe80 R11: 0000000000000020 R12: ffff880137ab1438
[37467.128421] R13: 0000000000001000 R14: 0000000000000001 R15: 0000000000000000
[37467.128423] FS:  0000000000000000(0000) GS:ffff88013fc00000(0063) knlGS:00000000f75509e0
[37467.128424] CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
[37467.128425] CR2: 00000000f7560d78 CR3: 0000000126aa2000 CR4: 00000000000406f0
[37467.128447] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[37467.128457] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[37467.128458] Process udevd (pid: 4477, threadinfo ffff88012690a000, task ffff88012a43f910)
[37467.128459] Stack:
[37467.128460]  ffff88012690b9a8 ffffffff81217f49 ffff88012a145480 ffff88012a0efe80
[37467.128462]  0101000000001000 ffff880137704180 ffff88012690b9a8 ffff880003dd01d8
[37467.128463]  ffff88012a145480 0000000000000020 0000000000001000 ffff88012a145480
[37467.128465] Call Trace:
[37467.128468]  [] blk_rq_map_sg+0x2b9/0x2f0
[37467.128472]  [] scsi_init_sgtable+0x3c/0x60
[37467.128473]  [] scsi_init_io+0x2d/0x270
[37467.128475]  [] ? scsi_get_command+0x84/0xc0
[37467.128477]  [] scsi_setup_fs_cmnd+0x5e/0x90
[37467.128480]  [] sd_prep_fn+0x16c/0xab0
[37467.128482]  [] ? submit_bio+0x5b/0xf0
[37467.128484]  [] blk_peek_request+0xf5/0x1c0
[37467.128486]  [] scsi_request_fn+0x46/0x4c0
[37467.128487]  [] queue_unplugged.isra.51+0x1f/0x50
[37467.128489]  [] blk_flush_plug_list+0x1a3/0x200
[37467.128491]  [] blk_finish_plug+0x13/0x50
[37467.128494]  [] __do_page_cache_readahead+0x1d9/0x280
[37467.128497]  [] force_page_cache_readahead+0x91/0xd0
[37467.128499]  [] page_cache_sync_readahead+0x3b/0x40
[37467.128500]  [] generic_file_aio_read+0x570/0x710
[37467.128503]  [] do_sync_read+0xde/0x120
[37467.128505]  [] ? security_file_permission+0x92/0xb0
[37467.128507]  [] ? rw_verify_area+0x5c/0xe0
[37467.128508]  [] vfs_read+0xa4/0x180
[37467.128510]  [] sys_read+0x5a/0xa0
[37467.128511]  [] sysenter_dispatch+0x7/0x27
[37467.128512] Code: 5c 41 5d 41 5e 41 5f 5d c3 55 48 89 e5 0f 0b 55 48 89 e5 0f 0b 55 48 89 e5 0f 0b 55 48 89 e5 0f 0b 55 48 89 e5 0f 0b 55 48 89 e5 <0f> 0b 48 8b 3d 74 a9 de 02 55 48 89 e5 48 85 ff 74 05 e8 be 7a 
[37467.128526] RIP  [] sg_set_page.part.14+0x4/0x6
[37467.128528]  RSP 
[37467.128530] ---[ end trace 3bc02b1388a6ebca ]---

 

After a while, it freezes…

