
Weak performance - optimization possibilities


Xyzzy

Question

Hello,

Having moved from OMV, I notice that exactly the same hardware setup (including the filesystems used) gives much weaker performance under XPEnology - big file transfers via LAN barely reach 60 MB/s (megabytes), while on OMV I was able to reach 100 MB/s; USB3-connected external drives reach 4 (!) MB/s read speed - one tenth of what OMV offered.

 

Is there any way to speed it up? After all, it's transfer speeds that count in a NAS...

 

Hardware - J3455, 4 GB RAM

 

Regards,

Edited by Xyzzy

Under the covers, DSM is mdadm, LVM, and either ext4 or btrfs.

OMV is mdadm and ext4 or ZFS (yes, it offers btrfs, but it's not considered suitable for RAID, so few pick it).

 

The likely differences are your RAID layout and whether your CPU is allowed to enter burst mode.  There are threads here on improving the CPU performance of the J-series processors.
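
A quick sanity check is to watch the per-core clock while a transfer is running. The sysfs path below is standard Linux cpufreq; whether the DSM kernel exposes it on your board is an assumption, so treat this as a sketch:

# per-core clock in kHz - run repeatedly during a file transfer
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

If the values stay pinned near the J3455's 1500 MHz base instead of bursting toward 2300 MHz, that alone could explain a throughput gap.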

Post more information about your disk types, array configuration, etc.  Have you verified full-duplex 1 Gbps connectivity?
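
On the NAS side, the negotiated link speed and duplex can be read straight from sysfs even if ethtool isn't installed (eth0 is an example interface name):

cat /sys/class/net/eth0/speed     # should report 1000
cat /sys/class/net/eth0/duplex    # should report full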

 

For reference, I am using a J4105 with 4 GB RAM running DSM 6.2.3, and it can easily handle full GbE (100+ MB/s) to and from a 4-disk RAID 5.

 

I can't speak to the external drives, but your transfer rate there looks like a USB2 rate, not USB3.

Edited by flyride

The disk setup (again, the same as under OMV) is JBOD disks - btrfs for the internals and NTFS for the externals - so no RAID is involved.

1 Gbps full duplex and the same MTU (1500) confirmed on both sides (NAS and PC).

The CPU is allowed to boost - but I didn't notice any CPU spikes during file transfers.

Also, with a bunch of small files (e.g. 100 GB split into files of a few MB each), Synology transfer times are approx. 1.5x the OMV times.

 

Regarding the externals, after reading the Synology forums I see that DSM is probably simply broken when it comes to USB3 handling 😕

 


If you are using JBOD, then you're limited to the speed of the individual drives.  Few desktop-class drives can produce sustained transfer rates >100 MB/s.

Large files (lots of sequential r/w) will be faster than lots of small ones.

 

60 MB/s sustained is pretty typical for a WD Red-class 5400 rpm drive.  If you want more feedback, post the actual drive model numbers.
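
Since hdparm is already present on these boxes, something like this should pull the exact models (the device names are examples):

sudo hdparm -I /dev/sda | grep -i model
sudo hdparm -I /dev/sdb | grep -i model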

 

Part of the reason for RAID - and for SHR's ability to incorporate dissimilar drives into RAID - is to leverage drives in parallel and improve net transfer rate.  If you want speed, why are you configuring JBOD pools?


 

I am comparing identical hardware and setup, and I'm seeing DSM come out weaker than OMV (MUCH weaker where the external USB drives are concerned).

 

OMV speeds were OK for my limited needs, and while internal drive performance in DSM is kind of acceptable (though a bad surprise), external performance is a bummer.

I cannot re-format the external drives to ext4 (which sometimes seems to make a difference for DSM), so unless I find a solution, I will be quitting XPEnology.

It's nice and all, but it falls short on these basic metrics.

 

Also, I will try to look at those XPEnology extra driver packs - maybe they would make a difference.

 

 


You could be correct, but my point is that they are running the same software (at least for core disk access).  If you are unwilling to post your disk drive types, I question whether you really were seeing 100 MB/s from a single-drive setup on OMV; it's beyond the capabilities of the drive.  Therefore, the tests may not have been comparable in some way.

 

But if you have your mind made up that OMV is better, by all means go on back.


Sure - the disks in use on the XPEnology side are Kingston A400 SSDs and Seagate Barracuda 4 TB CMR drives as internals, plus a Seagate Backup Plus Hub and a WD Elements Desktop as externals.

 

For a large file transfer (10 GB, SSD -> LAN -> SSD) I now get an average of 40 MB/s (with short jumps to 60 MB/s) vs. a sustained 105 MB/s before (I guess simply the maximum my switch can handle). CPU load on the NAS stays in the single digits.

Unfortunately I didn't run any other benchmarks on OMV, but DSM feels much more sluggish overall - not only in transfer speeds, but also when opening/listing directories, seeking inside files, etc.

 


The output below sums up the situation perfectly (it applies to both network transfers and USB drive speed). The same drive, run from Ubuntu booted off a stick on the same computer, consistently reaches over 100 MB/s.

There's probably some small but significant incompatibility between my platform (ASRock J3455B-ITX) and XPEnology, but what's more interesting is that identical problems seem to have plagued genuine Synology owners for years - see https://community.synology.com/enu/forum/3/post/122991 or the numerous network speed issues (yes, I already disabled IP encryption and IPv6, and no, I need to keep NTFS).

So unless there is a miracle solution, I will just back off.

 

admin@NAS:/$ sudo hdparm -t /dev/sdr

/dev/sdr:
 Timing buffered disk reads: 150 MB in  3.01 seconds =  49.82 MB/sec
admin@NAS:/$ sudo hdparm -t /dev/sdr

/dev/sdr:
 Timing buffered disk reads:   4 MB in  4.67 seconds = 877.73 kB/sec
admin@NAS:/$ sudo hdparm -t /dev/sdr

/dev/sdr:
 Timing buffered disk reads:   2 MB in 29.33 seconds =  69.83 kB/sec
admin@NAS:/$ sudo hdparm -t /dev/sdr

/dev/sdr:
 Timing buffered disk reads:  54 MB in  3.67 seconds =  14.73 MB/sec
admin@NAS:/$ sudo hdparm -t /dev/sdr

/dev/sdr:
 Timing buffered disk reads:   2 MB in  4.34 seconds = 471.65 kB/sec
admin@NAS:/$ sudo hdparm -t /dev/sdr

/dev/sdr:
 Timing buffered disk reads:  42 MB in  7.83 seconds =   5.36 MB/sec

 


There is something very wrong.  I am not trying to talk you out of your decision (or to say that your results are inaccurate), but if DSM performed like that, nobody would use it.

 

Here's the output from my J4105-ITX with a WD white-label 5400 rpm drive:

root@archive:~# hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads: 540 MB in  3.01 seconds = 179.63 MB/sec
root@archive:~# hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads: 542 MB in  3.02 seconds = 179.59 MB/sec
root@archive:~# hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads: 536 MB in  3.00 seconds = 178.66 MB/sec
root@archive:~#

 

and from my Skylake E3 system with SATA SSDs:

root@nas:~# hdparm -t /dev/sde

/dev/sde:
 Timing buffered disk reads: 1540 MB in  3.00 seconds = 512.91 MB/sec
root@nas:~# hdparm -t /dev/sde

/dev/sde:
 Timing buffered disk reads: 1540 MB in  3.00 seconds = 513.08 MB/sec
root@nas:~# hdparm -t /dev/sde

/dev/sde:
 Timing buffered disk reads: 1538 MB in  3.00 seconds = 512.31 MB/sec
root@nas:~#

 


Addendum: a USB3-connected WD Red 4 TB (5400 rpm) on the J4105-ITX:

 


root@archive:~# hdparm -t /dev/sdq

/dev/sdq:
 Timing buffered disk reads: 504 MB in  3.00 seconds = 167.96 MB/sec
root@archive:~# hdparm -t /dev/sdq

/dev/sdq:
 Timing buffered disk reads: 510 MB in  3.00 seconds = 169.82 MB/sec
root@archive:~# hdparm -t /dev/sdq

/dev/sdq:
 Timing buffered disk reads: 512 MB in  3.00 seconds = 170.54 MB/sec

 


I upgraded the motherboard firmware to the latest version, downloaded a fresh Ubuntu, plugged it in instead of the DSM stick, and booted - all disk stats are perfect.

Replug the DSM stick and reboot - the problems are back.

I am really out of ideas - I would expect that hardware incompatibilities/issues would produce errors all over the place. The only ones I see (and they seem kind of acceptable):

 

admin@NAS:/var/log$ dmesg |grep -i error
[    0.376515] mce: [Hardware Error]: Machine check events logged
[    0.376557] mce: [Hardware Error]: CPU 0: Machine Check: 0 Bank 4: e600000000020408
[    0.376559] mce: [Hardware Error]: TSC 0 ADDR fef5a780
[    0.376563] mce: [Hardware Error]: PROCESSOR 0:506c9 TIME 1600766688 SOCKET 0 APIC 0 microcode 2e
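
I suppose grepping only for "error" could miss USB resets and retries, so a broader filter may be worth a look too (a sketch - the exact messages vary by kernel):

dmesg | grep -iE "usb|xhci|reset"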
 

 


From the specs, the ASRock J3455B-ITX has only 2 x SATA; it sounds like you have more than 2 disks ("SSDs" plus HDDs?).

What controller are you using, and how are the disks spread out? (The PCIe slot is only PCIe 2.0 and has 2 lanes.)

At least on the internal SATA ports there should be normal performance.

 

Can you provide a whole dmesg so we can have a look? (You can send it to me and flyride as a PM.)
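
Something like this captures it to a file you can attach (the path is just an example):

dmesg > /tmp/dmesg.txt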

 

What loader and DSM version are you using? You could change from 918+ to 3615 and test with that (3615/17 and 918+ use different kernels, 3.10.105 vs. 4.4.59).

Or even try DSM 6.1 (loader 1.02b) - it depends on how much time you are willing to invest. OMV is not that bad, and if it does the job, it might be a waste of time on your side to switch to XPEnology.

 

Edit: found this:

https://xpenology.com/forum/topic/12867-user-reported-compatibility-thread-for-dsm-62/?do=findComment&comment=165276

"AsRock J3455B-ITX, generic Marvell 88SE9215 PCIE controller /w 4x SATA III"

That's a one-PCIe-lane chip, so 4 disks have to share a maximum of 500 MB/s of PCIe bus (might be even less; the 500 MB/s is the theoretical maximum). A single spinning disk won't saturate that, but it will bite as soon as several disks on the controller are busy at once.

Edited by IG-88
On 9/21/2020 at 8:04 PM, Xyzzy said:

The output below sums up the situation perfectly (it applies to both network transfers and USB drive speed). The same drive, run from Ubuntu booted off a stick on the same computer, consistently reaches over 100 MB/s.

There's probably some small but significant incompatibility between my platform (ASRock J3455B-ITX) and XPEnology, but what's more interesting is that identical problems seem to have plagued genuine Synology owners for years - see https://community.synology.com/enu/forum/3/post/122991 or the numerous network speed issues

 

Low performance on internal drives, NIC speed issues, and USB3 issues are three completely different things. (Was NIC performance measured independently of the disks? If the test path was just desktop HDD/SSD -> network -> NAS disk, then it should be reduced to measuring the network speed on its own - see the sketch below.)
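
iperf3 takes the disks out of the equation entirely. It is not part of a stock DSM install, so assume running it from a community package or from the live-Linux stick; the IP is an example:

# on the PC:
iperf3 -s
# on the NAS, testing toward the PC:
iperf3 -c 192.168.1.10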

 

USB3 can be a huge pile of sh... - there are so many interfering hardware issues, from crappy cables and enclosures to Bluetooth and WiFi (or other RF hardware).

 

The most basic and serious case would be if the AHCI/SATA performance itself is too low; that would need to be checked while also keeping an eye on the CPU at the same time, e.g. as sketched below.
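
One way to do it (the device name is an example):

# repeat the raw read test a few times...
for i in 1 2 3; do sudo hdparm -t /dev/sda; done
# ...while watching CPU usage in a second SSH session:
top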

 


Sent dmesg via a private message.

 

Some details I missed earlier:

- for all this testing I have one UV500 attached to the motherboard, 2 Barracudas attached to the controller (one not initialized in DSM), and 2 external HDDs attached to USB3 ports (one with a USB2 interface, one with USB3)

- loader 1.04b DS918+, DSM 6.2.3-25426

 

What I tried more:

- ran Debian 9 live from a USB stick - no speed issues

- tried 1.03b for DS3617xs, but apparently my NIC is not supported (the machine started, just with no network connectivity to it), so I went back to 1.04b (1.03b actually did do something - I had to "migrate" the disks to DS918+ after changing the loader back)

- tried playing around with BIOS settings (enabling/disabling legacy boot support and the like) - no change

 

I didn't test the network directly, so I think the network issues just reflect the poor disk performance.
