XPEnology Community

Benchmarking your Synology


Hostilian

Recommended Posts

Since there have been a few complaints about performance with the new 1.04b / DSM 6.2.1 loader, I thought I'd look into a simple benchmark for Synology..

There are other threads using movie conversions (and another that uses dd as a file-transfer benchmark), but I wanted something quick and easy for a CPU test - with nothing to install..

 

Turns out you can use dd to do both..

Connect to the Synology using SSH, and run the following..

 

CPU..

dd if=/dev/zero bs=1M count=1024 | md5sum

(it basically gives the box a CPU-intensive task to do - good enough for a rough guide)

 

Question I have, if anyone's familiar: which disk does /dev/zero reside on? Is it in RAM, or is it on any/all of the disks in my machine, since the system partition is synced to each disk?

If it happens to use an HDD rather than an SSD, what difference does that make, and can we specify a different location (in case the disk is a bottleneck for this test)?
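
For what it's worth, on a normal Linux box /dev/zero isn't a file on any disk at all - it's a virtual character device served straight from the kernel, so the CPU test shouldn't touch your drives. Easy enough to check yourself (the exact permissions/date will differ, but the leading 'c' is the giveaway):

ls -l /dev/zero
# crw-rw-rw- 1 root root 1, 5 ... /dev/zero
# 'c' = character device (major 1, minor 5), kernel-backed, not stored on any volume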

 

DISK.. (I needed to use sudo for this)

sudo dd bs=1M count=256 if=/dev/zero of=/volumeX/share/testx conv=fdatasync

 

Where /volumeX/share is the volume and share you want to write the test file to.. It's case sensitive..

 

Be careful!  :)
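
If it helps, here's the disk test as one copy-paste block, with a cleanup step so the test file doesn't sit around on the volume afterwards (the /volume1/share path is just an example - substitute your own volume and share):

# write a 256MB test file, forcing it to disk before dd reports the speed
sudo dd bs=1M count=256 if=/dev/zero of=/volume1/share/testx conv=fdatasync
# delete the test file once you've noted the result
sudo rm /volume1/share/testx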

 

 

 

I'm not sure if it uses all cores (the second VM tested had 4 CPUs as opposed to 2).

Now, the 1st of these machines is under load (web cams) but it gives a rough idea!  :)
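
If anyone wants to see whether extra cores actually help, a rough sketch (my own, not part of the test above) is to launch several copies of the pipeline at once and compare the per-copy MB/s figures against a single run:

# start 4 copies of the CPU test in the background (adjust 4 to your core count),
# then wait for them all; each dd prints its own MB/s line when it finishes
for i in 1 2 3 4; do
  dd if=/dev/zero bs=1M count=1024 | md5sum &
done
wait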

 

For two bare metal machines I have..

1. J3455-ITX on DSM 6.2.1 (DS918+), 12GB.

CPU = 217MB/s average (it's under load, slightly)

Disk = 146MB/s to HDD, 347MB/s to SSD.

 

2. i3-4130 on Asus H87I-Plus, DSM 6.2 U2 (DS3617), 8GB.

CPU = 663 MB/s

Disks later

 

Two Syno 'test' VMs running on ESXi..

3. ESXi with i7-3770S on Asus P8Z77-V Pro, DSM 6.2.1 (DS3617), 16GB.

Both had CPU = 630MB/s

Disks later but a quick test showed me I had an issue with one of my drives..


HP MicroServer Gen8 with Xeon E3-1265L, running DSM 6.1.6 Update 1

Haldi@NAS:~$  dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.18873 s, 491 MB/s
2147483648 bytes (2.1 GB) copied, 4.41996 s, 486 MB/s
4294967296 bytes (4.3 GB) copied, 8.73961 s, 491 MB/s

 

RAID 5: 4x 6TB WD Red

Haldi@NAS:~$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/PUBLIC/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.5732 s, 171 MB/s
Haldi@NAS:~$ sudo dd bs=1M count=1024 if=/dev/zero of=/volume1/PUBLIC/testx conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.09723 s, 262 MB/s
Haldi@NAS:~$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/PUBLIC/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 7.58289 s, 283 MB/s
Haldi@NAS:~$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/PUBLIC/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 16.5216 s, 260 MB/s

 

And on SSD:

Haldi@NAS:~$ sudo dd bs=1M count=256 if=/dev/zero of=/volume2/web/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.93058 s, 139 MB/s
Haldi@NAS:~$ sudo dd bs=1M count=1024 if=/dev/zero of=/volume2/web/testx conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.16394 s, 174 MB/s
Haldi@NAS:~$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume2/web/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 39.1163 s, 54.9 MB/s
Haldi@NAS:~$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume2/web/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 75.7243 s, 56.7 MB/s

Loader: jun 1.03b (DS3617xs)

DSM: 6.2.1-23824 Update 1

CPU: AMD Ryzen 7 1700X

HDD: SHR2 11x WD Red

# dd if=/dev/zero bs=1M count=1024 | md5sum
1073741824 bytes (1.1 GB) copied, 1.69736 s, 633 MB/s

# dd bs=1M count=256 if=/dev/zero of=/volume1/share/testx conv=fdatasync
268435456 bytes (268 MB) copied, 0.497428 s, 540 MB/s

 


Hint: Repeat the test a number of times and report the median value.
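
A rough sketch for automating that (my own, assuming bash and that md5sum is present on your build): run the CPU test five times, collect dd's stats, and sort the MB/s figures so the middle value is the median:

# run the CPU test 5 times; dd's stats go to a log while the data still flows through md5sum
rm -f /tmp/dd_cpu.log
for i in 1 2 3 4 5; do
  dd if=/dev/zero bs=1M count=1024 2>>/tmp/dd_cpu.log | md5sum > /dev/null
done
# pull out the throughput figures and sort them - the middle line is your median
grep -o '[0-9.]* MB/s' /tmp/dd_cpu.log | sort -n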

 

Loader: jun 1.04b (DS918)

DSM: 6.2.1-23824U1

Hardware/CPU: J4105-ITX

HDD: WD Red 8TB RAID 5 (4 drives)

Results:

dd if=/dev/zero bs=1M count=1024 | md5sum

CPU: 422 MBps

dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync

WD Red RAID 5-4: 157 MBps

 

______________________________________________________________________________

 

My main rig is not on 6.2.1, but I thought I'd record the results on the NVMe platform. I'll repeat once it is converted to 6.2.1.

 

Loader: jun 1.02b (DS3615xs)

DSM: 6.1.7-15284U2

Hardware: ESXi 6.5

CPU: E3-1230v6

HDD: Intel P3500 2TB NVMe RAID 1, WD Red 4TB RAID 10 (8 drives)

Results:

dd if=/dev/zero bs=1M count=1024 | md5sum

CPU: 629MBps (I do have one other active VM but it's pretty idle)

dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync

NVMe RAID 1: 1.1 GBps

WD Red RAID 10-8: 371 MBps

 

The only thing above that can be directly compared is haldi's RAID5 @ 171MBps vs. mine at 157MBps, although the drives are quite different designs.


[Screenshot: system overview]

 

System seems idle, so time for tests on DSM 6.2-23739 with loader 1.03b.

Haldi@NAS:~$ dd if=/dev/zero bs=1M count=1024 | md5sum
1073741824 bytes (1.1 GB) copied, 2.12229 s, 506 MB/s
2147483648 bytes (2.1 GB) copied, 4.43114 s, 485 MB/s
4294967296 bytes (4.3 GB) copied, 8.62319 s, 498 MB/s

 

Run 1: on HDD, 4x 6TB WD Red, RAID 5, btrfs

Haldi@NAS:~$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/PUBLIC/testx conv=fdatasync
268435456 bytes (268 MB) copied, 2.0936 s, 128 MB/s
1073741824 bytes (1.1 GB) copied, 4.55824 s, 236 MB/s
2147483648 bytes (2.1 GB) copied, 7.05817 s, 304 MB/s
4294967296 bytes (4.3 GB) copied, 12.616 s, 340 MB/s

Run2:

268435456 bytes (268 MB) copied, 1.67735 s, 160 MB/s
1073741824 bytes (1.1 GB) copied, 4.47612 s, 240 MB/s
2147483648 bytes (2.1 GB) copied, 7.12402 s, 301 MB/s
4294967296 bytes (4.3 GB) copied, 12.5485 s, 342 MB/s

Run3:

268435456 bytes (268 MB) copied, 1.76509 s, 152 MB/s
1073741824 bytes (1.1 GB) copied, 3.87784 s, 277 MB/s
2147483648 bytes (2.1 GB) copied, 6.91835 s, 310 MB/s
4294967296 bytes (4.3 GB) copied, 13.5713 s, 316 MB/s

 

Run 1: on SSD

Haldi@NAS:~$ sudo dd bs=1M count=256 if=/dev/zero of=/volume2/web/testx conv=fdatasync
268435456 bytes (268 MB) copied, 1.52167 s, 176 MB/s
1073741824 bytes (1.1 GB) copied, 6.23936 s, 172 MB/s
2147483648 bytes (2.1 GB) copied, 11.8554 s, 181 MB/s
4294967296 bytes (4.3 GB) copied, 33.1486 s, 130 MB/s

Run2:

268435456 bytes (268 MB) copied, 1.49403 s, 180 MB/s
1073741824 bytes (1.1 GB) copied, 5.84423 s, 184 MB/s
2147483648 bytes (2.1 GB) copied, 17.7619 s, 121 MB/s
4294967296 bytes (4.3 GB) copied, 48.323 s, 88.9 MB/s

Run3:

268435456 bytes (268 MB) copied, 2.03784 s, 132 MB/s
1073741824 bytes (1.1 GB) copied, 5.97847 s, 180 MB/s
2147483648 bytes (2.1 GB) copied, 23.1773 s, 92.7 MB/s
4294967296 bytes (4.3 GB) copied, 55.1492 s, 77.9 MB/s

Hello!

 

I have the following problem: "md5sum: command not found"

 

Anyway, I'm using a different approach; here is another way to check your CPU's single-core speed:

time $(i=0; while (( i < 9999999 )); do (( i ++ )); done)
  • Uses only one CPU core
  • Very simple, no preparations needed
  • The lower the output value, the better
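
If md5sum really is missing on a given build, the same dd-based CPU test should still work with whatever checksum tool is present - which one exists varies by DSM build, so check first (these are suggestions, not guaranteed to be installed):

# see which hashing tools this box actually has
which md5sum sha1sum sha256sum openssl
# then run the same style of test with one that exists, e.g.
dd if=/dev/zero bs=1M count=1024 | sha1sum
# or, via openssl if that's all you have
dd if=/dev/zero bs=1M count=1024 | openssl md5

Bear in mind the MB/s figure depends on the hash used (sha1/sha256 cost more CPU than md5), so only compare runs that use the same tool.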

 

I compared various setups using this command, here are my results:

 

My old Synology DS214
real    8m52.640s
user    8m40.240s
sys     0m0.070s

 

XPEnology inside a VMware virtual machine running on Windows 7, CPU: Intel i7-2700K @ 4.2GHz
real    0m32.051s
user    0m32.018s
sys     0m0.032s

 

XPEnology on ASROCK J4205-ITX
C1E: disabled, SpeedStep: disabled, PowerMode: Sport
real    1m55.765s
user    1m55.794s
sys     0m0.003s

 

XPEnology on ASROCK J4205-ITX
C1E: disabled, SpeedStep: ENABLED, PowerMode: Sport
real    1m11.878s
user    1m11.890s
sys     0m0.010s

 

 

I wanted to find out whether SpeedStep was working or not. Now it is very clear to me that enabling SpeedStep is a must on my J4205 board; it improved single-core speed by about 64%. I really feel its effect in the web GUI and on photo conversions.

 

 


Using the same test as cdrvbfhq:

HP ProLiant ML110 G5 - Xeon 3065 - 8GB ECC, with DSM 6.1.7-15284 Update 2

real    1m5.064s
user    1m4.322s
sys     0m0.379s

 

@haldi What are you using to generate that picture showing the "System Overview"?


 


Some interesting results across different platforms and setups.

But I'm wondering: if we are trying to see where the performance issue may be, the tests should be run on the same hardware with 1) the previous loader, 2) the latest one, and 3) a non-Syno NAS OS, e.g. OMV. There are still lots of variables that could make a difference (Linux kernel, drivers). Maybe another test could be to check performance with a clean install, and then again after an upgrade.


There are many variables at play, and this isn't meant to be a precise reading - more a rough guide to performance on your own hardware..

It should help people find out where there may be issues OR bottlenecks in their own setup..

 

Whilst the loader versions 'may' make a difference, it's more likely to be the CPU hardware that dictates CPU performance. Disk controllers and disk types are likewise more likely to dictate disk performance (funnily enough)..

As long as people list the hardware relevant to the benchmarks (i.e. CPU, disk controllers and disks, along with the loader version), it should be good enough to find out where any problems might lie..
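
For example, something like this pulls out the basics worth posting alongside the numbers (the /etc.defaults/VERSION path is the usual DSM location as far as I know - treat it as an assumption):

# CPU model and core count
grep "model name" /proc/cpuinfo | sort -u
grep -c ^processor /proc/cpuinfo
# DSM version and build number
cat /etc.defaults/VERSION
# RAID/volume layout as the kernel sees it
cat /proc/mdstat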

 

It's also important to use the same benchmark method to compare like with like, so the post above (sorry, cdrvbfhq) is muddying the waters a bit..

 

It was definitely not meant to be a benchmark pissing-contest (that you get with CPU overclocking or Graphics benchmarks)..  :)

 


Same "testrig" as above, 
HP Proliant ML110 G5, "out of the box", using Jun's loader v1.02b DS3615xs,  DSM 6.1.7-15284 Update 2

admin@ML110:~$  grep MHz /proc/cpuinfo
cpu MHz         : 2333.000
cpu MHz         : 2333.000
admin@ML110:~$ dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.71034 s, 396 MB/s

 

Volume1 = 2x 250GB RAID 0, btrfs

admin@ML110:~$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/test conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.19464 s, 122 MB/s

 

Volume2 = 2x 250GB RAID 0, ext4

admin@ML110:~$ sudo dd bs=1M count=256 if=/dev/zero of=/volume2/test conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.77279 s, 96.8 MB/s

As can be seen, volume2 seems to be generally slower than volume1; the only difference is the filesystem used.

Can anyone confirm this?
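
If anyone wants to reproduce the comparison, a quick sketch (volume paths match the test above - adjust to your own layout) that hits both volumes back to back, three times each:

# run the same 256MB fdatasync write against both volumes, three times each
for vol in /volume1 /volume2; do
  echo "== $vol =="
  for i in 1 2 3; do
    sudo dd bs=1M count=256 if=/dev/zero of=$vol/test conv=fdatasync 2>&1 | grep copied
  done
  sudo rm -f $vol/test
done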


6 minutes ago, bearcat said:

As can be seen, volume2 seems to be generally slower than volume1, the difference is the filesystem used.

Can anyone confirm this ?

It seems that you are right.

Results of my tests:

 

RAID 1, 2x 4TB disks in mirror, ext4:

268435456 bytes (268 MB) copied, 2.57591 s, 104 MB/s

 

Basic, 1x 4TB disk, btrfs:

268435456 bytes (268 MB) copied, 1.7589 s, 153 MB/s


9 minutes ago, flyride said:

 

Something is flawed about this.  Write cache turned on?  I don't think there is a spinning disk on the planet that can write 153 MBps.

Dunno, but WD Black drives (on UserBenchmark) show the 1TB 2013 model writing at 139MB/s and the 4TB 2016 model at 175MB/s (sustained at 133 and 170 respectively)!!

They're definitely a lot better than they used to be - and of course, this isn't limited by network speed..  :)

 

 


Having some "free" time, I did a test with one of my Microservers.

HP N54L - 16GB RAM, DS3615xs DSM 6.2-23739 Update 2

5x WD 3TB Red, SHR-1, ext4.

 

Some median values: 

(variations of: dd if=/dev/zero bs=1M count=256 | md5sum)

268435456 bytes (268 MB) copied, 0.839746 s, 320 MB/s

536870912 bytes (537 MB) copied, 1.71475 s, 313 MB/s

1073741824 bytes (1.1 GB) copied, 3.40958 s, 315 MB/s

2147483648 bytes (2.1 GB) copied, 6.94679 s, 309 MB/s

4294967296 bytes (4.3 GB) copied, 13.5109 s, 318 MB/s

8589934592 bytes (8.6 GB) copied, 28.5729 s, 301 MB/s

 

(variations of: sudo dd bs=1M count=256 if=/dev/zero of=/volume1/test conv=fdatasync)

268435456 bytes (268 MB) copied, 1.34079 s, 200 MB/s

1073741824 bytes (1.1 GB) copied, 3.84433 s, 279 MB/s

2147483648 bytes (2.1 GB) copied, 8.32476 s, 258 MB/s

4294967296 bytes (4.3 GB) copied, 16.427 s, 261 MB/s

8589934592 bytes (8.6 GB) copied, 30.308 s, 283 MB/s

 

To "spice" the numbers, I have a 120GB SSD as read-cache, and I can see it getting good use during this test :-)


10 hours ago, ideasman69 said:

with 4 drives in RAID5 on the J3455B-ITX:

1073741824 bytes (1.1 GB) copied, 7.00778 s, 153 MB/s

 

with 4 drives in RAID5 on the J3455M:

1073741824 bytes (1.1 GB) copied, 7.98305 s, 135 MB/s

I'd love to know how you're getting such good performance on that board @Hostilian! Or is that just 1x disk?

RAID 5 sucks, that's why.. :)

 - Slow write, marginally better read, ridiculously slow to rebuild volumes...

Yes, there is more disk wasted but IMO it's worth the trade-off..

My disks are all either RAID 1 or single disk (where I don't really need fault tolerance for a particular disk). Sometimes RAID0 where I need speed over fault tolerance..

All have external backups..


43 minutes ago, Hostilian said:

RAID 5 sucks, that's why.. :)

But it doesn't. When I use any of the 3615 boot loaders, it maxes out the network connection - 115MB/s write / 115MB/s read.

 

With the 1.04b bootloader and 6.2.1, write speeds are much slower. I don't think this has anything to do with RAID5 or not - it's something specific to this 918+ boot loader.


On 11/10/2018 at 11:15 PM, ideasman69 said:

with 4 drives in RAID5 on the J3455B-ITX:

1073741824 bytes (1.1 GB) copied, 7.00778 s, 153 MB/s

 

with 4 drives in RAID5 on the J3455M:

1073741824 bytes (1.1 GB) copied, 7.98305 s, 135 MB/s

 

With the same hardware and the same 4-drive RAID5 volume, but using the 1.02b 3615 bootloader and DSM 6.1.7:

1073741824 bytes (1.1 GB) copied, 1.17239 s, 916 MB/s

 

results speak for themselves 😉


21 hours ago, ideasman69 said:

 

with the same hardware and same 4 drive RAID5 volume but using the 1.02b 3615 bootloader and DSM 6.1.7:

1073741824 bytes (1.1 GB) copied, 1.17239 s, 916 MB/s

 

results speak for themselves 😉

916MB/s for 4 drives in RAID5? Are they SSDs?

If not, it sounds VERY high (for 4 disks) for read and ridiculously high for write on RAID5.

For a normal disk - ignoring the parity in R5 - that's over 225MB/s per disk. I don't have any (non-SSD) disks that get anywhere near that..

 

Yes.. It should be higher than 153MB/s - so there looks to be something wrong..

Do you mind pasting the commands you're running - to benchmark it?


26 minutes ago, Hostilian said:

916MB/s for 4 drives in RAID5? Are they SSDs?

If not, it sounds VERY high (for 4 disks) for Read and ridiculously high for Write for RAID5.

 

Nah, just standard drives, but this high-speed burst is due to the built-in cache on each of the drives. Using a 10GB file gives a real-world result:

sh-4.3# sudo dd bs=1M count=10240 if=/dev/zero of=/volume1/backups/test.file
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 44.0262 s, 244 MB/s

Using a smaller test like 256MB basically only writes to the cache:

sh-4.3# sudo dd bs=1M count=256 if=/dev/zero of=/volume1/backups/test.file
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.278405 s, 964 MB/s
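
One note on those two commands: they don't use conv=fdatasync, so for a small file most of the write can be absorbed by caching (the OS page cache, plus the drives' own caches) before the data ever reaches the array, which is likely why the 256MB figure is so high. Re-running the small test with fdatasync, as in the first post, forces the data to disk before dd reports:

# same 256MB test, but flush the data to disk before dd reports its figure
sudo dd bs=1M count=256 if=/dev/zero of=/volume1/backups/test.file conv=fdatasync
# tidy up the test file afterwards
sudo rm /volume1/backups/test.file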

 
