
Hello,

 

I am noticing that my SHR volume with 6 drives (2 SSDs and 4 HDDs) and a 1 TB read-only SSD cache is very slow, especially when running a virtual machine. Please find attached screenshots with details of my drives and some benchmarks, which show very disappointing results.

 

I just do not know how to make it fast. I should get at least normal HDD speeds. SHR seems to be Synology's take on RAID 5, which should be fast compared to a single HDD. However, my volume is very slow.

 

More information:

  • DS3617xs

  • Intel Xeon E5-2630L v3 (8 cores, 16 threads)

  • 32 GB DDR4 ECC RAM @ 2133 MHz

  • DSM 6.2.3-25426 Update 2

  • Jun's Loader 1.03

  • Total capacity: 19 TB (SHR with 4 HDDs)

  • 1 TB read-only SSD cache (2 Samsung SSDs in RAID 0)

 

Is this normal? Can you please let me know what I should do to get "normal" speeds?

 

PS: I ran a SMART quick test and no drive reported problems. There are also no bad sectors.

 

Annotation 2020-08-23 032629.png

Annotation 2020-08-23 033708.png


ST8000DM004 drives are SMR drives.  They are not going to perform well in a RAID array under stress.

12 minutes ago, flyride said:

ST8000DM004 drives are SMR drives.  They are not going to perform well in a RAID array under stress.

Thanks for the answer.

What should I do? 😕


Use them for another purpose?  If you want to maximize the performance of your array, they will have to be replaced, preferably with drives that are closely performance-matched. You might have other issues inhibiting your system's performance, but half a RAID array on SMR drives will be very noticeable on writes.  Reads should be okay, though.

 

These are the relevant specs for the drives you have:

WD6002FFWX - CMR, 7200 rpm, 128MB cache, 227MBps max rate, 6 heads, 6 tebibytes

WD80EFZX - CMR, 5400 rpm, 128MB cache, 178MBps max rate, 7 heads, 8 tebibytes

ST8000DM004 - SMR, 5400 rpm, 256MB cache, 190MBps max rate, 8 heads, 8 tebibytes

 

The Seagate drives have a larger cache to mask the SMR write performance.

 

FMI: https://blocksandfiles.com/2020/04/15/seagate-2-4-and-8tb-barracuda-and-desktop-hdd-smr/
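The write cliff described above can be sketched with a toy model (all numbers here are illustrative assumptions, not measured figures for these drives): bursts land in the drive's CMR-style media cache at near-interface speed, and once that cache fills, every write degenerates into a shingled-zone read-modify-write.

```python
def smr_write_seconds(total_gb, cache_gb=25.0, burst_mbps=190.0, rmw_mbps=40.0):
    """Time to absorb total_gb of writes in a simplified two-phase SMR
    model: the first cache_gb go at burst speed, the rest at the (much
    slower) read-modify-write rate once the media cache is exhausted."""
    cached = min(total_gb, cache_gb)
    spilled = total_gb - cached
    return cached * 1024 / burst_mbps + spilled * 1024 / rmw_mbps

# Effective throughput collapses once the workload outgrows the cache:
burst = 10 * 1024 / smr_write_seconds(10)     # short burst: ~190 MB/s
steady = 100 * 1024 / smr_write_seconds(100)  # long copy: ~50 MB/s
```

Since a RAID 5 stripe write touches every member, the array runs at the slowest member's sustained rate, so two SMR drives drag the whole volume down as soon as their caches fill.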


As always: SHR + SMR = 💩💩💩

 

😉

 

Besides SMR, I always prefer standard RAID levels (1, 1+0, 5, 6, etc.) with identical disks over SHR. Most of the strange "my volume crashed!" reports have an SHR background. With a plain mdadm RAID you always have the option of mounting the file system on another Linux system.

6 hours ago, flyride said:

Use them for another purpose?  If you want to maximize the performance of your array, they will have to be replaced, preferably with drives that are closely performance-matched. You might have other issues inhibiting your system's performance, but half a RAID array on SMR drives will be very noticeable on writes.  Reads should be okay, though.

 

These are the relevant specs for the drives you have:

WD6002FFWX - CMR, 7200 rpm, 128MB cache, 227MBps max rate, 6 heads, 6 tebibytes

WD80EFZX - CMR, 5400 rpm, 128MB cache, 178MBps max rate, 7 heads, 8 tebibytes

ST8000DM004 - SMR, 5400 rpm, 256MB cache, 190MBps max rate, 8 heads, 8 tebibytes

 

The Seagate drives have a larger cache to mask the SMR write performance.

 

FMI: https://blocksandfiles.com/2020/04/15/seagate-2-4-and-8tb-barracuda-and-desktop-hdd-smr/

 

So my "WD80EFZX" is the slowest... I see. Will a read/write SSD cache help?

 

2 hours ago, jensmander said:

As always: SHR + SMR = 💩💩💩

 

😉

 

Besides SMR, I always prefer standard RAID levels (1, 1+0, 5, 6, etc.) with identical disks over SHR. Most of the strange "my volume crashed!" reports have an SHR background. With a plain mdadm RAID you always have the option of mounting the file system on another Linux system.

Oh yes, that is a very interesting advantage. However, I was looking for flexibility and speed. Based on research I did a few years ago, I thought SHR was faster than RAID 5. I will look into that further.

 

Nevertheless, I have run single-disk benchmarks, and my HDDs perform in line with their theoretical speeds. However, my SSDs do not at all! Please refer to the attached screenshots. Is this normal? What should I do? I am using them as a read-only SSD cache, and I made sure nothing was using the disks during the benchmarks.

I also verified that the SATA ports on my motherboard are SATA 3, not SATA 2. They 100% are.

 

Furthermore, my benchmark speeds went back to normal after a shutdown and reboot. Only the SSDs are still slow.

 

Thanks!

 

 

Annotation 2020-08-23 142841.png

Annotation 2020-08-23 142829.png


AFAIK this happens when the SSD's internal cache is full. A value of ~270 MB/s is typical for this.

4 hours ago, Skalyx said:

So my "WD80EFZX" is the slowest... I see. Will a read/write SSD cache help?

Based on research I did a few years ago, I thought SHR was faster than RAID 5.

Furthermore, my benchmark speeds went back to normal after a shutdown and reboot.

 

I'm afraid you are missing the point.  Please go back and read the article I linked about SMR behavior and all will make sense.  I gave you the specs so that you could help match drive replacements, not to say that there was something wrong with the WD80EFZX.  It's correct that the WD80EFZX is slower than the WD Red Pro drive, but the two SMR drives have a much bigger impact on the overall performance regardless of their burst stats.

 

Use a cache or don't.  Personally, I don't as it involves risk of data corruption that I don't choose to have. 

 

The statement about SHR being faster than RAID5 is completely false.  At best, SHR *IS* RAID5.  Depending on the choices you make with drives, it becomes a conjoined series of RAID5 and RAID1 arrays using subpartitions on larger drives.  Those larger drives tend to see more IOPS than the smaller ones, which reduces throughput on the smaller ones.  It's always going to be equal or slower, but you may not notice the difference.
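That tiered layout can be made concrete with a capacity sketch (a simplified model of SHR's one-disk-redundancy tiering; it ignores DSM's system partitions and TB/TiB rounding, so the numbers are illustrative):

```python
def shr_usable(sizes_tb):
    """Usable capacity of a simplified SHR pool (one-disk redundancy):
    drives are sliced into tiers, and each tier spanning >= 2 disks
    becomes a RAID5 (or RAID1) sub-array that gives up one disk's
    worth of space for redundancy."""
    sizes = sorted(sizes_tb)
    usable, prev = 0, 0
    for i, size in enumerate(sizes):
        tier = size - prev         # slice unique to this tier
        disks = len(sizes) - i     # drives tall enough to join it
        if tier > 0 and disks >= 2:
            usable += tier * (disks - 1)
        prev = size
    return usable

# Mixed 6+8+8+8 TB set: plain RAID5 is capped by the smallest drive,
# while SHR reclaims the leftover space as a second RAID5 tier built
# only on the three 8 TB drives.
raid5 = (4 - 1) * 6              # 18 TB usable
shr = shr_usable([6, 8, 8, 8])   # 22 TB usable
```

The second tier is also why the larger drives see more IOPS than the smaller ones: they serve two sub-arrays at once.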

9 hours ago, jensmander said:

AFAIK this happens when the SSD's internal cache is full. A value of ~270 MB/s is typical for this.

 

Well, my cache is only at 34%, so it is not full. I would expect at least 450 MB/s. Do you think the 34% is impacting my SSDs' performance that much?

 

6 hours ago, flyride said:

 

I'm afraid you are missing the point.  Please go back and read the article I linked about SMR behavior and all will make sense.  I gave you the specs so that you could help match drive replacements, not to say that there was something wrong with the WD80EFZX.  It's correct that the WD80EFZX is slower than the WD Red Pro drive, but the two SMR drives have a much bigger impact on the overall performance regardless of their burst stats.

 

Use a cache or don't.  Personally, I don't as it involves risk of data corruption that I don't choose to have. 

 

The statement about SHR being faster than RAID5 is completely false.  At best, SHR *IS* RAID5.  Depending on the choices you make with drives, it becomes a conjoined series of RAID5 and RAID1 arrays using subpartitions on larger drives.  Those larger drives tend to see more IOPS than the smaller ones, which reduces throughput on the smaller ones.  It's always going to be equal or slower, but you may not notice the difference.

 

Yes, I understand. Thank you for explaining that to me. I have read the link and understand the problem much better.

 

Yes, the cache is risky. That is why I only chose a read-only SSD cache, as I am afraid of data corruption.

6 hours ago, Skalyx said:

Well, my cache is only at 34%, so it is not full. I would expect at least 450 MB/s. Do you think the 34% is impacting my SSDs' performance that much?

 

No, I meant the internal cache of your SSDs, not the cache you created within DSM. Every HDD/SSD has an internal cache; if it fills up, performance drops.

6 hours ago, jensmander said:

 

No, I meant the internal cache of your SSDs, not the cache you created within DSM. Every HDD/SSD has an internal cache; if it fills up, performance drops.

Oh yes, that makes sense. Thank you!

