TNa681 Posted January 18, 2022 #1

Since I have a DS3515xs system with 1.03b and a 10GbE NIC, I am looking for a speed improvement. Currently I have a RAID 0 of 4x 4TB WD Red CMR 64MB drives, getting around 450 MB/s read/write. Since SATA SSD caching won't help that much, I am thinking about maxing out the RAM to 32GB. Would that be a good idea?
flyride Posted January 18, 2022 #2 (edited)

I hope you mean 450 MBps (megabytes per second), not Mbps (megabits). It will burst at full 10Gbps (about 1.1 GBps) until the RAM fills up. With 32GB of RAM, that will take about 20 seconds. Then throughput will drop to 450 MBps and you will see your drives go to maximum as they try to catch up. You need more spindles (more drives) or faster drives if you want to fully leverage 10GbE. As you have found, Reds are not terribly fast. At some point the bandwidth available to your SATA controller matters as well. A controller with a PCIe 2.0 x2 uplink cannot saturate a 10Gbps card.

Edited January 18, 2022 by flyride
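The burst arithmetic above can be sketched as a back-of-envelope calculation. The figures below are assumptions for illustration (roughly 22GB of the 32GB usable as write cache after DSM's own footprint, ~1100 MB/s line rate, ~450 MB/s sustained array write), not measurements; the real window also depends on the kernel's dirty-page limits.

```python
def burst_seconds(cache_gb, line_rate_mb_s, array_mb_s):
    """Seconds until a RAM write cache of cache_gb is exhausted.

    The cache fills at the difference between the incoming line rate
    and the rate at which the drives drain it to disk.
    """
    fill_rate = line_rate_mb_s - array_mb_s  # net MB/s piling up in RAM
    return cache_gb * 1024 / fill_rate

# Naive fill time (cache / line rate, drain ignored) -- roughly the
# "about 20 seconds" figure:
print(round(burst_seconds(22, 1100, 0)))
# Accounting for the array draining at ~450 MB/s stretches the window:
print(round(burst_seconds(22, 1100, 450)))
```

Either way, once the cache is gone you are back to the sustained speed of the spindles, so RAM only helps for transfers smaller than the cache.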
pocopico Posted January 18, 2022 #3 (edited)

Hi,

A single 7200rpm spindle will max out at 75-100 IOPS. If you use small block sizes (e.g. 4K) to perform the tests, then each drive will do 4K * 75 = 300KB/s, multiplied by four in your case = 1200KB/s = 1.2MB/s. SATA 7200rpm drives can do better on sequential loads with larger block sizes, e.g. 1MB: at 300 IOPS with a 1MB block size you can get 300MB/s. If you wish to get more out of your system, you need either faster disks or more disk spindles. On my system with 8x 480GB SSDs I'm able to get around 25000-30000 IOPS.

It really depends on what you want to achieve, so measure against that. Make sure you are also able to drive the expected values from the host side. A nice tool to perform benchmarks is Microsoft's diskspd, available for Windows and Linux.

Edited January 18, 2022 by pocopico
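The arithmetic above generalizes to throughput = IOPS x block size. A minimal sketch using the rough figures from the post (75 random IOPS and 300 sequential IOPS per spindle are illustrative numbers, not measurements):

```python
def throughput_mb_s(iops, block_kb, drives=1):
    """Aggregate throughput for identical striped drives, in MB/s (decimal units)."""
    return iops * block_kb * drives / 1000

# 4K random I/O across four RAID 0 spindles:
print(throughput_mb_s(75, 4, drives=4))   # 1.2 MB/s
# ~1MB sequential I/O on a single drive:
print(throughput_mb_s(300, 1000))         # 300 MB/s
```

The takeaway is that the same drive delivers wildly different MB/s depending on block size, which is why a benchmark tool like diskspd lets you set the block size explicitly.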
TNa681 Posted January 19, 2022 Author #4

@flyride Thanks for your reply. Sure, I was talking about MBps 😉. Since it seems to be the only option (besides adding more disks), I'll give it a try. All drives are connected to the onboard SATA ports, which in this case means the Intel P67 Express chipset with 2x SATA 6.0Gbps and 2x SATA 3.0Gbps. But I think that shouldn't be the bottleneck, as the single-drive speed of these HDDs is less than a SATA 3Gbps connection can handle, right?

@pocopico Thanks for your input. I think I'll try the RAM caching first and see how it works for me. Usually the biggest files I transfer are 4K video files no bigger than 2-3GB. Rather rarely I transfer complete folders of approx. 100GB each, which, as I understood, could only be sped up with additional disks.
pocopico Posted January 19, 2022 #5 (edited)

Worth mentioning: I hit 100% usage on a single CPU core each time I perform an SMB test. You should also monitor CPU usage with the htop command.

Edited January 19, 2022 by pocopico
TNa681 Posted January 19, 2022 Author #6 (edited)

@pocopico OK, but I don't know how to do that? Can you link an explanation?

Edit: like that... https://www.cyberciti.biz/faq/install-htop-on-macos-unix-desktop-running-macbook-pro/

Note that most of the time I'm using macOS.

Edited January 19, 2022 by TNa681
flyride Posted January 19, 2022 #7

7 hours ago, TNa681 said:

All drives are connected to the onboard SATA ports, which in this case means the Intel P67 Express chipset with 2x SATA 6.0Gbps and 2x SATA 3.0Gbps. But I think that shouldn't be the bottleneck, as the single-drive speed of these HDDs is less than a SATA 3Gbps connection can handle, right?

I wasn't referring to the SATA link speed itself, but rather the aggregate bandwidth through the controller. There are motherboard implementations (and a number of plug-in cards) out there where a SATA controller is attached with just a 1x or 2x PCIe 2.0 uplink, which cannot handle the aggregate burst SATA bandwidth if all the ports are filled. P67 motherboards typically have SATA ports connected directly to the PCH (chipset), which is interfaced to the CPU via DMI (a big pipe for all chipset/CPU traffic), so you don't need to worry about this.
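The uplink concern above can be put in rough numbers. The figures below are nominal spec values used only as a sanity check (PCIe 2.0 delivers about 500 MB/s per lane after 8b/10b encoding overhead; a SATA 6Gbps port can demand about 600 MB/s at burst), not measurements of any particular card:

```python
def uplink_mb_s(lanes, mb_per_lane=500):
    """Usable bandwidth of a PCIe 2.0 uplink (~500 MB/s per lane after 8b/10b)."""
    return lanes * mb_per_lane

def ports_demand_mb_s(ports, mb_per_port=600):
    """Worst-case aggregate burst demand of SATA 6Gbps ports."""
    return ports * mb_per_port

# A 4-port SATA controller hanging off a PCIe 2.0 x2 uplink:
print(uplink_mb_s(2))          # 1000 MB/s available upstream
print(ports_demand_mb_s(4))    # 2400 MB/s worst-case port demand
# 10GbE needs ~1250 MB/s, so an x2 PCIe 2.0 uplink cannot saturate it:
print(uplink_mb_s(2) >= 10000 / 8)
```

This is why the DMI-attached chipset ports are the safer choice here: the controller-to-CPU path is shared but far wider than a 1x or 2x PCIe 2.0 uplink.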
TNa681 Posted January 19, 2022 Author Share #8 Posted January 19, 2022 Thats why I bought in the past a Syba SD-PEX40099 instead of a SI-PEX40064 Quote Link to comment Share on other sites More sharing options...