Does RAM caching make sense for 3615xs?



Since I have a DS3615xs system with loader 1.03b and a 10GbE NIC, I am looking for a speed improvement.

Currently I have a RAID 0 of 4x 4TB WD Red CMR drives (64MB cache), getting around 450 MB/s read/write.

 

Since SATA SSD caching won't help that much, I am thinking about maxing out the RAM to 32GB.

 

Would that be a good idea?


I hope you mean 450 MBps (megabytes per second), not Mbps (megabits per second).

 

It will burst at full 10Gbps (about 1.1 GBps) until the RAM fills up.  With 32GB of RAM, that will take about 20 seconds.

Then throughput will drop to 450 MBps and you will see your drives go to maximum as it tries to catch up.
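The burst behavior described above can be sketched with back-of-the-envelope arithmetic. This is a rough model: the amount of RAM the kernel actually uses as write cache (16GB assumed here) is an assumption, and DSM's real numbers will vary.

```python
# Rough model of a 10GbE write burst landing in RAM (page cache)
# while it drains to a 4-drive RAID 0 array. All figures approximate.

inflow = 1.1   # GB/s, roughly what 10GbE delivers after overhead
drain = 0.45   # GB/s, sustained array write speed
cache = 16.0   # GB of RAM usable as write cache (assumption, not measured)

# The cache fills at the *net* rate: inflow minus what the drives absorb.
net_fill = inflow - drain            # 0.65 GB/s
burst_seconds = cache / net_fill     # how long the full-speed burst lasts

print(f"burst lasts about {burst_seconds:.0f} s, then drops to {drain * 1000:.0f} MB/s")
```

Once the cache is full, sustained throughput is simply the array's drain rate, no matter how much RAM is installed.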

 

You need more spindles (more drives) or faster drives if you want to fully leverage 10GbE.  As you have found, Reds are not terribly fast.

 

At some point the bandwidth available to your SATA controller matters also.  A controller with a PCIe 2.0 x2 uplink cannot saturate a 10Gbps card.

Edited by flyride

Hi,

 

A single 7200rpm spindle will max out at 75-100 IOPS on random I/O. If you test with small block sizes (e.g. 4K), each drive delivers about 4KB × 75 = 300KB/s; multiplied by four in your case, that is 1200KB/s = 1.2MB/s. SATA 7200rpm drives do better on sequential loads with large block sizes, e.g. 1MB: at 300 IOPS with a 1MB block size you can get 300MB/s. If you wish to get more out of your system, you need either faster disks or more disk spindles.
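The arithmetic above reduces to a simple formula, throughput = IOPS × block size × drives. A quick sketch (the IOPS figures are the rough per-spindle estimates quoted above, not measured values):

```python
def throughput_mb_s(iops: int, block_kb: int, drives: int = 1) -> float:
    """Throughput = IOPS x block size, summed across spindles, in MB/s."""
    return iops * block_kb * drives / 1024

# Small random 4K blocks: ~75 IOPS per 7200rpm spindle, 4 drives
print(throughput_mb_s(75, 4, 4))      # ~1.2 MB/s

# Large sequential 1MB blocks: ~300 IOPS on a single drive
print(throughput_mb_s(300, 1024))     # 300 MB/s
```

The same spindle count gives wildly different MB/s depending on block size, which is why random and sequential benchmarks must be compared separately.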

 

On my system with 8x 480GB SSDs I'm able to get around 25,000-30,000 IOPS.

 

It really depends on what you want to achieve; measure against that. Make sure the host side is also able to drive the expected values. A nice tool for benchmarks is Microsoft DiskSpd, available for Windows and Linux.
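A typical DiskSpd invocation might look like this. This is only a sketch: the drive letter and test file path are placeholders, and the flags should be checked against the documentation of your DiskSpd version.

```shell
# 4K random read test: 4 threads, 32 outstanding I/Os per thread,
# 60 seconds, OS/hardware caching disabled (-Sh), latency stats (-L),
# against an auto-created 10GB test file.
diskspd.exe -c10G -b4K -r -t4 -o32 -d60 -Sh -L D:\testfile.dat

# Sequential 1MB write test for comparison (100% writes):
diskspd.exe -c10G -b1M -si -t2 -o8 -w100 -d60 -Sh -L D:\testfile.dat
```

Running both a small-block random test and a large-block sequential test shows the IOPS-versus-throughput split discussed above.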

 

Edited by pocopico

@flyride 

Thanks for your reply. Sure, I was talking about MBps 😉. Since it seems to be the only option (besides adding more disks), I'll give it a try.

 

All drives are connected to the onboard SATA ports, which in this case means the Intel P67 Express chipset with 2x SATA 6.0Gb/s and 2x SATA 3.0Gb/s ports. But I think that shouldn't be the bottleneck, as the single-drive speed of these HDDs is less than what a SATA 3.0Gb/s connection can handle, right?

 

@pocopico

Thanks for your input. I think I'll try the RAM caching first and see how it works for me. Usually the biggest files I transfer are 4K video files no bigger than 2-3GB. Only rarely do I transfer complete folders of approx. 100GB each, which as I understood could only be sped up with additional disks.

1 hour ago, TNa681 said:

I think I'll try the RAM caching first and see how it works for me. [...]

Worth mentioning: I hit 100% usage on a single CPU core each time I perform an SMB test. You should also monitor CPU usage with the htop command.

Edited by pocopico
7 hours ago, TNa681 said:

All drives are connected to the onboard SATA ports, which in this case means the Intel P67 Express chipset with 2x SATA 6.0Gb/s and 2x SATA 3.0Gb/s ports. But I think that shouldn't be the bottleneck, as the single-drive speed of these HDDs is less than what a SATA 3.0Gb/s connection can handle, right?

 

I wasn't referring to the SATA speed itself, but rather the aggregate from the controller.  There are motherboard implementations (and a number of plug-in cards) out there where a SATA controller is implemented with just 1x or 2x PCIe 2.0 uplink which cannot handle the burst SATA bandwidth in aggregate if all the ports are filled.  P67 motherboards typically have SATA ports connected directly to the PCH (chipset) which is interfaced to the CPU via DMI (a big pipe for all chipset/CPU traffic) so you don't need to worry about this.
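The controller-uplink concern can be put into numbers. This sketch uses nominal PCIe 2.0 figures (500 MB/s per lane after 8b/10b encoding); real-world efficiency is lower still due to protocol overhead.

```python
# Nominal PCIe 2.0 bandwidth: 5 GT/s per lane with 8b/10b encoding
# => 500 MB/s per lane, before protocol overhead.
lane_mb_s = 500

pcie2_x2 = 2 * lane_mb_s     # 1000 MB/s raw for an x2 uplink
ten_gbe = 10_000 / 8         # 1250 MB/s, i.e. 10Gbps expressed in MB/s

# An x2 PCIe 2.0 uplink cannot feed a saturated 10GbE stream.
print(pcie2_x2 < ten_gbe)  # True
```

A chipset-attached (PCH) SATA controller rides the much wider DMI link instead, which is why the P67 setup above avoids this particular bottleneck.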

