
PC->NAS / NAS->PC 10GbE Transfer Rates CRAZY Up/Down!! Solution Please?


Can someone check out this video I made and tell me if this is right, or how to fix these wild variances in speeds?

Transferring a 25 GB file back and forth between PC and NAS.

Speeds are ALL over the place!

Problem is, every transfer starts fast, then the speed swings up and down for the rest of the copy!



Setup: XPEnology DS918+ with all SSDs in RAID 0 and an Intel X540-T2 10GbE NIC, direct-connected to a PC with a 970 Pro NVMe M.2 drive and a Dell QLogic BCM57810 10GbE card.



Are speeds supposed to always vary like this??? Is it like this for everyone else???




Edited by Captainfingerbang
Added Video

1 answer to this question


What type and how many SSDs are in the NAS? On the PC-to-NAS copy it looks like the RAM cache on the NAS fills up, at which point you see the actual write capability of the drives. But even that seems pretty slow. It wouldn't surprise me if you are using consumer SSDs; they have really poor sustained write performance. They will burst for a short while until their local write cache is full, and then they get very slow. Also, it takes the NAS a short while to flush its RAM cache before it can return to burst speed (depending on the timing, this might affect your NAS-to-PC test too). You can prove this to yourself by noticing that the drives keep running for a bit after the copy to the NAS is done.
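That burst-then-slow pattern can be sketched numerically. Here's a minimal model of a copy that runs at burst speed until a write cache fills, then falls to sustained speed; every number below (cache size, burst and sustained rates) is an illustrative assumption, not a measurement of this setup:

```python
def avg_transfer_speed(file_gb, cache_gb, burst_mbs, sustained_mbs):
    """Average MB/s for a copy that bursts into a write cache,
    then drops to sustained speed once the cache is full."""
    file_mb = file_gb * 1024
    cache_mb = cache_gb * 1024
    if file_mb <= cache_mb:
        return burst_mbs  # whole file fits in cache: all burst
    # Time spent bursting into the cache, plus time at sustained speed.
    t = cache_mb / burst_mbs + (file_mb - cache_mb) / sustained_mbs
    return file_mb / t

# Hypothetical numbers: 25 GB file, ~4 GB of effective cache,
# 1100 MB/s burst (near 10GbE line rate), 300 MB/s sustained.
print(f"{avg_transfer_speed(25, 4, 1100, 300):.0f} MB/s average")
```

With numbers like these, the average lands far closer to the sustained rate than the burst rate, which is why the file manager's speed graph spends most of its time well below line rate.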


However, I'm confused by your video: at 1:28, are you initiating the copy from NAS to PC, or were you running both copies at the same time? It seems to start at 20%, which is weird, and if you ran copies in both directions at the same time your measurements would be highly skewed. The NAS-to-PC copy seems slow regardless; two decent SSDs of any type should be able to fill a 10Gbit pipe, and your NVMe drive should write a lot faster than that.


Don't forget you are testing both sides of the connection at all times. The 970 Pro is a good NVMe SSD, but it won't sustain indefinite writes either, and in particular it thermal throttles. What CPU/motherboard is on your client side?


Are you using jumbo frames? I'm not sure that's your problem right now, but if you want to max out the 10Gbit link over CIFS/NFS etc., they will be necessary.
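For reference, the raw header savings from jumbo frames are fairly modest (the bigger practical win is fewer packets, so less per-byte CPU and interrupt overhead). A rough back-of-the-envelope, assuming plain IPv4/TCP with no header options:

```python
def payload_efficiency(mtu, proto_overhead=40, frame_overhead=38):
    """Fraction of wire bandwidth left for payload.
    proto_overhead: IPv4 (20 B) + TCP (20 B) headers, no options.
    frame_overhead: Ethernet header + FCS (18 B) plus
    preamble + inter-frame gap (20 B) on the wire."""
    return (mtu - proto_overhead) / (mtu + frame_overhead)

for mtu in (1500, 9000):
    eff = payload_efficiency(mtu)
    print(f"MTU {mtu}: {eff:.1%} efficient, "
          f"~{10_000 * eff / 8:.0f} MB/s max payload on 10GbE")
```

So jumbo frames buy a few percent of theoretical throughput on paper; the reason they matter for actually hitting line rate over CIFS/NFS is the roughly 6x reduction in packets the NIC and CPU have to process.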


Over time, I've been able to tweak out any variability or drops for reads and writes, and now my system's 10Gbit transfer rate is a flat 1+ GB/s indefinitely. The biggest factors in achieving that were: 1) enterprise-class SSDs in the NAS, 2) jumbo frames, and 3) adequate cooling on the client NVMe SSD.


Oh, and I'm using RAID F1 (currently with five disks), but it handled the write load even with three enterprise drives. RAID 0 should not be necessary for the performance you're looking for.


EDIT: just curious, are you also using DSM's SSD cache? That will probably give you worse performance with this setup.

Edited by flyride
