karlpox

Slow Transfer Speeds from Bonded Interface


Hi,

 

Anybody else experiencing slow transfer speeds from bonded LAN ports? I have four Intel NIC ports bonded together.

 

[Screenshot: 7WkCNTN.jpg]

 

[Screenshot: ele52Zc.png]

 

Any thoughts on this? I am using the latest DSM version, 5.2-5592 Update 2.

 

Karl


Because your server has 4 bonded LAN ports but the PC you're pulling the file down onto only has a single Gb port?


And what kind of switches are you using, and what is your bond configuration? Many configurations will not increase single-connection speed, meaning one computer will still top out at 1 Gbps, but you can have multiple computers each getting up to 1 Gbps. Everyone likes a car analogy... it means you are adding lanes to the road but not increasing the speed limit. However, if your computer and switch BOTH support LACP, and your computer has 4 interfaces as well, you *might* be able to get more than 1 Gbps. That would be rare in a home environment.
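To illustrate why a single conversation stays on one lane: Linux bonding's default layer2 transmit hash XORs the source and destination MAC addresses and takes the result modulo the number of slave links. The sketch below is a simplification of that idea, not the actual kernel code, and the MAC addresses are invented for the example.

```python
# Simplified sketch of a layer2 (MAC-based) bonding transmit hash:
# XOR the last byte of the source and destination MAC, modulo the
# number of slave interfaces. (Illustration only, not kernel code;
# the MAC addresses below are made up.)

def layer2_hash(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % n_slaves

server = "00:11:32:aa:bb:01"   # hypothetical NAS MAC
client = "a4:5e:60:cc:dd:02"   # hypothetical PC MAC

# Every frame of this client/server conversation hashes to the same
# slave, so one client never uses more than one 1 Gb link of the bond.
print(layer2_hash(client, server, 4))
```

Because the hash inputs never change for a given client/server pair, adding more slave links adds lanes without ever speeding up that one conversation.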

However, if your computer and switch BOTH support LACP, and your computer has 4 interfaces as well, you *might* be able to get more than 1 Gbps.

Even with LACP, bandwidth is limited to one channel (1Gb) per conversation. Only multiple connections will show better network utilization.


@XPEH - I don't think that is correct. If everything from end to end supports LACP, I think you get true aggregation; though your connection speed will still show 1 Gbps, you will be able to get >1 Gbps throughput.

 

 

 

Edit - it will depend on your implementation how this is handled. If your algorithm is a hash of MACs or same-port selection, you will definitely only see one connection's worth of speed. For one stream, you will top out at 1 Gbps.


I am not aware of any practical implementation today that overcomes this limitation.

After a lot of experimentation in different environments, I always ended up with 10Gb solutions when I needed single-connection bandwidth exceeding 1Gb. For multiple simultaneous streams, bonded 1Gb links work great; I have seen as much as 3Gb+ of throughput on 4x1Gb bonds.


Perhaps I am technically wrong. My testing was done on Server 2012 R2 and Windows 8, which both natively support SMB3; the multichannel functionality likely helped achieve better results. In testing with Windows 7 I am seeing the same thing you described.


@andyf

Nope, my PC is at 2 Gbps - you can see it in the 2nd screenshot.

 

@b0fh

In my 2nd screenshot you can see my PC's connection - I'm at 2 Gbps. I'm not expecting anything very high, but I can't even max out one connection of the 4 Gbps. I did try copying from 2 PCs, and the transfer speed dropped to somewhere near 50% on each when I tested. Those are 4 Intel NIC cards. The built-in Realtek GbE is way faster than the 4 Gbps bond - I can transfer around 115 MB/s over that connection alone. My switch supports LACP, and both the clients and the XPEnology box use the same switch.

 

I'm also using 4 WD 3TB Reds.

 

@xpeh

Is that the case with Synology only, or with everything? Before I switched to XPEnology I was using WHS 2011 and was transferring around 120-130 MB/s. It's like the LACP is working as a load balancer.

@xpeh

Is that the case with Synology only, or with everything? Before I switched to XPEnology I was using WHS 2011 and was transferring around 120-130 MB/s. It's like the LACP is working as a load balancer.

Full theoretical speed of 1 Gbps = 125 MB/s (125 megabytes per second). With Ethernet overhead it is slightly less, typically ~115-120 MB/s.

If you are getting real-life transfers over 100 MB/s, that's great; it also means you have fast disks (SSD or RAID) that don't limit your performance.
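Those numbers check out with plain arithmetic, assuming a standard 1500-byte MTU and TCP/IP over Ethernet:

```python
# Back-of-the-envelope check of the numbers above (decimal units).
line_rate_bps = 1_000_000_000                    # 1 Gbps link
theoretical_MBps = line_rate_bps / 8 / 1_000_000
print(theoretical_MBps)                          # 125.0 MB/s

# Rough payload efficiency with a standard 1500-byte MTU: each frame
# carries 1500 bytes but occupies 1538 bytes on the wire (preamble,
# Ethernet header, FCS, inter-frame gap), and TCP/IP headers take
# another 40 bytes out of the payload.
payload = 1500 - 40
on_wire = 1538
practical_MBps = theoretical_MBps * payload / on_wire
print(round(practical_MBps, 1))                  # ~118.7 MB/s
```

Jumbo frames (MTU 9000) shrink the per-frame overhead and push the practical number a little closer to 125 MB/s.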

LACP can run faster, but only when multiple channels are loaded in parallel from different targets/sources. This is also a limitation of Cisco's LACP, for example.

If you have a 4 x 1Gb LAG on the server and four different fast workstations with 1Gb NICs, each workstation will get 1Gb of bandwidth, but on the server you will see closer to a 4Gb transfer.

There are different algorithms and hash methods that can overcome this limitation in some cases. For example, in multipath mode with iSCSI, packets rotate between different NICs and can make the total transfer more efficient, but a single conversation is still limited to a single NIC's throughput.

It's like a multi-lane freeway: a single car will not go faster, but if you run enough cars to fill all the lanes, you move more load in the same amount of time.
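The freeway effect can be sketched with a layer3+4-style hash (the kind Linux bonding offers via `xmit_hash_policy`): mixing IPs and TCP ports lets different connections between the same two hosts land on different slave links, while each single connection still sticks to one NIC. The hash below is a simplification for illustration, not the exact kernel formula, and the IPs and ports are invented.

```python
# Rough sketch of a layer3+4-style transmit hash: IPs and TCP ports
# feed the hash, so separate connections between the SAME two hosts
# can land on different slave links, though any one connection still
# sticks to a single link. (Simplified illustration, not the exact
# kernel formula; addresses and ports are made up.)

def layer3_4_hash(src_ip: str, dst_ip: str,
                  src_port: int, dst_port: int, n_slaves: int) -> int:
    ip_bits = sum(int(octet) for octet in
                  src_ip.split(".") + dst_ip.split("."))
    return (ip_bits ^ src_port ^ dst_port) % n_slaves

# Two parallel SMB connections (port 445) from the same workstation:
flows = [(50001, 445), (50002, 445)]
slaves = [layer3_4_hash("192.168.1.10", "192.168.1.20", sp, dp, 4)
          for sp, dp in flows]
print(slaves)  # the two flows map to different links
```

Two cars, two lanes: the aggregate moves faster even though each individual flow is still capped at one link's speed.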


@xpeh

Great explanation - I never knew it worked like that. I always thought it would be 1+1=2. Very disappointing on my end, since I upgraded all my clients at home to 2 Gbps so I could transfer files faster. I just tried transferring the same file from 2 separate computers, and it did use somewhere between 1-1.76 Gbps of the bonded NICs, with both clients running at 50% of their 2 Gbps, which is 1 Gb each.


The good news is that if you use a 1+1 bond and utilize 50% of it for one task, the other half of your network is still available for other jobs (with different targets) without competing much for bandwidth.


How did you get the Intel LAN card working?

My HP N54L doesn't recognize it.

I have a USB 3 card installed and it doesn't recognize that either.

I am so disappointed by that.

I had a bond with two LAN cards on a Dell 2824 switch.

 

 

Sent from my iPhone using Tapatalk
