karlpox

link bond / LACP


Has anybody tried this? Does it double your network speed?

 

I installed two dual-port Intel NICs with link aggregation support, set up the switch to accept link aggregation on the relevant ports, and bonded all 4 ports on my XPEnology setup. My transfer speeds are now between 60-80 MB/s, when I was getting around 90-120 MB/s before the bond, on the built-in Realtek port.

 

Karl


I've tried it too and it doesn't work well. I used the motherboard chip plus an Ethernet card, both 1 Gbit.

 

Perhaps with DSM 5.2 it will work better with different kinds of LAG... wait and see.

 

K-Li


I have a bond on an HP MicroServer Gen8, with LACP on a Netgear switch.

Max 115 MB/s; the server side is also on LACP, with 2 cards.


It does not work that way. You can get a 2 Gbit link, but a single transfer will not run faster than 1 Gbit.

I tried server and client with dual Ethernet, got a 2 Gbit link on both sides, but still topped out at 115-120 MB/s.
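The single-transfer cap comes from how a bond picks a physical link: each flow is hashed to one slave NIC, so one TCP stream always rides one 1 Gbit port no matter how many ports are bonded. A minimal Python sketch, loosely modeled on the Linux bonding driver's layer3+4 transmit hash (the exact formula and the function name here are simplified illustrations, not the real driver code):

```python
def pick_slave(src_ip_octet, dst_ip_octet, src_port, dst_port, n_slaves):
    """Simplified layer3+4-style transmit hash: same flow -> same slave NIC."""
    h = src_port ^ dst_port
    h ^= src_ip_octet ^ dst_ip_octet
    h ^= h >> 8
    return h % n_slaves

# One SMB transfer is one TCP flow, so its 5-tuple never changes:
first = pick_slave(10, 20, 50000, 445, 2)
again = pick_slave(10, 20, 50000, 445, 2)
print(first == again)  # True: the whole transfer stays on one 1 Gbit link
```

A second client (different IP and source port) can hash onto the other slave, which is why two machines together can exceed 1 Gbit even though a single one never does.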


On a single 1 Gb link I can manage around a 110 MB/s transfer, and on a dual 1 Gb channel group (2 Gb link) I have managed around a 170 MB/s transfer rate. This is on my Cisco 2960G switch.


Yes, I tested this using two different machines (IPs) pushing data at the same time to my (port-channeled) NAS. All of my machines have a single NIC (with the exception of my gaming PC), so the only way to exceed a single-link transfer of ~120 MB/s and verify this setup was to use two machines at the same time (reached 170 MB/s combined).

 

In my case, my main objective was to make sure my NAS had the throughput when asked for it. On a daily basis a single link is fine, and at those other times it can push more than the usual 110-120 MB/s when needed.

 

EDIT: I found this post, which basically reiterates what I just said (see the second poster)...

 

viewtopic.php?f=2&t=4898

 

There was also a post from another user here a few months ago who showed their findings going from 1 NIC to 2 NICs, all the way up to 4 NICs. Each added link was worth less than the last. Kind of similar to the results when using multiple graphics cards for gaming: two cards give about a 50% increase over one, three cards about a 25% increase over two, and four cards about a 15% increase over three. (I'll have to find that post; it was here a few months ago.)
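Taking percentages like those at face value, the aggregate scaling works out as follows. The 110 MB/s base is the single-gigabit ceiling reported in this thread, and the gain figures are the rough percentages quoted above, not measurements of mine:

```python
base = 110                   # MB/s, rough single-gigabit ceiling from this thread
gains = [0.50, 0.25, 0.15]   # reported rough gain adding a 2nd, 3rd, and 4th link

totals = [base]
throughput = float(base)
for g in gains:
    throughput *= 1 + g
    totals.append(round(throughput))

print(totals)  # [110, 165, 206, 237]: each extra link buys less than the last
```

So even under these optimistic figures, four links deliver nowhere near 4x a single link.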

 

For the OP: if you want to test your setup, you're going to need multiple PCs hammering away at your NAS to see whether you can exceed your 1 Gb link.


As others have said, bonding is only useful when you have multiple simultaneous activities, as each stream is limited to a single GbE link's capacity. You really need a specific usage pattern to regularly exceed what a single GbE link will do. Even if you have multiple systems accessing the NAS simultaneously, chances are (unless the NAS has SSDs) that the disks will not be able to keep up with your clients. Add in a little extra head movement on the NAS HDDs and you will not be able to aggregate more than what a single gigabit link could do.
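The disk-bottleneck point can be put in rough numbers. The HDD figures below are illustrative assumptions (round figures for a single consumer spinning disk), not measurements:

```python
# Back-of-envelope check: can one spinning disk feed a 2 Gbit bond?
clients = 2
per_client = 110       # MB/s, the single-gigabit ceiling reported in this thread
hdd_sequential = 160   # MB/s, assumed sequential rate of one consumer HDD
seek_penalty = 0.5     # assumed loss when two streams interleave and force seeks

demand = clients * per_client           # what a saturated 2 Gbit bond would need
supply = hdd_sequential * seek_penalty  # what the disk can realistically sustain

print(demand, supply)  # 220 vs 80.0: the disk, not the bond, is the bottleneck
```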

 

Bonding also gets easier to do in 5.2, as you don't have to worry about configuring your switch for LACP.
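For context on why that needs no switch-side setup: in standard Linux bonding terms, 802.3ad (LACP) requires a matching configuration on the switch, while adaptive load balancing (balance-alb) works with any plain switch. An illustrative sketch of the generic Linux side; DSM manages bonding through its own UI rather than these files, and the paths shown are common Linux conventions, not DSM-specific:

```shell
# 802.3ad (LACP): the switch ports must be configured as a LACP group.
# balance-alb: no switch configuration needed; the bonding driver itself
# balances traffic via ARP negotiation.

# Example module options (path is a common convention, not DSM's):
#   /etc/modprobe.d/bonding.conf
#     options bonding mode=balance-alb miimon=100

# Or at runtime (the bond must be down while changing its mode):
#   ip link set bond0 down
#   echo balance-alb > /sys/class/net/bond0/bonding/mode
#   ip link set bond0 up
```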


I'm still on DSM 5.1-5022 Update 5. The 4 ports in my XPEnology box aren't working together even though the bond is created. Transfers from the XPEnology box to my HTPC (single NIC) run at 100-120 MB/s, but transfers to my main setup (2 NICs at 2 Gbit) crawl at 3-4 MB/s. My current setup is basically just using a single NIC in the XPEnology box.
