XPEnology Community


Posted

I've been playing around with Link Aggregation. From testing I identified that the Synology OS doesn't change the hash algorithm on the Network Bonding interface. This results in reduced outbound performance from your NAS (All TX traffic will exit the same NIC interface).
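If you want to confirm this on your own box first (assuming the standard Linux bonding driver and that your bond is named bond0), you can check the active mode and hash policy and watch the per-NIC TX counters while pushing traffic off the NAS:

# Show the bond's current mode and transmit hash policy:
cat /proc/net/bonding/bond0 | grep -i -E "mode|hash"

# Watch the per-slave TX byte counters while copying data off the NAS; with the
# default layer2 policy, typically only one slave's counter keeps growing:
cat /sys/class/net/eth0/statistics/tx_bytes
cat /sys/class/net/eth1/statistics/tx_bytes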

 

After searching all over, I came across this link: http://forum.synology.com/enu/viewtopic ... 4c68d30be0

 

To fix this, you need to add "xmit_hash_policy=layer3+4" to the bonding options in "/etc/sysconfig/network-scripts/ifcfg-bond0".

 

To make this permanent across reboots, open /etc/sysconfig/network-scripts/ifcfg-bond0, find the BONDING_OPTS setting, and insert "xmit_hash_policy=layer3+4" after the mode=4 setting. The result is as follows:

BONDING_OPTS="mode=4 xmit_hash_policy=layer3+4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"
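If you want to try the policy before committing it to the config file, you can also switch it at runtime via sysfs (assuming DSM exposes the standard Linux bonding sysfs interface; on some kernels the bond has to be taken down first). The runtime setting is lost on reboot unless you also edit ifcfg-bond0 as described above:

# Temporarily switch the transmit hash policy at runtime:
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

# Verify that the change took effect:
cat /sys/class/net/bond0/bonding/xmit_hash_policy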

 

To change it back, simply remove the xmit_hash_policy option; the default is layer2.
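If you went the runtime route sketched above, the equivalent revert (again assuming the standard sysfs interface) would be:

echo layer2 > /sys/class/net/bond0/bonding/xmit_hash_policy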

 

I found this extremely helpful and wanted to share it here; it will be a good reference to keep in case anyone else comes across this.

 

Cheers,

 

Kill_Seeker

Posted

@killseeker

Could you please write the guide in more detail to help a novice like me fix it? I mean which app, which commands are required, and the steps in detail.

Tks.

Posted

I just made a short manual:

 

# Prerequisite: the bond must already be configured and running (in DSM and on the LAN switch)

 

# Log in as "root" via SSH

 

# Open the config file in vi:

vi /etc/sysconfig/network-scripts/ifcfg-bond0

# Search for the line:

BONDING_OPTS="mode=4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

# press "i" to enter the "input-mode" from vi and change it to (all in one single line!):

BONDING_OPTS="mode=4 xmit_hash_policy=layer3+4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast

 

# press "esc" and then ":wq" (without the " of course :wink: ) (wq stands for "write and quit vi afterwards")

 

# Reboot and you're done
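A quick way to double-check after the reboot (the exact wording of the output varies a bit between kernel versions):

# Verify the active transmit hash policy of the bond:
cat /proc/net/bonding/bond0 | grep -i hash
# It should report something like: Transmit Hash Policy: layer3+4 (1)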

 

Now I have to test whether the change results in an improvement in speed. :wink:

Posted

With DSM 5.2 there are also additional LAG options that are simpler to implement than LACP.

 

I was able to see more than 1 GbE worth of outbound traffic without using this trick.
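For anyone curious, a switch-independent bond (which I assume is the kind of simpler option meant here, e.g. adaptive load balancing, which needs no LACP/LAG configuration on the switch) would look roughly like this in ifcfg-bond0; the exact options DSM writes are not confirmed, so treat this as a sketch:

# Adaptive load balancing (balance-alb, mode 6); no switch-side LACP required,
# and lacp_rate is not used in this mode:
BONDING_OPTS="mode=6 use_carrier=1 miimon=100 updelay=100"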

 

Still, very good post!

Posted

Hello

 

Unfortunately, after this change my network connection is completely gone.

I can no longer access the DS.

Even if I install a different network card, I do not get a link up.

 

What can I do now to regain access to the NAS?

 

Thank you

Posted

Is there a maximum number of NICs you can bond? On my box I have a total of 5 GbE ports, but when I try to bond 4 of them, only two end up being bonded. I use a LAG with LACP on a Cisco switch.

Posted

If anyone is having the same issue: it turned out I had set the MAC addresses incorrectly in the config file on my USB drive.
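In case it helps anyone debug the same thing, you can see which NICs actually joined the bond and the permanent MAC each slave reports (assuming the standard /proc interface of the Linux bonding driver):

# List the slave interfaces and their permanent hardware addresses:
cat /proc/net/bonding/bond0 | grep -i -E "slave interface|permanent hw addr"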

Posted

 

As for the maximum number of NICs you can bond: in the Cisco world that would be up to 16, but only 8 would be active at any given time. The others in the channel group would be used as hot standbys.
