killseeker Posted June 11, 2015 #1 Posted June 11, 2015

I've been playing around with Link Aggregation. From testing I identified that the Synology OS leaves the transmit hash algorithm on the network bonding interface at its default. This results in reduced outbound performance from your NAS (all TX traffic will exit through the same NIC). After searching all over I came across this link: http://forum.synology.com/enu/viewtopic ... 4c68d30be0

The fix is to add "xmit_hash_policy=layer3+4" to the bonding options in "/etc/sysconfig/network-scripts/ifcfg-bond0". To make it permanent across reboots, open that file, find the BONDING_OPTS setting, and insert "xmit_hash_policy=layer3+4" right after the mode=4 setting. The result looks like this:

BONDING_OPTS="mode=4 xmit_hash_policy=layer3+4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

To change it back, simply remove the xmit_hash_policy option and you are back on the default, layer2.

I found this extremely helpful and wanted to share it here; it will be a good reference in case anyone else comes across this.

Cheers, Kill_Seeker
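If you want to double-check which policy is actually in effect before and after the change, the bonding driver reports it (a quick check over SSH, assuming your bond is named bond0):

# Show the bond mode and the active transmit hash policy
grep -i -E "mode|hash" /proc/net/bonding/bond0
# Before the change it should report something like "Transmit Hash Policy: layer2 (0)",
# afterwards "Transmit Hash Policy: layer3+4 (1)"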
Balrog Posted June 11, 2015 #2 Posted June 11, 2015 Thank you very much for posting these discoveries! I will test this on my HP MicroServer Gen8 and run some new tests.
tunglt75 Posted June 12, 2015 #3 Posted June 12, 2015 @killseeker Could you please write the guide in more detail to help a novice like me fix it? I mean which app, which commands are required, and the steps in detail. Thanks.
Balrog Posted June 12, 2015 #4 Posted June 12, 2015

I just made a little manual:

# Prerequisite: the bond must already be configured and running (on DSM and on the LAN switch)
# Log in as "root" via SSH

# Open the config file in vi:
vi /etc/sysconfig/network-scripts/ifcfg-bond0

# Search for the line:
BONDING_OPTS="mode=4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

# Press "i" to enter vi's insert mode and change the line to (all on one single line!):
BONDING_OPTS="mode=4 xmit_hash_policy=layer3+4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

# Press "Esc" and then ":wq" (without the quotes, of course; wq stands for "write and quit vi")
# Reboot and you're done

Now I must test whether the change results in an improvement of the speed.
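If you want to try the new policy without a reboot first, the bonding driver also exposes it through sysfs (a sketch, assuming the bond is named bond0; the change is not persistent, and some kernels refuse it while the bond is up, in which case you have to go through the config file and reboot as above):

# Set the hash policy at runtime (not persistent across reboots)
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
# Verify the active policy
cat /sys/class/net/bond0/bonding/xmit_hash_policy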
killseeker Posted June 12, 2015 Author #5 Posted June 12, 2015 That's awesome Balrog!!! Thanks for making such an easy-to-follow guide.
mervincm Posted June 15, 2015 #7 Posted June 15, 2015 With DSM 5.2 there are also additional LAG options that are simpler to implement than LACP. I was able to see more than 1 GbE worth of outbound traffic without using this trick. Still, very good post!
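A rough way to see whether outbound traffic really spreads across the physical ports is to compare the per-NIC transmit counters while copying from the NAS to two or more clients (assuming the slave interfaces are eth0 and eth1; adjust the names for your box):

# Read the TX byte counters, wait a few seconds, read them again and compare the deltas
cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes
# Or watch all interfaces at once
cat /proc/net/dev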
at3tb Posted June 15, 2015 #8 Posted June 15, 2015 Hello, unfortunately after this change my complete network access is gone. I cannot reach the DS anymore. Even if I install a different network card, I do not get a link up. What can I do now to get access to the NAS again? Thank you
dachoeks3 Posted June 19, 2015 #9 Posted June 19, 2015 Is there a maximum number of NICs you can bond? On my box I have a total of 5 GbE ports, but when I try to bond 4 of them, only two end up being bonded. I use a LAG with LACP on a Cisco switch.
dachoeks3 Posted July 20, 2015 #10 Posted July 20, 2015 Following up on my earlier question about only two of four NICs being bonded: if anyone is having the same issue, it turned out I had entered the MAC addresses wrong in the config file on my USB drive.
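For anyone chasing the same symptom, the bonding driver also shows which slaves actually joined the aggregate (a quick check, assuming the bond is named bond0; slaves that negotiated into the same LAG should share one Aggregator ID):

# List the slaves, their link status and their LACP aggregator IDs
grep -E "Slave Interface|MII Status|Aggregator ID" /proc/net/bonding/bond0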
kei78 Posted July 21, 2015 #11 Posted July 21, 2015 As for the maximum number of NICs you can bond: in the Cisco world that would be up to 16, but only 8 would be active at any given time. The others in the channel group would be used as hot spares.