easyronny Posted December 23, 2019 #1 (edited)

All,

I have had an XPEnology system up and running for a few years. I started with an AMD AM1 board with a single NIC. This week I upgraded the system to the configuration below, but I cannot get more than 122 MB/s over the network, which looks like single-NIC performance. My XPEnology box is connected to a Zyxel GS1900 24-port switch, which supports 802.3ad and should provide more aggregate bandwidth.

Note: if I copy data from 4 PCs at the same time, each connected to the Zyxel switch with a gigabit link, the 122 MB/s is shared between the 4 PCs. I also disabled 802.3ad and let XPEnology do the bonding itself, but that did not increase the speed.

My config is:
ASRock P67 Extreme6
i7-2600K
8 GB memory (onboard NIC and SATA controllers disabled in BIOS)
ATI 5450
Dell PERC H200 (IT mode)
Intel i350-T4 V2 (quad NIC in one bond)
6x IronWolf 8 TB
Loader 1.03b DS3615 (latest)
DSM 6.2.2 (latest build)

If someone has a tip or an idea, please let me know.

Many thanks for your help,
ERonny

Edited December 23, 2019 by easyronny
flyride Posted December 23, 2019 #2

Based on your description, I think you understand a single client device won't get any additional bandwidth from a bonding configuration. The only benefit will be from multiple, concurrent clients.

Even with concurrent clients, there is a pseudo-random method (usually hashing of IP address, IP socket ports or MAC) that causes a TCP/IP stream to traverse a specific ethernet port. How it works is specifically dependent upon the bonding method used. But having 4 clients running concurrently does not mean that all four ports will be used. It is possible to have 4 clients hash to a single port and be no better off.

Your switch maximum throughput performance may be a factor as well.
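To illustrate why concurrent clients can still end up sharing one link, here is a rough Python sketch of a layer2+3-style transmit hash. This is not the actual algorithm used by the Zyxel switch or the Intel driver (both are implementation-specific and not documented in this thread), and the MAC/IP addresses are made up for the example.

```python
# Simplified illustration of how an 802.3ad-style transmit hash can map
# several clients onto the same physical port. NOT the exact algorithm used
# by any particular switch or driver -- just a layer2+3-style demonstration.
import ipaddress

NUM_PORTS = 4  # quad-port bond, like the i350-T4 setup above

def xmit_hash_l23(src_mac: str, src_ip: str) -> int:
    """Fold the last MAC byte and the last IP byte into a port index."""
    mac_last = int(src_mac.replace(":", "")[-2:], 16)
    ip_last = int(ipaddress.ip_address(src_ip)) & 0xFF
    return (mac_last ^ ip_last) % NUM_PORTS

# Four hypothetical clients -- addresses invented for the example.
clients = [
    ("aa:bb:cc:dd:ee:10", "192.168.1.10"),
    ("aa:bb:cc:dd:ee:14", "192.168.1.14"),
    ("aa:bb:cc:dd:ee:18", "192.168.1.18"),
    ("aa:bb:cc:dd:ee:1c", "192.168.1.28"),
]

for mac, ip in clients:
    print(f"{ip} -> bonded port {xmit_hash_l23(mac, ip)}")
```

With these particular (hypothetical) addresses, three of the four clients land on the same port, so they share a single 1 Gb link even though four links are bonded.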
easyronny Posted December 25, 2019 Author #3

On 12/23/2019 at 7:29 PM, flyride said:
Based on your description, I think you understand a single client device won't get any additional bandwidth from a bonding configuration. The only benefit will be from multiple, concurrent clients.

See my remark: the maximum speed is 122 MB/s, and if I use four PCs each device gets around 30 MB/s. I have seen people online (YouTube) reach 210 MB/s with a dual NIC and two PCs connected.

On 12/23/2019 at 7:29 PM, flyride said:
Even with concurrent clients, there is a pseudo-random method (usually hashing of IP address, IP socket ports or MAC) that causes a TCP/IP stream to traverse a specific ethernet port. How it works is specifically dependent upon the bonding method used. But having 4 clients running concurrently does not mean that all four ports will be used. It is possible to have 4 clients hash to a single port and be no better off.

On my switch I can choose between MAC-based and MAC/IP-based hashing. Which one would you advise?

On 12/23/2019 at 7:29 PM, flyride said:
Your switch maximum throughput performance may be a factor as well.

I did not find an article with an exact performance review, but according to the vendor's site the switch should be able to handle at least two 1 Gb links at full speed at the same time.
flyride Posted December 25, 2019 #4

2 hours ago, easyronny said:
See my remark: the maximum speed is 122 MB/s, and if I use four PCs each device gets around 30 MB/s. I have seen people online (YouTube) reach 210 MB/s with a dual NIC and two PCs connected.

If you do not bond the four ports and instead assign a discrete IP to each port and connect a device directly to it (which is the equivalent of your statement above), you will definitely see throughput that approximates 4x 120 MB/s. The NAS disk performance will then be the limiting factor. When you bond, that perfect distribution is not assured.

2 hours ago, easyronny said:
On my switch I can choose between MAC-based and MAC/IP-based hashing. Which one would you advise?

I cannot advise you on that. If your issue is poor hash distribution, you'll have to experiment and see. There are no standards for exactly how a hash is computed for bonding, so it is up to the switch firmware and the Intel driver. The reality is that the hashing methodology really only works out with a client pool of around 2n or 3n (where n is the number of bonded ports).
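A quick way to get a feel for the point about client pool size: if we pretend the hash assigns each client to a port uniformly at random (real hashes are deterministic, so this is only an intuition aid, not a model of any specific switch or driver), an even spread of 4 clients over 4 ports turns out to be unlikely.

```python
# Enumerate every possible client->port mapping and count how many distinct
# ports each mapping actually uses, assuming a uniform assignment.
from collections import Counter
from itertools import product

PORTS = 4    # bonded ports
CLIENTS = 4  # concurrent clients

outcomes = list(product(range(PORTS), repeat=CLIENTS))
spread = Counter(len(set(o)) for o in outcomes)

total = len(outcomes)
for used_ports in sorted(spread):
    print(f"{used_ports} port(s) in use: {spread[used_ports] / total:.1%} of mappings")
```

Only about 9% of the possible mappings (4!/4^4) put each of the 4 clients on its own port; in every other case at least two clients share a link. With a pool of 8-12 clients (the 2n/3n range mentioned above) the aggregate load spreads out much more evenly.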
easyronny Posted February 8, 2020 Author #5 (edited)

Hi all, @flyride, @IG-88,

I tested the network speed many times and it did not get any faster; I had almost given up until I found something yesterday evening. When I connect a USB drive to my Synology and share that USB disk over the network, the performance is higher. I copied from the USB disk attached to the Synology over the wired network to my laptop, and at the same time started a second copy from the internal SHR (8 disks) to my PC, which is also wired. At that moment the network speed went over 210 MB/s (in the DSM web UI).

Could it be that the Dell PERC H200 (in IT mode) is already at the limit of its performance at around 120/130 MB/s? If that is the case, I would probably like to replace the card.

If I decide to replace the Dell PERC H200 with the controller below, should that give a performance boost to the SHR disks?

IO Crest SI-PEX40137 (https://www.sybausa.com/index.php?route=product/product&product_id=1006&search=8+Port+SATA)
1x ASM1806 (PCIe bridge) + 2x Marvell 9215 (4-port SATA), ~$100 (only the card is needed because the Dell PERC H200 uses the same cables)

If I order two of these cards for the same machine, will that cause an issue? I would also like to go from 8 disks to 12 disks because my 8 disks are currently almost full. Are 12 disks still the maximum with DSM 6.2.2 running Jun's loader 1.03b DS3615xs?

Many thanks for all your time and help,

With kind regards,
ERonny

Edited February 8, 2020 by easyronny
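One way to check whether the H200 and the array are the bottleneck, rather than the network, is to measure sequential read speed locally on the NAS (for example over SSH), taking the NICs out of the picture entirely. Below is a minimal Python sketch of such a test; the file path is a placeholder, and the page cache can inflate results, so point it at a file larger than the installed RAM.

```python
# Quick local sequential-read check: does the array itself top out near
# 120-130 MB/s, or can it go faster when the network is not involved?
import time

# Placeholder path -- point this at any large existing file on the SHR volume,
# ideally bigger than the installed RAM so the page cache doesn't skew results.
TEST_FILE = "/volume1/share/largefile.bin"
CHUNK = 8 * 1024 * 1024  # 8 MiB reads

read_bytes = 0
start = time.monotonic()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        read_bytes += len(block)
elapsed = time.monotonic() - start

print(f"Read {read_bytes / 1e6:.0f} MB in {elapsed:.1f} s "
      f"-> {read_bytes / 1e6 / elapsed:.0f} MB/s")
```

If the local read rate comes out well above 130 MB/s, the controller and disks are probably not the limiting factor, and the bottleneck is more likely on the network or bonding side.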