unmesh Posted June 18, 2019 (edited) #1

Having had good luck using multiple NICs to increase file transfer speeds between a Windows 10 desktop and a bare-metal Xpenology install, I'd like to do the same with an Xpenology-on-ESXi-6.7 system. To this end, I installed a quad-port Intel Gigabit NIC alongside the built-in i217LM, and VMware sees the four new physical NICs. I'm at a complete loss for how to get an additional NIC mapped over to the Xpenology VM. I don't want to do teaming; rather, I want to use SMB 3.0 Multichannel end-to-end so that no special network configuration is needed. I will leave the other three Gigabit ports unconnected for now.

Should I add a physical NIC to the existing vswitch as an additional uplink, or will the E1000 driver only allow 1 Gbps of throughput to the VM? If so, do I create an additional vswitch for one of the new Gigabit Ethernet ports?

Any help will be greatly appreciated. Thanks.

Edited June 18, 2019 by unmesh
Olegin Posted June 18, 2019 #2

Show your XPEN VM configuration.
unmesh Posted June 18, 2019 (edited) #3

10 hours ago, Olegin said:
  Show your XPEN VM configuration.

Not able to access my server at the moment, but the VM configuration has two vCPUs, 2 GB of memory, a hard disk for the bootloader on an IDE controller, another for an RDM'ed hard drive on a SATA controller, and an E1000 network adapter connected to vSwitch0.

The Lenovo TS140 came with an i217LM on the motherboard, which shows up as vmnic0 and is configured as the uplink on vSwitch0. The four new ports show up as vmnic1 through vmnic4 but are not configured to anything.

Since my opening post, I did try adding one of the new ports as an additional uplink to vSwitch0, but that did nothing for file transfer performance.

Edited June 18, 2019 by unmesh
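For reference, a quick way to double-check what the host sees (a minimal sketch, run in the ESXi shell or over SSH; the vmnic names above are from my host, yours may differ):

    # list the physical NICs the ESXi host has detected, with driver, link state and speed
    esxcli network nic list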
Olegin Posted June 18, 2019 #4

I don't have an ESXi-with-DSM setup at the moment, but it seems you need to use the E1000e adapter on the DSM VM.
flyride Posted June 18, 2019 #5

Is SMB3 MC supported on DSM? I think it's still highly experimental: https://www.reddit.com/r/synology/comments/90gc61/smb_3_multichannel_support_on_dsm_62/?utm_source=BD&utm_medium=Search&utm_name=Bing&utm_content=PSR1

There is lots of evidence that bonding ports with only a single workstation doesn't improve throughput (even though Windows lies to you about it).

Alternatively, add a supported 10GbE NIC and pass it through to the VM. If you are point-to-point you can use SFP+ cards and an SFP+ DAC instead of a switch, and there are a lot of 2-port 10GbE cards available if you need to scale to two high-speed clients. Works great for me.
unmesh Posted June 19, 2019 (edited) #6

I haven't done stress testing, but it absolutely splits traffic across both GE NICs, as reported by both the Windows 10 desktop and the bare-metal DSM. I suppose I could time the transfer with a watch too and report back to this thread.

After some experimentation, I managed to reproduce the results under ESXi. I created a new vswitch, attached an unused GE port as an uplink, created a new port group that used this vswitch, and added the port group as a new network interface to the DSM VM (rough command-line sketch below). I then configured SMB to use SMB3 through the GUI and by making an edit to smb.conf, and was off and running.

One thing that had concerned me was how Windows would know which two IP addresses to use for the server. It turns out that my network-attached drive from the single-port days was enough to make the connection.

I had considered using 10G but put it off because I wanted to do multipoint and wasn't ready to invest in a 10G switch and a bunch of 10G NICs. I discovered that multiport GE NICs are very cheap on eBay; my i340-T4 was only $15!

I will disconnect the cables to the second port everywhere for now and hope that gets rid of any potential instability or data corruption issues.

Edited June 19, 2019 by unmesh
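In case it helps anyone following along, this is roughly what those ESXi steps look like from the command line (a sketch only; I actually did it in the host client GUI, and the names vSwitch1, SMB-MC and vmnic1 are just example values):

    # create a second standard vswitch for the extra Gigabit port
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    # attach one of the unused physical ports as its uplink
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
    # create a port group on the new vswitch for the DSM VM to connect to
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=SMB-MC

The last step, adding a second network adapter on the SMB-MC port group to the DSM VM, I did from the VM's Edit Settings dialog.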
Olegin Posted June 19, 2019 (edited) #7

10 hours ago, flyride said:
  Is SMB3 MC supported on DSM?

The guys say that it works after changing the settings in smb.conf; link to post in Russian, I translated the main text:

Quote:
  Confirmed - SMB multichannel is working. Edit the config /etc/samba/smb.conf. At the end of the file you need to add:
  server multi channel support=yes
  Then restart the server. You need to turn off the bonding of the NICs.

P.S. I can't test it, maybe someone can try.

Edited June 19, 2019 by Olegin
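Put together as shell commands on the DSM box, the edit above would look something like this (a minimal sketch; run as root over SSH, and the synoservice restart command is an assumption for DSM 6.x, a reboot works just as well):

    # append the multichannel option to the end of Samba's config, as described in the quoted post
    echo "server multi channel support=yes" >> /etc/samba/smb.conf
    # restart the SMB service so the option takes effect (command assumed for DSM 6.x; otherwise reboot)
    synoservice --restart samba

Remember to remove any bond/link aggregation on the DSM network interfaces first, per the quoted post.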
unmesh Posted June 21, 2019 #8

Olegin, that is what I had done too. Thanks.
dc94 Posted August 5, 2020 #9

Working like a charm... I can copy files from the SSD in my computer to a MicroServer Gen8 and vice versa with XPEnology 6.2-23739 DS3615xs :)