XPEnology Community

Using Multiple NICs on ESXi 6.7


unmesh


Having had good luck using multiple NICs to increase file transfer speeds between a Windows 10 desktop and a baremetal Xpenology install, I'd like to do the same with an Xpenology-on-ESXi-6.7 system. To this end, I installed a quad-port Intel Gigabit NIC alongside the built-in i217LM, and VMware sees the four new physical NICs. I'm at a complete loss as to how to get an additional NIC mapped over to the Xpenology VM. I don't want to do teaming; rather, I want to use SMB 3.0 Multichannel end-to-end so that the network itself doesn't have to be configured. I will leave the other three Gigabit ports unconnected for now.

 

Should I add a physical NIC to the existing vswitch as an additional uplink, or will the E1000 driver only allow 1 Gbps of throughput to the VM? If so, should I create an additional vswitch for one of the new Gigabit Ethernet ports?

 

Any help will be greatly appreciated.

 

Thanks.

Edited by unmesh

10 hours ago, Olegin said:

Show your XPEN VM configuration.

 

I'm not able to access my server at the moment, but the VM configuration has two vCPUs, 2 GB of memory, a hard disk for the bootloader on an IDE controller, another for an RDM'ed hard drive on a SATA controller, and an E1000 network adapter connected to vswitch0.

 

The Lenovo TS140 came with an i217LM on the motherboard, which shows up as vmnic0 and is configured as the uplink on vswitch0.

 

The four new ports show up as vmnic1 through vmnic4 but are not assigned to anything.

 

Since my opening post, I did try adding one of the new ports as an additional uplink to vswitch0, but that did nothing for file transfer performance.

 

Edited by unmesh

Is SMB3 MC supported on DSM?  I think it's still highly experimental:

https://www.reddit.com/r/synology/comments/90gc61/smb_3_multichannel_support_on_dsm_62/?utm_source=BD&utm_medium=Search&utm_name=Bing&utm_content=PSR1

 

There is lots of evidence that bonding ports with only a single workstation doesn't improve throughput (even though Windows lies to you about it).

 

Alternatively, add a supported 10GbE NIC and pass it through to the VM. If you are going point-to-point, you can use SFP+ cards and an SFP+ DAC instead of a switch, and there are plenty of 2-port 10GbE cards available if you need to scale to two high-speed clients. Works great for me.


I haven't done stress testing but it absolutely splits traffic across both GE NICs as reported by both the Windows 10 desktop and the baremetal DSM. I suppose I could time the transfer with a watch too and report back to this thread :-)
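
If anyone wants to sanity-check the split on the DSM side, a rough approach (assuming the two interfaces show up as eth0 and eth1; the names may differ on your box) is to compare the byte counters in /proc/net/dev before and after a large copy:

grep -E 'eth0|eth1' /proc/net/dev   # note the RX/TX byte counters
# ...run a large file copy from the Windows machine...
grep -E 'eth0|eth1' /proc/net/dev   # both interfaces' counters should have grown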

 

After some experimentation, I managed to reproduce the results under ESXi. I created a new vswitch, attached an unused GE port as an uplink, created a new port group that used this vswitch, and added the port group as a new network interface to the DSM VM. I then configured SMB to use SMB3 through the GUI and made an edit to smb.conf, and was off and running.
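
For anyone following along, here's a rough sketch of those vswitch steps from the ESXi shell; the vSwitch1, vmnic1, and port group names are just placeholder examples, and the last step (attaching the new port group as a second network adapter on the DSM VM) I did from the vSphere UI:

esxcli network vswitch standard add --vswitch-name=vSwitch1                                 # new vswitch
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1     # attach the unused GE port
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="DSM-Multichannel"   # port group for the VM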

 

One thing that had concerned me was how Windows would know which two IP addresses to use for the server. It turns out that the network drive I had mapped back in the single-port days was enough to make the connection.

 

I had considered going 10G but put it off because I wanted to do multipoint and wasn't ready to invest in a 10G switch and a bunch of 10G NICs. I discovered that multi-port GE NICs are very cheap on eBay; my i340-T4 was only $15!

 

I will disconnect the cables to the second port everywhere for now and hope that gets rid of any potential instability or data corruption issues.

Edited by unmesh

10 hours ago, flyride said:

Is SMB3 MC supported on DSM? 

The guys say it works after changing the settings in smb.conf (link to a post in Russian); I translated the main text:

Quote

Confirmed - SMB multichannel is working.
Edit the config /etc/samba/smb.conf
At the end of the file you need to add:


server multi channel support=yes


And restart the server.
You need to turn off the bonding of the NICs.

P.S. I can't test it; maybe someone can try.
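
Put together, a minimal sketch of that change from an SSH session on the DSM box (assuming root access; how you restart the SMB service can vary between DSM versions):

# Append the multichannel option to Samba's configuration:
echo 'server multi channel support = yes' >> /etc/samba/smb.conf
# Then restart the SMB service so smbd re-reads the config, e.g. by
# toggling SMB off and back on under Control Panel > File Services.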

Edited by Olegin
