
Very bad network performance on ESXi 6.0


Riza


Hey guys,

 

I hope somebody can help me, because I've already tried nearly everything. I'm running XPEnology DSM 6.1 Update 4 with Jun's Loader 1.02b in a VM on ESXi 6.0. The ESXi host is an HP MicroServer Gen8 and I'm already using the hpvsa-5.5.0-88OEM driver. I've created three basic volumes, each on a separate disk. If I move a file from one disk to another in the DSM interface, the transfer is really quick (more than 100 MB/s). If I write a file to one of the shares from a Windows VM on the same ESXi host, it is also quick (about 70 MB/s). But if I read a file from the SMB share, it starts at 6-7 MB/s and gets a bit faster, but never exceeds 20 MB/s.
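In case it helps with diagnosing: a raw TCP test would take SMB and the disks out of the picture. A minimal sketch with iperf3 (assuming it is available on both ends, e.g. via a community package on DSM and a Windows build on the client):

# On the DSM box: start an iperf3 server
iperf3 -s

# On the Windows VM: test both directions
iperf3 -c <dsm-ip>        # client -> DSM (the fast "write" direction)
iperf3 -c <dsm-ip> -R     # DSM -> client (the slow "read" direction)

If the -R run is also slow, the problem is in the network path (virtual NIC, offloads, vSwitch) rather than in SMB or the disks.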

 

I have created a new XPEnology VM, changed the VM's network card, and given the VMs more RAM, but nothing helped. Does anyone have an idea why it works perfectly in one direction but so badly in the other?!

 

I've found some threads where people wrote that it works nicely on a bare-metal installation, but I really need the Windows VM on that machine and I also like ESXi very much.


Copying files between volumes/folders on different drives in DSM goes through the internal SATA controllers/mounted ESXi storage controllers, not a virtual NIC, so that's not a network transfer. Is the Windows VM on the same storage controller as the DSM volume you are copying between? Copying through Samba/a drive mapping is in/out traffic on the virtual NIC as well as the storage controllers, so that could be a bottleneck somewhere. The read/write capacity of the drives (5400/7200 rpm) could make a difference in your setup too.
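If you want to rule the disks out, you can also measure raw read speed inside DSM over SSH. A rough sketch (the file path is just an example; dropping the page cache first makes sure the read actually hits the disk):

# Drop caches so we measure the disk, not RAM (run as root)
sync && echo 3 > /proc/sys/vm/drop_caches

# Sequential read of an existing large file
dd if=/volume1/share/bigfile.iso of=/dev/null bs=1M

If dd reports well over 100 MB/s but SMB reads still crawl, the disks are not the bottleneck.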


Just hit the same issue...

Every network transfer starts at 50-60 MB/s, then drops to 11 MB/s and fluctuates there.

 

It doesn't matter whether I use 1, 2 or 3 NICs, a bond, e1000 or vmxnet3 -> transfers between the two machines stay at ~11 MB/s.

 

*** If I open a second transfer from another machine I get the same result, but then I have two transfers running at ~11 MB/s EACH.

*** If I attach another drive, transfers between the RAID volume and that drive work fine - 50-60 MB/s.

 

It seems to be an issue in NIC management?

Is there a VMware Tools package for DS3617xs and DSM 6.1.3-15152?
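(Side note for anyone else hitting this: ~11 MB/s is suspiciously close to a saturated 100 Mbit link, so it may be worth checking what the NIC actually negotiated. From the DSM shell - the interface name here is an assumption:

# Show negotiated link speed and duplex
ethtool eth0 | grep -E 'Speed|Duplex'

If it reports 100Mb/s instead of 1000Mb/s, the problem is at the link/driver level, not in DSM.)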

 


On 24.9.2017 at 0:27 PM, mythnick said:

Every network transfer starts at 50-60 MB/s, then drops to ~11 MB/s, no matter how many NICs, bonded or not, e1000 or vmxnet3... It seems to be an issue in NIC management? Is there a VMware Tools package for DS3617xs and DSM 6.1.3-15152?

 

But is it slow in both directions? If I copy a file from my DSM share to my Windows VM on the same host (same storage controller but a different drive), it's slow as described (max. 11-13 MB/s). But if I copy a file from the same Windows VM to the same DSM share, I get a stable 60-70 MB/s.
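I've read that receive-offload (LRO/TSO) mismatches between the virtual NIC and the guest driver can cause exactly this kind of one-directional slowness, so disabling the offloads is on my list to try. These knobs are just what I plan to test, not a confirmed fix:

# Inside DSM: turn off segmentation/receive offloads on the interface
ethtool -K eth0 tso off gso off gro off

# On the ESXi host (affects all vmxnet3 VMs): disable hardware/software LRO
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0

If reads speed up afterwards, re-enabling them one at a time should point to the culprit.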

 

There is open-vm-tools, which you can use on XPEnology, but it's not like the package for Windows or Linux: there are no drivers inside this package, only tools to read some information about the VM and to shut it down.
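If you want to check whether the daemon from that package is actually running, something like this works over SSH (install paths can differ between package builds):

# Check that the tools daemon is running, and print its version
ps aux | grep [v]mtoolsd
vmtoolsd -v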

 

The crazy thing is that it works perfectly in one direction, but is absolutely slow in the other.


  • 7 months later...
On 25/9/2017 at 2:36 PM, Riza said:

The crazy thing is that it works perfectly in one direction, but is absolutely slow in the other.

 

Exactly the same issue I'm having with my bare-metal Gen8 - no ESXi here. It's like 1 Gbit/s in, 100 Mbit/s out. Did you manage to find any fix for this?


  • 1 month later...

I always struggled with this issue: copying started at 100 MB/s, then after a minute it dropped to 5-10 MB/s.

I never got this resolved on 5.2 or 6.1; it seems related to compatibility between ESXi, the Cougar Point controller and the drivers.

 

So what I did is very simple: I bought a 16 GB microSD card, installed ESXi on it, and set the HP Gen8's SATA controller to AHCI mode.

I bought an ASMedia SATA controller card - ASMedia is very important, since Marvell is not supported.

I boot from the microSD card; my datastore is created on a small 2.5" SSD on that 2-port SATA controller.

I create a virtual machine with a 100 GB virtual disk for my Synology... install and run.

Then I pass through the Cougar Point controller from the HP Gen8, attaching it to my virtual host instead of using RDM mappings.

 

Now my Synology sees the 4 bay drives, and I also have S.M.A.R.T.

And of course, full-speed copying.
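For anyone replicating this: the controller can be identified from the ESXi shell first, and passthrough is then enabled in the vSphere UI (Host > Manage > Hardware > PCI Devices). A rough sketch:

# From the ESXi shell: identify the onboard SATA controller
lspci | grep -i sata

# After enabling passthrough and rebooting, the VM's .vmx gains a line like:
#   pciPassthru0.present = "TRUE"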


  • 2 months later...

I had this issue when I was dabbling with Synology on ESXi two years ago - though I didn't follow it through, as I didn't really have any need to install it inside ESXi until now.

 

First off... screw you, Intel.

Your non-K CPUs have VT-d, but your more expensive 'K' CPUs (in the same range) have VT-d disabled??

Ultimate dick move on your part.

 

Anyway, I'm testing on my old workstation - specs: 3770K, P8Z77-V Pro TB, 16 GB RAM, ESXi 6.7 (fully patched), a selection of disks (for testing at present), and a dual-port Intel NIC in there too.

Syno DSM 6.2.1 is installed.

 

I get a solid 112 MB/s when reading from the disks across gigabit Ethernet.

When writing, it starts off at 112 MB/s (for about 6 seconds), then drops to about 50 MB/s (the average across the whole file is 63 MB/s).

Pretty sure Synology 6.2.1 isn't the issue (well, I hope it isn't), as copying (writing) using the file manager within Syno runs at about the same speed.

So it's VMware and/or the hardware config.
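To convince myself the write drop isn't the storage path, I'd benchmark a write inside DSM directly. A sketch (conv=fdatasync forces the data to actually hit the disk before dd reports a speed):

# Write a 4 GB test file, flushing to disk before dd finishes timing
dd if=/dev/zero of=/volume1/test.bin bs=1M count=4096 conv=fdatasync

# Clean up afterwards
rm /volume1/test.bin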

 

Passthrough isn't an option due to Intel being ass-hats with VT-d (I suppose I could buy a second-hand 3770 CPU on eBay instead), so am I really stuck with this poor performance?

 

Any VT-d workarounds?
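(The only workaround I know of is RDM - mapping the raw disks into the VM without needing VT-d. A sketch of creating a physical-mode RDM from the ESXi shell, with the device ID and paths as examples; the resulting .vmdk is then attached to the DSM VM as an existing disk:

# Find the disk's device identifier
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file on a datastore
vmkfstools -z /vmfs/devices/disks/<disk-id> /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk

Not as clean as passing the whole controller through, but it doesn't need VT-d.)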

 

Thanks!


OK... since the above was tested with Open VM Tools installed, I created a new VM using DSM DS3615 6.2 (u2).

Performance was definitely slower than before (down to 20 MB/s) until I installed Open VM Tools (v10.1.15).

 

Does anyone know if the newest version of the tools (10.3.x) is in the works? It would be good to see if it makes any difference.

 

#H

 


  • 1 month later...

I just got my virtual NAS set up in ESXi 6.7 with 2 cores of an E3-1270 v2 and 2 GB of RAM, and gave it an E1000E NIC plus a vmxnet3 NIC in active/failover (for compatibility, in case one stops working after an update).

 

I have 3 x 4 TB 3.5" disks and a super old 1 TB Samsung drive, all in SHR, and I get around 400 MB/s read, 330 MB/s write. So this is either fixed, or something is wrong with your setups. I thought I would post, as this thread could put people off!
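For anyone copying this NIC setup, the virtual hardware side boils down to a couple of .vmx entries (a sketch - portgroup/MAC settings omitted, and the failover order itself is configured inside DSM):

ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000e"
ethernet1.present = "TRUE"
ethernet1.virtualDev = "vmxnet3"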


11 hours ago, MooseMan123 said:

I just got my virtual NAS set up in ESXi 6.7... gave it an E1000E NIC plus a vmxnet3 in active/failover... I get around 400 MB/s read, 330 MB/s write. So this is either fixed, or something is wrong with your setups.

 

Hi man,

 

I moved my lab to FreeNAS and Linux+ZFS, but I am still interested in XPEnology.

Can you please share your vmdk?

 

regards,

Dr.Mythnick


20 hours ago, MooseMan123 said:

I just got my virtual NAS set up in ESXi 6.7... gave it an E1000E NIC plus a vmxnet3 in active/failover... I get around 400 MB/s read, 330 MB/s write. So this is either fixed, or something is wrong with your setups.

400 MB/s read over a single E1000E NIC? That's DISK performance (or at least internal ESXi transfer speed) - and this thread is primarily about NETWORK performance!!

The OP also had separate volumes on single disks, whereas you have a RAID volume across 3 disks - which is obviously quicker. Not comparing like with like!

VMware did fix an AHCI bug since the original post - so that, at least, resolves one issue.

 

 

 


It's disk performance in the sense that your disks ARE the bottleneck - in your example.

You did read that the OP was using one disk per volume, yeah?

 

I think we can agree that the issues could lie in multiple areas: the disks, the hardware (RAID controller - there are known issues with a particular IBM adapter, for example), the network configuration (not just internally in ESXi - for all of the previous posts) and/or the ESXi patch level.


  • 4 months later...

That's not true, guys.

If your DSM is on ESXi and the Windows VM is on the same ESXi box, traffic goes through the internal vSwitch, so it is not limited by the speed of the physical NIC.

So if you truly want to test network speed, the DSM has to be on one ESXi box and the client machine on another.

Thanks

