
1.03b and DSM 6.2.1 - 10GbE only getting 1Gb speed


SYS64738


First - a **HUGE** thanks to everyone here.  I've learned so much over the last few years by reading the posts and wisdom you've all shared.  I hope you won't mind a plea for help: I'm going nuts trying to figure out why I can't get transfer speeds beyond 1Gb out of my 10GbE cards.

 

Here's my setup -

XPEnology box 1: (my old/original unit)

  •     Dell R510, 12-bay chassis. 2x L5640 CPU (6 cores each), 64GB RAM. ESXi 6.7.  (12x 2TB Dell Enterprise SAS drives, 6Gb/s)
  •     Passing through a PERC flashed to LSI IT mode
  •     Passing through an Intel X520-DA2 (dual-port 10GbE SFP+)
  •     XPEnology VM has 8 cores, 32GB RAM, 1.02b and DSM 5.2-5967 Update 8.  NIC1 is 1Gb (VMXNET3), NIC3 is 10GbE from passthrough.

XPEnology box 2: (my upgrade unit)

  •     Dell R510, 12-bay chassis. 2x L5640 CPU (6 cores each), 64GB RAM. ESXi 6.7.  (12x 6TB Dell Enterprise SAS drives, 6Gb/s)
  •     Passing through a PERC flashed to LSI IT mode
  •     Passing through an Intel X520-DA2 (dual-port 10GbE SFP+)
  •     XPEnology VM has 8 cores, 32GB RAM, 1.03b and DSM 6.2.1-23824 Update 4.  NIC1 is 1Gb (E1000e), NIC3 is 10GbE from passthrough.

 

- 10GbE network is ONLY for NAS-to-NAS communication for the purposes of backup/sync.

- 10GbE cards are directly connected via SFP+ cables, no switch in place. 

- On each VM, LAN1 (1Gb) is IP 192.168.1.xxx.  Public/user/Share subnet for my network.

- On each VM, LAN3 (10GbE) is IP 10.10.10.xxx. Private subnet for storage syncs between the 2 XPEnology devices. Jumbo Frames (9000) enabled on each device.  10GbE full duplex confirmed.

- Each VM uses NIC1 (1Gb) for general/user/Share access.  This works perfectly and I get enough speed to saturate the link.  Happy with this!


The issue is copying data between the units over 10GbE.  No matter what I do, I can't get above 1Gb speed.  I specifically set the Sync target to the 10.10.10.xxx address, and during the sync I can watch the performance widget to verify that traffic is flowing over the proper 10GbE NIC (LAN3).  But it's just S-L-O-W.


MY TESTING:

Each VM can ping the other using their 10GbE 10.10.10.xxx link with 0 dropped frames.
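
To confirm jumbo frames actually survive end to end, a don't-fragment ping at near-MTU size should also succeed (a sketch, assuming DSM ships an iputils-style ping that supports -M; 10.10.10.20 stands in for the peer's address):

            # 8972-byte payload + 28 bytes of IP/ICMP headers = a 9000-byte frame; -M do forbids fragmentation
            ping -M do -s 8972 -c 4 10.10.10.20
            # If this fails while a normal ping works, MTU 9000 is not active on the whole path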

 

dd testing gives respectable numbers for disk performance, so the disks shouldn't be the bottleneck:

            root@SYNOLOGY1:/volume1# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
            51200+0 records in
            51200+0 records out
            107374182400 bytes (107 GB) copied, 94.2371 s, 1.1 GB/s


            root@SYNOLOGY1:/volume1# dd if=tmp.dat of=/dev/null bs=2048k count=50k
            51200+0 records in
            51200+0 records out
            107374182400 bytes (107 GB) copied, 166.321 s, 646 MB/s
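
One caveat with these figures: dd from /dev/zero can be inflated by the Linux page cache. A cache-free variant (a sketch, assuming DSM's dd accepts the usual GNU-style conv=/iflag= options) would look like:

            # Write: don't report success until the data is physically flushed to disk
            dd if=/dev/zero of=tmp.dat bs=2048k count=50k conv=fdatasync
            # Read: drop caches first so the file really comes off the disks
            sync; echo 3 > /proc/sys/vm/drop_caches
            dd if=tmp.dat of=/dev/null bs=2048k iflag=direct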

 

 

NETIO gives expected results when testing the 1Gb and 10GbE interfaces:

 

            NETIO - Network Throughput Benchmark, Version 1.30
            (C) 1997-2008 Kai Uwe Rommel

            1Gb TCP connection established.
            Packet size  1k bytes:  110.98 MByte/s Tx,  98.96 MByte/s Rx.
            Packet size  2k bytes:  110.92 MByte/s Tx,  96.81 MByte/s Rx.
            Packet size  4k bytes:  109.58 MByte/s Tx,  107.45 MByte/s Rx.
            Packet size  8k bytes:  109.39 MByte/s Tx,  109.76 MByte/s Rx.
            Packet size 16k bytes:  110.02 MByte/s Tx,  110.44 MByte/s Rx.
            Packet size 32k bytes:  108.88 MByte/s Tx,  110.71 MByte/s Rx.

 

            10GbE TCP connection established.
            Packet size  1k bytes:  465.73 MByte/s Tx,  324.62 MByte/s Rx.
            Packet size  2k bytes:  463.38 MByte/s Tx,  384.20 MByte/s Rx.
            Packet size  4k bytes:  478.48 MByte/s Tx,  495.68 MByte/s Rx.
            Packet size  8k bytes:  487.35 MByte/s Tx,  504.69 MByte/s Rx.
            Packet size 16k bytes:  496.55 MByte/s Tx,  521.40 MByte/s Rx.
            Packet size 32k bytes:  504.88 MByte/s Tx,  511.07 MByte/s Rx.
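
Note that ~500 MByte/s is roughly 4 Gbit/s, so NETIO already shows the raw TCP path running well above 1Gb. A quick sanity check of the negotiated link speed (assuming ethtool is present on this DSM build and eth2 is the passthrough X520 port; adjust the name as needed):

            # Should report Speed: 10000Mb/s and Duplex: Full on the 10GbE port
            ethtool eth2 | grep -E 'Speed|Duplex'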

 

Test Sync job is basic:  20 large files totaling 500GB. Manually activated. No compression. No SSH.  Not block-level. No file indexing.

Disk Utilization (Resource Monitor) during the test sync never exceeds 25% read or write, with the copy/sync topping out at 150 MB/s, which is what I get when performing the same test across the 1Gb NICs.
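
Another check worth running from the shell (a sketch; 10.10.10.20 stands in for the sync target's address) is confirming the kernel actually routes the target out the 10GbE interface rather than falling back to the 1Gb one:

            # Should answer with "dev eth2" (or whichever interface carries the 10.10.10.x subnet)
            ip route get 10.10.10.20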

 

 

I'm stumped.  All feedback/help is appreciated!!


  • 2 weeks later...

I know this is a silly question, but did you check your SFP+ cable to make sure it's an Intel-coded 10Gb module? If I remember right, the Intel 500- and 700-series cards can have issues without "Intel" cables. If the cable checks out and you're still seeing this, it sounds like a driver or configuration issue. Check your SMB settings under File Services and see what the minimum and maximum protocol versions are set to; I use SMB1 for the minimum and SMB3 for the max. Go through your other File Services settings too and disable anything you don't need. Check all your VM switches, port groups, VMkernel adapters, etc., and make sure everything is set to jumbo frames (MTU 9000). Verify matching drivers/firmware and updates as well, and check the BIOS to make sure the cards are registering as 10Gb. Post some screenshots too if you can; that will help a lot of us narrow it down for you.
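
For the ESXi-side checks above, something along these lines from the ESXi shell would confirm MTU and link speed (a sketch; vSwitch and interface names will differ per setup), plus a quick look at the effective SMB config on DSM (assuming smb.conf lives at the usual Samba path on that build):

            # On the ESXi host: per-vSwitch MTU (the storage vSwitch should show 9000)
            esxcli network vswitch standard list
            # Physical NIC link state/speed (the X520 ports should show 10000 Mbps)
            esxcli network nic list

            # On DSM: effective SMB protocol range
            grep -i 'protocol' /etc/samba/smb.conf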

 

If you haven't yet, also upgrade your ESXi to Dell's customized 6.7U1 image: https://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverid=53n67
