First - a **HUGE** thanks to everyone here. I've learned so much the last few years by reading the posts and wisdom you've all shared. I'm hoping you won't mind a plea for help - I'm going nuts trying to figure out why I can't get transfer speeds beyond 1Gb/s out of my 10GbE cards.
Here's my setup -
XPEnology box 1: (my old/original unit)
Dell R510, 12 bay chassis. 2x L5640 CPU (6 cores each), 64GB RAM. ESXi 6.7. (12x2TB Dell Enterprise SAS drives - 6Gb/s)
Passing through PERC flashed to LSI IT mode
Passing through Intel X520-DA2 (dual port 10GbE SFP+)
XPEnology VM has 8 cores, 32GB RAM, loader 1.02b and DSM 5.2-5967 Update 8. NIC1 is 1Gb (VMXNET3), NIC3 is 10GbE from passthrough.
XPEnology box 2: (my upgrade unit)
Dell R510, 12 bay chassis. 2x L5640 CPU (6 cores each), 64GB RAM. ESXi 6.7. (12x6TB Dell Enterprise SAS drives - 6Gb/s)
Passing through PERC flashed to LSI IT mode
Passing through Intel X520-DA2 (dual port 10GbE SFP+)
XPEnology VM has 8 cores, 32GB RAM, loader 1.03b and DSM 6.2.1-23824 Update 4. NIC1 is 1Gb (E1000e), NIC3 is 10GbE from passthrough.
- 10GbE network is ONLY for NAS-to-NAS communication for the purposes of backup/sync.
- 10GbE cards are directly connected via SFP+ DAC cables, no switch in between.
- On each VM, LAN1 (1Gb) is IP 192.168.1.xxx. Public/user/Share subnet for my network.
- On each VM, LAN3 (10GbE) is IP 10.10.10.xxx. Private subnet for storage syncs between the 2 XPEnology devices. Jumbo Frames (9000) enabled on each device. 10GbE full duplex confirmed.
- Each VM uses NIC1 (1Gb) for general/user/Share access. This works perfectly and I get enough speed to saturate the link. Happy with this!
The issue is copying data between the units via 10GbE. No matter what I do, I can't get above 1Gb/s. I specifically set the Sync target to the 10.10.10.xxx address, and during the sync I can watch the performance widget to verify that traffic is flowing over the proper 10GbE NIC (LAN3). But it's just S-L-O-W.
MY TESTING:
Each VM can ping the other using their 10GbE 10.10.10.xxx link with 0 dropped frames.
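A plain ping won't catch a jumbo-frame mismatch, though - a wrong MTU anywhere on the path silently forces fragmentation or drops on large packets only. A quick check (assuming an iputils-style ping that supports `-M do`; the peer address below is a placeholder for your other box's 10GbE IP) is a do-not-fragment ping sized to the 9000-byte MTU:

```shell
# Payload = MTU 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
# -M do sets the Don't Fragment bit so an MTU mismatch fails loudly.
ping -M do -s 8972 -c 4 10.10.10.2   # replace with your peer's 10GbE IP

# For comparison, a standard-MTU-sized payload (1500 - 28 = 1472):
ping -M do -s 1472 -c 4 10.10.10.2
```

If the 8972-byte ping fails with "Message too long" while the 1472-byte one succeeds, one side is not actually running MTU 9000.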
dd testing gives respectable numbers for disk performance, so that shouldn't be the bottleneck:
```
root@SYNOLOGY1:/volume1# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 94.2371 s, 1.1 GB/s
root@SYNOLOGY1:/volume1# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 166.321 s, 646 MB/s
```
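One caveat on those dd numbers: without a sync flag, the write test partly measures the page cache rather than the disks, and the read can be partially cached too. A sketch of a less optimistic variant (path and sizes are illustrative):

```shell
# Write test that flushes data to disk before reporting the rate,
# so the page cache can't inflate the result:
dd if=/dev/zero of=/volume1/tmp.dat bs=2048k count=50k conv=fdatasync

# Drop the page cache first so the read test actually hits the disks:
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/volume1/tmp.dat of=/dev/null bs=2048k count=50k
```

Even if the honest numbers come in lower, anything above ~600 MB/s still puts the disks well clear of the ~150 MB/s you're seeing on the sync.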
NETIO gives expected results when testing the 1Gb and 10GbE interfaces:
```
NETIO - Network Throughput Benchmark, Version 1.30
(C) 1997-2008 Kai Uwe Rommel

1Gb TCP connection established.
Packet size  1k bytes: 110.98 MByte/s Tx,  98.96 MByte/s Rx.
Packet size  2k bytes: 110.92 MByte/s Tx,  96.81 MByte/s Rx.
Packet size  4k bytes: 109.58 MByte/s Tx, 107.45 MByte/s Rx.
Packet size  8k bytes: 109.39 MByte/s Tx, 109.76 MByte/s Rx.
Packet size 16k bytes: 110.02 MByte/s Tx, 110.44 MByte/s Rx.
Packet size 32k bytes: 108.88 MByte/s Tx, 110.71 MByte/s Rx.

10GbE TCP connection established.
Packet size  1k bytes: 465.73 MByte/s Tx, 324.62 MByte/s Rx.
Packet size  2k bytes: 463.38 MByte/s Tx, 384.20 MByte/s Rx.
Packet size  4k bytes: 478.48 MByte/s Tx, 495.68 MByte/s Rx.
Packet size  8k bytes: 487.35 MByte/s Tx, 504.69 MByte/s Rx.
Packet size 16k bytes: 496.55 MByte/s Tx, 521.40 MByte/s Rx.
Packet size 32k bytes: 504.88 MByte/s Tx, 511.07 MByte/s Rx.
```
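Worth noting: NETIO runs a single TCP stream, and ~500 MByte/s is well short of 10GbE line rate (10,000 Mb/s / 8 = 1250 MByte/s), so it may be worth checking whether one stream is the ceiling. Assuming iperf3 is available (via a community package on DSM, or run from any Linux box on the 10GbE segment), a single-stream vs. parallel-stream comparison looks like:

```shell
# On SYNOLOGY2 (the receiver):
iperf3 -s

# On SYNOLOGY1:
iperf3 -c 10.10.10.2 -t 30         # one TCP stream
iperf3 -c 10.10.10.2 -t 30 -P 4    # four parallel streams
# Line rate for 10GbE is 10000 Mb/s / 8 = 1250 MB/s. If -P 4 gets
# close to that while a single stream doesn't, a single-connection
# sync job will hit the same per-stream ceiling (TCP window size,
# single-queue interrupt handling on one core, etc.).
```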
Test Sync job is basic: 20 large files totaling 500GB. Manually activated. No compression. No SSH. Not block-level. No file indexing.
Disk Utilization (Resource Monitor) during the test sync never exceeds 25% read or write, and the copy/sync tops out around 150 MB/s - essentially the same as when I run the identical test across the 1Gb NICs.
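Since the Resource Monitor widget is fairly coarse, a kernel-level double-check of which interface the sync actually uses, and what it really moves, can rule out routing surprises. The interface name `eth2` and peer address below are assumptions - verify yours first with `ip addr`:

```shell
# Which interface does the kernel route the sync target through?
ip route get 10.10.10.2          # should report "dev eth2" (the 10GbE NIC)

# Measure real throughput from the raw byte counters during the sync:
IFACE=eth2                       # assumed 10GbE interface; check with: ip addr
T0=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
sleep 10
T1=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
echo "TX rate: $(( (T1 - T0) / 10 / 1024 / 1024 )) MB/s"
```

If `ip route get` shows the 1Gb interface, traffic is being routed off the private subnet despite the sync target address, and that alone would explain the ~150 MB/s ceiling.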
I'm stumped. All feedback/help is appreciated!!