XPEnology Community

Advice on 10gb network card


neuro1


Hi, I was wondering if you guys know whether this 10Gb card would work?

 

The card is a Broadcom NetXtreme II BCM957710A1020G, and it includes a GBIC module.

 

Will this work if I connect my XPEnology box on one end and a PC on the other end using this cable:

 

 

http://www.cablestogo.com/product/33039 ... ble-orange

 

The reason I can't use the SFP+ cable is that the distance is about 40 feet from the server.

 

Thanks


Due to exactly the same annoyances with the lack of 10 Gbit/s networking standards, I still haven't taken the plunge into the 10 Gbit/s craze.

There are too many expensive factors to consider. Unlike Ethernet, which has been consistent for decades, 10 Gbit/s networking is still in its infancy. It feels just like the early '80s, when computers didn't have a real standard and every brand was trying out its own thing, hoping to make it the de facto standard.

 

Sort of like what happened with Blu-ray vs. HD DVD: Blu-ray won, and now it's easy for manufacturers and buyers to get Blu-ray gear without killing brain cells trying to figure out which one is better and which to pick.

 

Hopefully in the next 15 years the industry will finally stick to one specific standard for 10 Gbit/s networks instead of the current five, of which two are the most common. The dust still hasn't settled; let's see which hardware standard stays on top.

 

In the meantime, I'm having fun with quad-port Gigabit Ethernet. It satisfies my need for speed while keeping things simple and economical.

It's cheaper to buy a big roll of CAT6 and make your own cables; you can cut it to any length to fit your needs.

Team all four Gigabit ports to work as one big 4 Gbit/s trunk. Of course, that means you will need more switches, which are still very economical compared to 10 Gbit/s hardware, fiber cables, etc.

 

Currently I have my main gaming rig on 4 Gbit/s to the switch, XPEnology also on 4 Gbit/s to the switch, and the VMware server on 2x 4 Gbit/s: one bond for the regular network, the other for the data-transfer network (iSCSI and such) between XPEnology and the other servers. And yes, the other servers and workstations are on 4 Gbit/s too.

 

Windows and Linux work remarkably well with port teaming. Incidentally, I learned its true magic after seeing a Synology DS1815+ running all four ports at full steam to and from a Windows Server 2012 box; then I tried it with Linux and my other machines, and now I'm in love with four-port teaming.
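
For anyone who wants to see the load actually spreading across the ports on the Linux side, here is a quick check I find handy (just a sketch, assuming a bond over eth1..eth4 like the config I post further down; adjust the interface names to yours):

watch -n1 'grep -E "eth[1-4]" /proc/net/dev'    # per-NIC byte/packet counters, refreshed every second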

 

The only costly part is the managed Gigabit switches needed to control the traffic. Or you can build your own multi-quad-port pfSense firewall/switch, which can do exactly the same thing as a managed switch, but cheaper and as powerful as a full-blown Cisco switch :wink::grin:



What load-balancing mode are you using? Also, what protocol were you using between the Syno and Windows Server to utilize all 4 ports? AFAIK, there is currently no support for multithreaded SMB in Syno. Did you have to do anything special for iSCSI? Thanks.


Lol! I want faster speeds for transferring large video files onto the XPEnology server.

 

Does anyone else know if the mentioned NIC would work with XPEnology, along with that cable, to connect the server to the PC directly without needing a 10Gb switch?

 

 

Thanks


Teaming won't give you more speed between two devices. That's not how it works.

 

That's the common misunderstanding.

 

Yes, what you say is true; it does not mean 1+1+1+1 = 4 for a single connection.

It works as four 1 Gbit/s links running in parallel.

You get the full speed of each of the four links.

It's a clean and simple implementation:

when one cup is full, the next cup takes the spill, and when the second cup is full, it spills over into the third cup, and so on and so forth.

 

This works the same way in Win10 and Server 2012, and also in Linux/BSD and Synology.


What load-balancing mode are you using? Also, what protocol were you using between the Syno and Windows Server to utilize all 4 ports? AFAIK, there is currently no support for multithreaded SMB in Syno. Did you have to do anything special for iSCSI? Thanks.

 

In Linux I use this /etc/network/interfaces config (Debian/Ubuntu style):

auto lo
iface lo inet loopback

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto eth3
iface eth3 inet manual
    bond-master bond0

auto eth4
iface eth4 inet manual
    bond-master bond0

# bond0 is the bonding NIC and can be used like any other normal NIC.
# bond0 is configured using static network information.
auto bond0
iface bond0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    bond-miimon 100          # check link state every 100 ms
    bond-downdelay 200       # wait 200 ms before disabling a failed slave
    bond-mode balance-alb    # adaptive load balancing, no switch support needed
    bond-slaves none         # slaves are declared via bond-master above

# eth0 stays outside the bond, on DHCP
auto eth0
iface eth0 inet dhcp
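
For completeness, these are roughly the extra steps to make that bond come up on a stock Debian/Ubuntu box with ifupdown. This is a sketch from memory, so double-check the package names for your release:

sudo apt-get install ifenslave              # userspace bonding helper for ifupdown
echo bonding | sudo tee -a /etc/modules     # make sure the bonding module loads at boot
sudo ifup bond0                             # bring the bond up (or just reboot)
cat /proc/net/bonding/bond0                 # verify balance-alb is active and all slaves are up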

 

In Synology, it looks like this:

[screenshots: Synology DSM bond configuration]

 

In Win10 it's a simple network bridge; it does the same thing:

[screenshot: Windows 10 network bridge]

 

In Server 2012 you get more options; like in Linux, you can choose the teaming type:

[screenshot: Windows Server 2012 NIC teaming options]

 

For more details about the bonding options, which apply to most OSes, read this:

https://help.ubuntu.com/community/UbuntuBonding

 

BTW, if you are using a DS214play or any non-professional Synology series, you are stuck with a single connection:

[screenshot: DSM showing only a single connection available]


Teaming won't give you more speed between two devices. That's not how it works.

 

balance-rr mode wants to disagree with you. Also, technically, teaming does give you more speed in other modes as well. The problem is that not many protocols are multithreaded. balance-rr mode just splits packets between network cards regardless of whether the protocol is single- or multithreaded.
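
For reference, a balance-rr variant of the bond0 stanza posted earlier would look roughly like this (just a sketch; mode 0 generally needs either a switch configured with a static link-aggregation group or a direct back-to-back connection, and packet reordering can hurt single-stream TCP):

auto bond0
iface bond0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode balance-rr    # mode 0: round-robin packets across the slaves
    bond-miimon 100
    bond-slaves none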


Hi,

 

Any particular reason you want to run 10 Gbps NICs?

 

LOL! Are you starting a data center or something in your house? :razz:

 

That is indeed a very good question.

 

For most people streaming a 1080p movie from a Synology or any home server to a projector, media center, or home theater, a normal 1 Gbit/s network is more than enough.

 

Even assuming he wants to stream 4K movies from his Synology box to a home theater PC, 1 Gbit/s is still more than sufficient; otherwise YouTube and Netflix 4K content wouldn't make it through slow ISP connections, which are still mostly around 25 to 100 Mbit/s, unless you are lucky enough to live in an area with 1 Gbit/s ISP connections, like parts of Korea, Germany, the US, Japan, and a few other places.


Ok, but what protocols are you using to take advantage of all 4 pipes?

 

All the details are back on the previous page: viewtopic.php?f=2&t=15341#p63644

 


 

I use an Intel I340-T4 for Synology, or any card that uses the Intel E1000-series driver in Linux / Synology.

http://ark.intel.com/products/49186/Int ... er-I340-T4

 

Windows and VMware can run pretty much all the other quad-port variants.

 

A 5 Gbit/s bond: one port from the onboard Marvell NIC plus the four from the I340-T4.

 

[screenshot: the 5 Gbit/s bond]


My sentiments exactly :smile:

 

I stream 1080p movies (Blu-ray quality) with bit rates over 30 Mbps with no issues. I'm running an all-Gigabit network.

No hiccups on my XPEnology; it just works like a horse.

 

Honestly, yes, there's nothing wrong with running a 10Gig network if you have the equipment and cash for it, but it's huge overkill. :mrgreen:

Think about it: you're basically using a huge shotgun to kill a small mouse. :lol:

 

Also, there's a lot of money to invest: the wiring needs to be CAT6a and up, including all the connectors on the patch panels. If you're running CAT5e, you need to replace all of it. If you go the fiber-optic route, you need all-fiber cabling (big money there alone), and then a 10 Gigabit switch on top of that, so you're spending even more. Fibre Channel route? Think again.

 

Are you sure you want to start a data center?


Ok, but what protocols are you using to take advantage of all 4 pipes?

 

All the details are back on the previous page: viewtopic.php?f=2&t=15341#p63644

 

Yeah, I read your previous posts. As far as I know, iSCSI cannot take advantage of load balancing; you need to set up MPIO to use multiple links to increase throughput. The current SMB protocol used by Synology also cannot take advantage of load balancing (except for mode 0) because it doesn't support multithreading (Windows Server does, though). So, the way I see it, you are still getting 1 Gbps throughput. Or perhaps you are using something else? Are you running tasks in parallel to max out the links?
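
(For anyone who does go the multiple-links route for iSCSI, here is a rough Linux-initiator sketch with open-iscsi and multipath-tools, assuming the target exposes portals on two subnets such as 192.168.1.2 and 192.168.2.2; on Windows you would enable the MPIO feature instead.)

iscsiadm -m discovery -t sendtargets -p 192.168.1.2   # discover the target through the first portal
iscsiadm -m discovery -t sendtargets -p 192.168.2.2   # and through the second portal
iscsiadm -m node --login                              # log in to the target over both portals
multipath -ll                                         # confirm both paths show up on the mapped LUN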

 

Honestly, yes, there's nothing wrong with running a 10Gig network if you have the equipment and cash for it, but it's huge overkill. :mrgreen:

Think about it: you're basically using a huge shotgun to kill a small mouse. :lol:

 

Not really overkill anymore, with SSD transfer speeds and all. I desperately need it for my network, as my SSD RAID 0 can push 1600 MB/s read / 500 MB/s write, and I am bottlenecked by the network.


Yeah, I read your previous posts. As far as I know, iSCSI cannot take advantage of load balancing; you need to set up MPIO to use multiple links to increase throughput. The current SMB protocol used by Synology also cannot take advantage of load balancing (except for mode 0) because it doesn't support multithreading (Windows Server does, though). So, the way I see it, you are still getting 1 Gbps throughput. Or perhaps you are using something else? Are you running tasks in parallel to max out the links?

 

Yes, running tasks in parallel; that's how it works by default.

 

It's like a cascade: if the first NIC's bandwidth is filled, the overflow spills over and the second NIC handles the extra traffic; if the second NIC's bandwidth overflows, the third NIC joins in, and so on with any additional NICs in the system, until all of them are in use.

 

It is still 1 Gbit/s per NIC, but you can use all four of them concurrently like a pseudo 4 Gbit/s link; that's why Windows, Synology, and Linux report it as 4 Gbit/s. All four NICs work like one big trunk.

 

It's load balancing at its finest.

 

If I transfer 100 GB of files from machine A to machine B, it will hog all the bandwidth of nic1.

If I then start another transfer of 60 GB of files from machine A to machine C, instead of using nic1 it will use nic2, and both nic1 and nic2 will be running at full capacity.

If there are more transfers, the same repeats for nic3 and nic4.

 

So yes, a single transfer doesn't get the full 4 Gbit/s, but you don't have to wait for queued transfers to finish one before the other, and you avoid the usual situation on a single 1 Gbit/s connection where all the concurrent transfers split the bandwidth and slow to nearly useless speeds.

 

It's not perfect, but it's good enough for my needs and it's cost-efficient.
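
If you want to see it for yourself, a quick test with iperf3 (a sketch only; the addresses are just examples, machine A has the bond, B and C each run an iperf3 server):

# on machine B and machine C
iperf3 -s

# on machine A, start two transfers at the same time
iperf3 -c 192.168.1.3 -t 60 &    # this stream fills one NIC of the bond
iperf3 -c 192.168.1.4 -t 60 &    # this one gets balanced onto another NIC
wait

With balance-alb the balancing is done per destination, so two different targets typically land on two different NICs, which is exactly the cascade behaviour described above.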


I use an Intel X520-DA2 10 Gig Converged Network Adapter.

 

I mainly use my XPEnology box as a SAN for my ESXi servers.

 

I connected each server to the XPEnology box using DAC cables.

 

I peak over 10G at times when accessing the storage over iSCSI.

 

[screenshot: ESXi network performance graph]

 

10GbE isn't cheap if you're willing to jump in; I think I spent about $400 between 3 network cards and 3 DAC cables.


Basically, if you want to load balance across multiple clients, a multi-1GbE link aggregation/bond is a perfectly acceptable solution.

However, if your goal is to achieve more than 1 Gb to a single client, I'm afraid the only practical and software-agnostic solution is a faster interface such as 10GbE.


  • 2 weeks later...

Hi, I got the Intel X520-DA2 and was wondering how to set things up so my XPEnology can utilize it.

 

XPEnology is running as a VM on the ESXi server, and I have direct-connected it to a PC with another Intel X520-DA2.

 

What do I need to set up in my XPEnology VM options and inside the Synology interface?

 

 

Thanks



neuro1,

 

There are two ways you can expose the Intel X520-DA2 to your XPE VM in ESXi.

 

1) IOMMU (VT-d). Basically this is PCIe passthrough, if your CPU and motherboard support it. It allows you to assign your X520 for exclusive use by the XPE VM. XPE has built-in drivers for the X520, so it will see it as a physical card. The downside to this approach is that only your XPE VM has access to 10Gb outside the host.

 

2) Assign your X520 to a vSwitch. Then set one of your XPE VM's virtual network adapters to the VMXNET3 adapter type and assign it to your 10Gb vSwitch. This is the method I employ, so multiple VMs can all talk out over the 10Gb adapter.
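
For option 2, the vSwitch side can also be done from the ESXi shell, roughly like this (a sketch only; the vSwitch, port group, and vmnic names here are examples, yours will differ):

esxcli network vswitch standard add --vswitch-name=vSwitch10G                                        # new vSwitch for the 10Gb card
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch10G            # attach one X520 port as its uplink
esxcli network vswitch standard portgroup add --portgroup-name=10G-Net --vswitch-name=vSwitch10G     # port group for the VMs

Then edit the XPE VM, set its network adapter type to VMXNET3, and attach it to the 10G-Net port group.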

 

 

