XPEnology Community

Transfer Speeds >1Gb/s


cstout

Question

Hello out there! My latest build, an 8x1TB setup in a Dell Optiplex 780, is proving to be my favorite so far. I just did a speed comparison between my real Synology 412+ and this Dell, and the Dell is twice as fast. In fact, it's so fast that I capped out at the maximum throughput of a single gigabit Ethernet connection.

 

So this got me thinking...

 

Since the DS3615xs supports "up to 2,214.77 MB/s throughput in reading," I'm curious: has anyone achieved anything near this speed in a home setup? If so, I'd love to hear what kind of network hardware you're using to make it possible. I'm sure NIC teaming is involved, since a single gig connection can only go so fast...but an advertised speed of 2,214 MB/s...is that some kind of 10gig setup? I can't imagine any one client getting speeds that high, but that's some incredible throughput.


Recommended Posts


"DS3615xs delivers up to 2,214.77 MB/s throughput in reading and 231,295 read IOPS."

 

Per Synology's footnote:

"Tested with dual 10GbE connection with Link Aggregation. The performance figures vary on different environments."

 

Might need some $$$ hardware to test that out.



10GbE can be had fairly cheap if you go with an SFP+ connection instead of RJ45.

 

I have one of these in my XPEnology server and it's supported by fairly recent XPEnoboot images.

http://www.ebay.com/itm/671798-001-HP-1 ... SwDuJW0Pd9

 

If your server and client are within 5 meters of each other, you can go with an SFP+ Twinax cable like the one below to connect them directly without a switch.

http://www.ebay.com/itm/121884479854?_t ... EBIDX%3AIT

 

Between server and client, I can get a raw network speed of 10Gbit. For file transfers, I've been able to hit around 500MB/sec on my 12x2TB RAID6 array. I suspect I could reach higher if I had something faster on my client than a single SATA3 SSD.
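
For anyone who wants to check raw network speed the same way, something like iperf3 works (a minimal sketch, assuming it's installed on both ends; 10.0.0.2 is a stand-in for the server's address):

# on the XPE server
iperf3 -s

# on the client, run a 30-second test against the server
iperf3 -c 10.0.0.2 -t 30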



I'm fascinated to know what you guys are doing that needs these kinds of transfer speeds? :smile:

I'd have thought that for 99% of the time the gig connection would be fine.

Aside from that, I'd imagine that number of HDDs, software RAID performance and controller performance would become a limit to data throughput well before the network, even if using the fastest SSDs.



Yeah, you can buy a Dell RT8N1 0RT8N1 MNPA19-XTR Mellanox 10GB single-port NIC for about $25 on Amazon, and there are some other low-cost options from HP as well.

 

I didn't know that you could connect them together without a 10G switch, though. I thought I would have to pay at least ~$800 USD for such a switch, so I passed on the idea. If I could connect them directly, I would buy at least two 10G NICs and the necessary cable. For connections between servers or NAS devices, it would be useful for moving data between them when a large volume needs to be rebuilt.

 

So, for example, let's say three HDDs fail in an SHR volume with two-disk redundancy. After the failed HDDs have been replaced, the data is still lost, so getting that data back from another server or NAS as fast as possible is ideal. Even if it's only a little faster than gigabit Ethernet (due to HDD/controller limitations, etc.), it's still worth it to me.

 

For what it's worth, over gigabit Ethernet I've been able to hit the reasonable real-world limit. I've seen it spike to ~113MB/s, but I typically see something more like ~45MB/s.
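
For context, that ~113MB/s spike is right at the wire's ceiling: gigabit is 1,000 Mbit/s ÷ 8 = 125 MB/s on paper, and roughly 113MB/s once Ethernet and TCP/IP overhead are subtracted. The typical ~45MB/s is presumably the disks or the protocol, not the network.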

 

It would be great for a high-availability setup too, if you could get that to work.



I can understand it for data recovery in the situation given, but if my data was that 'critical' I'd probably run a second 'slave' NAS, perhaps lower spec, with real-time folder sync in DSM from the 'master' to the 'slave'. In the event of losing 3 drives, all you do is remap drives to the slave. That's pretty 'instant' recovery :smile: with no need for expensive and complicated network kit :smile:



Out of curiosity, I wanted to start a quick discussion on whether people are achieving speeds faster than 1Gb/s at home with XPE. I'm not necessarily expressing a need for it, just curious. Personally, I'm thrilled to find my latest build is twice as fast as my real Synology box, and I plan on taking advantage of that faster speed immediately. Big thanks to Kanedo and Octavean for pointing out some practical options! I've been getting ~45MB/s on my real Synology and a steady, reproducible ~110MB/s on the Dell. It appears (to me) that the steady 110MB/s is a limit of my network and not of the Dell or its disks, which is what started my curiosity: "I wonder if someone is getting faster speeds than 110 in a home environment." Thanks, all, for contributing to the discussion!


I'm fascinated to know what you guys are doing that needs these kinds of transfer speeds? :smile:

I'd have thought that for 99% of the time the gig connection would be fine.

Aside from that, I'd imagine that number of HDDs, software RAID performance and controller performance would become a limit to data throughput well before the network, even if using the fastest SSDs.

 

When you're just streaming or accessing smaller files, gigabit is indeed enough. However, when you need access to tens or hundreds of gigabytes of data, having 10Gb helps a lot. One example is copying huge video files.

 

You are correct that drive performance could be a limiting factor for your transfer speed. However, almost any HDD or SSD sold today can move data faster than gigabit, so it's not hard at all to saturate a 1Gbit connection. To saturate 10Gbit, you do need quite a few performant drives in a RAID0, 5, or 6 configuration.
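
As rough math: 10 Gbit/s ÷ 8 ≈ 1,250 MB/s, so assuming ~150 MB/s of sequential throughput per modern HDD, you'd need the striped equivalent of 8 or 9 drives (ignoring parity overhead) before the array outruns the pipe.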


Out of curiosity, I wanted to start a quick discussion on whether people are achieving speeds faster than 1Gb/s at home with XPE. I'm not necessarily expressing a need for it, just curious. Personally, I'm thrilled to find my latest build is twice as fast as my real Synology box, and I plan on taking advantage of that faster speed immediately. Big thanks to Kanedo and Octavean for pointing out some practical options! I've been getting ~45MB/s on my real Synology and a steady, reproducible ~110MB/s on the Dell. It appears (to me) that the steady 110MB/s is a limit of my network and not of the Dell or its disks, which is what started my curiosity: "I wonder if someone is getting faster speeds than 110 in a home environment." Thanks, all, for contributing to the discussion!

 

 

I'm currently hitting north of 500MB/sec copying from XPE to a Win10 PC. I have a 10GbE card installed in both the XPE server and the Win10 PC, connected directly to each other using an SFP+ Twinax direct-attach cable.

 

You are correct that your 110MB/s is the limit of your gigabit network. Your disk array and CPU can easily handle more than that. The only practical way to break the 110MB/s limit is to upgrade your network gear to 10GbE.
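
If you want to confirm what the link actually negotiated after an upgrade, a quick check on the Linux side looks like this (assuming ethtool is present; eth0 is just a placeholder for your interface name):

ethtool eth0 | grep Speed
# prints "Speed: 1000Mb/s" on gigabit, "Speed: 10000Mb/s" once the 10GbE link is up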



Let me throw another variable into this: what 'measurement' tool is being used? I personally run Paessler PRTG, which monitors all my SNMP-enabled kit, so I can see the traffic on the LAN ports of my 4 XPE and 1 Syno NASes and my Netgear smart switches.

I've noted different max speeds depending on protocol: FTP around 5MB/s, rsync backups around 50MB/s, Samba up to 200MB/s. If I look at Resource Monitor, there is a correlation with these. I guess there are network protocol and internal processing/kernel overheads that give these different values? Also, what about MTU and other things that can be messed with (or messed up :smile: )? It might be interesting to try to 'max out' a 10gig connection and see what components/files/services could do that - any volunteers? :smile:
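
On the MTU point, jumbo frames are the usual knob to try; a sketch, assuming a Linux box, eth0 as a placeholder interface, and 10.0.0.2 as the far end (every device on the path has to agree on the MTU):

ip link set dev eth0 mtu 9000
# verify the path passes jumbo frames unfragmented:
# 8972 = 9000 minus 28 bytes of IP + ICMP headers
ping -M do -s 8972 10.0.0.2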




 

 

I'm currently hitting north of 500MB/sec copying from XPE to a Win10 PC. I have a 10GbE card installed in both the XPE server and the Win10 PC, connected directly to each other using an SFP+ Twinax direct-attach cable.

 

Very cool.

 

So you're using an SFP+ Twinax DAC, which is copper, not fiber?

 

What cable and what cards are you using? I'm sure there's a Windows driver for the NIC, but did DSM / XPEnology have the necessary drivers already? Was there any additional work needed to get it working?

 

I was looking at some cheap cards, but I'm unsure of the connector type needed to get them working in crossover mode. Any details you could provide would be a great help.

 

Thanks in advance.

 

Oct.



Quick question...

 

I need to do some performance testing (short story: I need storage to back up VMs in XenServer), and that is a nightmare because of the NFS speed in XenServer...

 

So how are you testing your performance? And which protocols, SMB or NFS?

 

I have one old N54L and one Gen8, both running XPEnology, and I can use them to run some tests in a lab environment...

 

For the N54L I have the following values:

Avg. read: 110 MB/s

Avg. write: 98.45 MB/s

 

The test I ran used dd from /dev/zero.
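
For anyone wanting to repeat it, the usual shape of that test is something like this (path and sizes are just examples, assuming the usual DSM volume mount at /volume1; conv=fdatasync keeps the write cache from inflating the number):

# write test: 10 GB of zeros to the volume
dd if=/dev/zero of=/volume1/testfile bs=1M count=10000 conv=fdatasync

# read test: drop caches first so it actually reads from disk
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/volume1/testfile of=/dev/null bs=1M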

 

Thanks


10GbE can be had fairly cheap if you go with an SFP+ connection instead of RJ45.

 

I have one of these in my XPEnology server and it's supported by fairly recent XPEnoboot images.

http://www.ebay.com/itm/671798-001-HP-1 ... SwDuJW0Pd9

 

If your server and client are within 5 meters of each other, you can go with an SFP+ Twinax cable like the one below to connect them directly without a switch.

http://www.ebay.com/itm/121884479854?_t ... EBIDX%3AIT

 

Between server and client, I can get a raw network speed of 10Gbit. For file transfers, I've been able to hit around 500MB/sec on my 12x2TB RAID6 array. I suspect I could reach higher if my array weren't so full.

 

 

Hello, Kanedo.

 

Thank you very much for this info!!! I followed your directions and have 10GbE running on my XPEnology build. I used a SolarFlare 10GbE card in my Mac Pro which I purchased for about $55.

 

I am getting approx 300MB/s writes and 800MB/s reads over AFP. My array consists of eight Seagate 8TB archive drives in SHR2.

 

Thanks again for posting this valuable info!! :grin:



Hello, Kanedo.

 

Thank you very much for this info!!! I followed your directions and have 10GbE running on my XPEnology build. I used a SolarFlare 10GbE card in my Mac Pro which I purchased for about $55.

 

I am getting approx 300MB/s writes and 800MB/s reads over AFP. My array consists of eight Seagate 8TB archive drives in SHR2.

 

Thanks again for posting this valuable info!! :grin:

 

Good to know that AFP protocol doesn't limit the bandwidth too much. I have an Intel X520-DA2 on the way for my Hackintosh.


Good to know that AFP protocol doesn't limit the bandwidth too much. I have an Intel X520-DA2 on the way for my Hackintosh.

 

Yes, I am extremely happy with the performance I am seeing. It's truly amazing.

 

Does OS X have native support for the Intel X520-DA2? If not, where do you find the drivers?

 

I had to search on several of the manufacturers' websites before I found that SolarFlare offered OS X drivers.



Does OS X have native support for the Intel X520-DA2? If not, where do you find the drivers?

 

I had to search on several of the manufacturers' websites before I found that SolarFlare offered OS X drivers.

 

No, OS X doesn't have native drivers for Intel 10Gb cards. However, Smalltree sells a rebranded version of Intel cards made specifically for Macs.

https://www.small-tree.com/categories/1 ... net-cards/

 

Smalltree 10GbE drivers

https://www.small-tree.com/support/down ... y?cat_id=6

 

With some hackery, it is possible to spoof the PCI ID of the Intel cards to match those expected by the Smalltree drivers.

http://www.tonymacx86.com/network/15613 ... ivers.html

 

UPDATE: Success!

http://www.tonymacx86.com/network/15613 ... ost1223519



Hello,

 

What if I use this (or something similar) in my XPEnology box and one of these in each of the other PCs, and connect them with this cable: the QSFP port on the XPEnology side, and one SFP+ connector into each PC's SFP+ port. Could I get a 10GbE connection from each of the 4 PCs to the XPEnology box that way? Or are this QSFP card and cable meant for something else?

 

Thanks.

