
10GbE setup - will this work with 6.1?


test4321


  • 2 weeks later...

He doesn't have much time and wants to mod it further to incorporate all three versions of Jun's loader, so it will take a little longer.

But don't worry: if you really need that 10G driver, I will add it to my extra.lzma shortly, and you can go on with Jun's loader for now.


1 minute ago, IG-88 said:

He doesn't have much time and wants to mod it further to incorporate all three versions of Jun's loader, so it will take a little longer.

But don't worry: if you really need that 10G driver, I will add it to my extra.lzma shortly, and you can go on with Jun's loader for now.

 

Sweet! Hopefully my eBay seller comes through and hurries the shipping up.

 

Thanks!


On 1/13/2018 at 2:58 AM, test4321 said:

Was reviewing this thread and saw discussion that the Mellanox ConnectX-2 might not be supported by the Synology driver set. I can confirm that the standard Mellanox driver supports ConnectX-2 single- and dual-port 10GbE on baremetal XPEnology 6.2 with no problem, tested on my own system. However, I switched to a ConnectX-3 because of its PCIe 3.0 and SR-IOV support for ESXi.

 

 

On 1/13/2018 at 2:58 AM, test4321 said:

Hey guys,

 

Now for networking I want to go 10GbE SFP+.

 

I am looking at:

 

2x Mellanox ConnectX-2

https://www.ebay.com/itm/391459428428

 



FYI, for testing purposes only...

I bought two MNPA19-XTR 10GB Mellanox ConnectX-2 PCIe x8 10GbE SFP+ network cards with cables from eBay, pretty cheap ($48.00 including two SFP+ cables), and they worked out of the box. I'm using an older motherboard (MSI G41TM-E43) with only 2GB of RAM on an LGA775 Core 2 Duo, and I'm getting 398.7 MB/s write and 452.4 MB/s read when transferring files from my Mac Pro 3,1. The Mellanox doesn't work with my Mac Pro, so I had to get a SolarFlare SFN5122F dual-port 10GbE PCIe adapter (SF329-9021-R6, $44.00). Setup was straightforward and speeds are very close to my OWC Thunderbay IV in RAID 0.

 

My current setup only has 4 SATA II ports with transfer rates up to 3 Gb/s (4 x 3TB drives), so maybe once I move to a faster motherboard, speeds will increase.

 

Can anyone comment or share what could contribute to a speed bottleneck as the number of drives starts to increase (8 drives, 12 drives, etc.)? My goal is to build a 12-drive system (36TB).
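My rough back-of-envelope so far, assuming something like 120-180 MB/s sequential per 3TB 7200 rpm drive (a typical figure, not measured):

SATA II link:   3 Gb/s  ≈ 300 MB/s per port, well above what one spinning disk delivers
4-drive array:  4 x 120-180 MB/s ≈ 480-720 MB/s best case, less with parity
10GbE link:     ≈ 1250 MB/s raw, roughly 1100 MB/s usable

If that math is right, the four spinning disks themselves, rather than the SATA II ports, are probably the current limit, and it would take more (or faster) drives before the 10GbE link becomes the bottleneck; but I may be missing something, hence the question.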


17 minutes ago, Xepnewbie2018 said:

Can anyone comment or share what could contribute to a speed bottleneck as the number of drives starts to increase (8 drives, 12 drives, etc.)? My goal is to build a 12-drive system (36TB).

 

A motherboard change is definitely the first thing to do. Boards are the dirt-cheap part of the build; the more expensive stuff is the RAM and CPU. You could probably try a build like mine: LGA1151 is cheap on eBay because of the whole Intel fuckup where they changed the socket.

 

As far as hard drives go, I think you are probably better off just buying two SSDs and using them as a write cache instead of getting 12 drives for speed. I don't know if anybody has attempted this on XPEnology, though.

 

Also, Synology's write cache is suspect: I have seen conflicting videos where it does improve the speed and where it doesn't at all.

 

 

 

EDIT: I also noticed Synology has started to use RAM as fast storage for Synology device databases. This applies to Universal Search and something else (I don't remember right now). So maybe in the future they will just use a RAM drive for every application?

 

 

 

 

 

Edited by test4321

test4321

Thanks for the advice... I have been looking at a couple of Supermicro server motherboards; still doing research.

 

I have also read the SSD cache reviews... not sure it applies to my current use; I believe a cache would be more useful for files that I use often. According to the Synology website: "SSD cache can improve the performance of random access by storing frequently accessed data on the SSD cache. SSD cache can be mounted on a volume or iSCSI LUN (Block-Level)."

https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/StorageManager/genericssdcache

 

Currently my plan is long-term redundant backups, but I wonder whether, if I decide to run virtual machines in XPEnology, they would take advantage of this.

 

 

 

 


On 1/31/2018 at 9:15 PM, mervincm said:

I don't think that the Mellanox ConnectX-2 (I tested one in the past) has any sort of built-in driver support in XPEnology.

 

That is not the case. I already had the adapter IC listed, so in theory it was possible to check, but since the ConnectX type name is what people usually reference, I added a listing of ConnectX types and IC families.

 

It looks like every adapter from ConnectX (1) to ConnectX-6 is supported by the drivers that come from Synology with DSM 6.1 for DS3615/3617. (The 916+ comes without these drivers from Synology, and my extra.lzma has only an untested driver for it since there was no feedback about it, so it should be treated as something that may or may not work; there is a special 916+ section to read about the shortcomings. The 916+ is a consumer model and does not provide as many drivers as the business models.)

 

It looks like cheap ConnectX-2/3 cards are not such a bad choice, as they are natively supported by DSM as it comes from Synology.
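If you want to double-check your own card against that list, a quick look over SSH should be enough; the grep below is just a suggestion (15b3 is Mellanox's PCI vendor ID, and the device ID after the colon can be looked up at https://pci-ids.ucw.cz/read/PC/15b3 or in the mlx4/mlx5 device tables in the kernel source):

lspci | grep -i 15b3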

 

Edited by IG-88

I have an HP MicroServer Gen8 running the latest DSM 6.1.5-15254, booting as DS3617xs. I have installed a Brocade BR-1020 10GbE network card. I am using Jun's 1.02b loader with the latest extra.lzma file (v4.6, 11.03.2018), but it seems that DSM is not recognizing this card: no 10GbE network interfaces show up in the menu. Is there any setting in the menu to activate the interface? I am new to 10Gb networking... Thanks


5 hours ago, b4u said:

I am using Jun's 1.02b loader with the latest extra.lzma file (v4.6, 11.03.2018), but it seems that DSM is not recognizing this card: no 10GbE network interfaces show up in the menu.

 

If it works normally it should just show up. You gave no info from the logs, but the most common cause would be that the card needs firmware, so I checked the kernel source and found this:

/linux-3.10.x/drivers/net/ethernet/brocade/bna/cna.h
...
#define CNA_FW_FILE_CT  "ctfw-3.1.0.0.bin"
#define CNA_FW_FILE_CT2 "ct2fw-3.1.0.0.bin"
...

I added the files and created a new version 4.7 you can try:

-> https://xpenology.com/forum/topic/9508-driver-extension-jun-102bdsm61x-for-3615xs-3617xs-916/
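
Once you have booted with the new extra.lzma, a quick check over SSH would look something like this; the commands are only a suggestion, and the /lib/firmware path is an assumption on my side (adjust if the firmware files end up elsewhere):

dmesg | grep -i bna               # did the Brocade 10G driver load at all?
dmesg | grep -i ctfw              # firmware load attempts or errors
ls /lib/firmware | grep -i ctfw   # are ctfw-3.1.0.0.bin / ct2fw-3.1.0.0.bin present? (path assumed)
cat /var/log/dmesg | grep eth     # were any new ethX interfaces registered?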

 

 

Edited by IG-88

Small Test

 

HP SFF, Intel Xeon D-1527

 

Installed the DS3617xs v1.02b loader for DSM 6.1

 

Added an LSI 9207 HBA

and a Mellanox MHQH29B-XTR ConnectX-2

 

The system is bare metal

 

Results: the LSI 9207 HBA is very transparent; it works fine right out of the box.

On the other hand, the Mellanox MHQH29B-XTR ConnectX-2 does not show up under network interfaces.

 

Via SSH:

 

Me@Test:/$ lspci
0000:00:00.0 Class 0600: Device 8086:0c00 (rev 06)
0000:00:01.0 Class 0604: Device 8086:0c01 (rev 06)
0000:00:02.0 Class 0300: Device 8086:0412 (rev 06)
0000:00:03.0 Class 0403: Device 8086:0c0c (rev 06)
0000:00:14.0 Class 0c03: Device 8086:8c31 (rev 04)
0000:00:16.0 Class 0780: Device 8086:8c3a (rev 04)
0000:00:16.3 Class 0700: Device 8086:8c3d (rev 04)
0000:00:19.0 Class 0200: Device 8086:153a (rev 04)
0000:00:1a.0 Class 0c03: Device 8086:8c2d (rev 04)
0000:00:1b.0 Class 0403: Device 8086:8c20 (rev 04)
0000:00:1c.0 Class 0604: Device 8086:8c10 (rev d4)
0000:00:1c.4 Class 0604: Device 8086:8c18 (rev d4)
0000:00:1d.0 Class 0c03: Device 8086:8c26 (rev 04)
0000:00:1f.0 Class 0601: Device 8086:8c4e (rev 04)
0000:00:1f.2 Class 0106: Device 8086:8c02 (rev 04)
0000:00:1f.3 Class 0c05: Device 8086:8c22 (rev 04)
0000:01:00.0 Class 0107: Device 1000:0087 (rev 05)
0000:03:00.0 Class 0c06: Device 15b3:673c (rev b0)
0001:00:02.0 Class 0000: Device 8086:6f04 (rev ff)
0001:00:02.2 Class 0000: Device 8086:6f06 (rev ff)
0001:00:03.0 Class 0000: Device 8086:6f08 (rev ff)
0001:00:03.2 Class 0000: Device 8086:6f0a (rev ff)
0001:00:1f.0 Class 0000: Device 8086:8c54 (rev ff)
0001:00:1f.3 Class 0000: Device 8086:8c22 (rev ff)
0001:06:00.0 Class 0000: Device 1b4b:1475 (rev ff)
0001:08:00.0 Class 0000: Device 1b4b:9235 (rev ff)
0001:09:00.0 Class 0000: Device 8086:1533 (rev ff)
0001:0c:00.0 Class 0000: Device 8086:1533 (rev ff)
0001:0d:00.0 Class 0000: Device 8086:1533 (rev ff)
Me@Test:/$

 

Not sure how to test it any further. I only have this system for testing this weekend, then I have to give it back.


 

7 hours ago, RacerX said:

0000:03:00.0 Class 0c06: Device 15b3:673c (rev b0)

I just searched the web for the PCI vendor/device ID:

https://pci-ids.ucw.cz/read/PC/15b3

 

15b3:673c - MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]

 

In the kernel source of the DSM 6.1 kernel:

/linux-3.10.x/drivers/net/ethernet/mellanox/mlx4/main.c

static DEFINE_PCI_DEVICE_TABLE(mlx4_pci_table) = {
...
        { PCI_VDEVICE(MELLANOX, 0x673c), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT25408 "Hermon" EN 10GigE */
        { PCI_VDEVICE(MELLANOX, 0x6368), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT25408 "Hermon" EN 10GigE PCIe gen2 */
        { PCI_VDEVICE(MELLANOX, 0x6750), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT25458 ConnectX EN 10GBASE-T 10GigE */
        { PCI_VDEVICE(MELLANOX, 0x6372), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT25458 ConnectX EN 10GBASE-T+Gen2 10GigE */
        { PCI_VDEVICE(MELLANOX, 0x675a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT26468 ConnectX EN 10GigE PCIe gen2*/
        { PCI_VDEVICE(MELLANOX, 0x6764), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
        { PCI_VDEVICE(MELLANOX, 0x6746), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT26478 ConnectX2 40GigE PCIe gen2 */
        { PCI_VDEVICE(MELLANOX, 0x676e), MLX4_PCI_DEV_FORCE_SENSE_PORT },
        /* MT25400 Family [ConnectX-2 Virtual Function] */
        { PCI_VDEVICE(MELLANOX, 0x1002), MLX4_PCI_DEV_IS_VF },
        /* MT27500 Family [ConnectX-3] */
        { PCI_VDEVICE(MELLANOX, 0x1003), 0 },
        /* MT27500 Family [ConnectX-3 Virtual Function] */
...

And Synology seems to use even newer drivers (3.3-1.0.4), as there is also an mlx5 module that is not part of the original kernel, so your card should work OOTB.

I guess your DSM 6.1 is running with the card plugged in, so have a look at /var/log/dmesg to see what it says about the card.

The driver is natively part of DSM and should load, so there should be something in the log about it.
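For example (the grep patterns are only suggestions):

cat /var/log/dmesg | grep -i mlx4    # driver and firmware messages for the ConnectX
cat /var/log/dmesg | grep -i eth     # did any ethX interface get registered for it?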

 

Mellanox's officially supported cards for the 3.3-1.0.4 driver, and the minimum firmware they need, can be found here:

http://www.mellanox.com/page/mlnx_ofed_matrix?mtag=linux_sw_drivers

 


It does.

dmesg -wH
[+0.000002] Backport generated by backports.git v3.18.1-1-0-g5e9ec4c
[  +0.007100] Compat-mlnx-ofed backport release: cd30181
[  +0.000002] Backport based on mlnx_ofed/mlnx_rdma.git cd30181
[  +0.000001] compat.git: mlnx_ofed/mlnx_rdma.git
[  +0.053378] mlx4_core: Mellanox ConnectX core driver v3.3-1.0.4 (03 Jul 2016)
[  +0.000008] mlx4_core: Initializing 0000:03:00.0
[  +0.000033] mlx4_core 0000:03:00.0: enabling device (0100 -> 0102)
[  +0.420818] systemd-udevd[6199]: starting version 204
[  +1.251641] mlx4_core 0000:03:00.0: DMFS high rate mode not supported
[  +0.006420] mlx4_core: device is working in RoCE mode: Roce V1
[  +0.000001] mlx4_core: gid_type 1 for UD QPs is not supported by the device, gid_type 0 was chosen instead
[  +0.000001] mlx4_core: UD QP Gid type is: V1
[  +1.253954] mlx4_core 0000:03:00.0: PCIe BW is different than device's capability
[  +0.000002] mlx4_core 0000:03:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
[  +0.000001] mlx4_core 0000:03:00.0: PCIe link width is x4, device supports x8
[  +0.000087] mlx4_core 0000:03:00.0: irq 52 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 53 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 54 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 55 for MSI/MSI-X
[  +0.000004] mlx4_core 0000:03:00.0: irq 56 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 57 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 58 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 59 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 60 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 61 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 62 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 63 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 64 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 65 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 66 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 67 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:03:00.0: irq 68 for MSI/MSI-X
[  +1.150446] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.3-1.0.4 (03 Jul 2016)

 


Adjusted test: removed the LSI 9207 and tested the ConnectX-2 card in the first slot.


 

dmesg -wH
[  +0.019285] Compat-mlnx-ofed backport release: cd30181
[  +0.000002] Backport based on mlnx_ofed/mlnx_rdma.git cd30181
[  +0.000001] compat.git: mlnx_ofed/mlnx_rdma.git
[  +0.061974] mlx4_core: Mellanox ConnectX core driver v3.3-1.0.4 (03 Jul 2016)
[  +0.000008] mlx4_core: Initializing 0000:01:00.0
[  +0.000031] mlx4_core 0000:01:00.0: enabling device (0100 -> 0102)
[  +0.530407] systemd-udevd[5965]: starting version 204
[  +1.141257] mlx4_core 0000:01:00.0: DMFS high rate mode not supported
[  +0.006462] mlx4_core: device is working in RoCE mode: Roce V1
[  +0.000001] mlx4_core: gid_type 1 for UD QPs is not supported by the device, gid_type 0 was chosen instead
[  +0.000001] mlx4_core: UD QP Gid type is: V1
[  +0.750613] mlx4_core 0000:01:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
[  +0.000002] mlx4_core 0000:01:00.0: PCIe link width is x8, device supports x8
[  +0.000080] mlx4_core 0000:01:00.0: irq 44 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 45 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 46 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 47 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 48 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 49 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 50 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 51 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 52 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 53 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 54 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 55 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 56 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 57 for MSI/MSI-X
[  +0.000002] mlx4_core 0000:01:00.0: irq 58 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 59 for MSI/MSI-X
[  +0.000003] mlx4_core 0000:01:00.0: irq 60 for MSI/MSI-X
[  +0.822443] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.3-1.0.4 (03 Jul 2016)
[  +3.135645] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.00 ($DateTime: 2015/07/28 00:13:30 $)

 


Looks like the driver is working.

Did you change the NIC settings in grub.cfg on your USB flash drive?

like:

set netif_num=3

set mac1=...
set mac2=...
set mac3=...

Assuming a one-port onboard NIC (eth0) plus the two-port Mellanox, what does the log say about ethX?

 

cat /var/log/dmesg | grep eth1

cat /var/log/dmesg | grep eth2

 


If it's working, that is news to me. My test is stock DS3617xs 6.1 with Jun's mod v1.02b (7/4/2017); I did not change the USB stick. Do I need to change it for the test? The Mellanox card has two ports.

cat /var/log/dmesg | grep eth1
cat /var/log/dmesg | grep eth2

just return nothing.

 

I connected a cable from one port to the other, since I do not have a 10Gb switch.

 

There are no link lights, and the card does not show up under the network interfaces.


1 hour ago, RacerX said:

If it's working, that is news to me.

 

Working in the sense that it detects the hardware as present and does not crash; so maybe something else is missing.

 

 

1 hour ago, RacerX said:

My test is stock DS3617xs 6.1 with Jun's mod v1.02b (7/4/2017); I did not change the USB stick. Do I need to change it for the test?

 

I never tried what happens if you insert more NICs and do not change these settings, so yes, try changing it; it can easily be changed back later.

If you don't have the real MAC addresses, just make some up; they're not important for testing.
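
For example, with a one-port onboard NIC plus the two-port Mellanox, the relevant lines in grub.cfg would look something like this (the MAC addresses below are simply made up):

set netif_num=3
set mac1=0011322CA785
set mac2=0011322CA786
set mac3=0011322CA787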

 

1 hour ago, RacerX said:

The Mellanox card has two ports.

 

I guess so; the model you gave is a two-port card.

 

1 hour ago, RacerX said:

cat /var/log/dmesg | grep eth1

cat /var/log/dmesg | grep eth2

 

just return nothing.

 

If they are not present in the log, then the GUI will not show anything about additional NICs.
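
To see which interfaces the kernel registered at all, this should be enough (just a suggestion):

ls /sys/class/net     # lists every network interface the kernel knows about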


It only makes sense to test this if changing the grub.cfg fails. If it's about the grub.cfg, then you're setting it up for one more card either way, and it will make no difference whether that card adds one or two more ports.

From the dmesg output it looks like the driver loads, so the problem might not be the driver itself.

 

Edited by IG-88
Link to comment
Share on other sites
