RacerX

Members
  • Content Count: 31
  • Joined
  • Last visited
  • Days Won: 1

RacerX last won the day on March 18

RacerX had the most liked content!

Community Reputation: 3 Neutral

About RacerX
  • Rank: Junior Member

  1. FYI: you only need to change the BIOS C1 setting for 6.x. It affects the power button....
  2. The onboard NIC (Broadcom BCM95723) works in 5.2; however, performance is much better with the other two PCIe cards....
  3. Skiptar, two issues. First, the BCM95723 is no longer supported, so disable the onboard network card in the BIOS and try an add-in card (Intel CT PCIe Gigabit, HP NC360T). Second, disable the CPU C1 power management setting in the BIOS. (A quick way to check which NICs DSM actually detects is sketched after the post list.)
  4. Hello, I'm back again. Yesterday I set up a test on one of my N54Ls. DS3615xs_23739 works correctly. Hardware-wise: an Intel PCIe desktop NIC (e1000e) is in the middle x4 slot and the 10Gb ConnectX-2 is in the x16 slot. I reflashed the card according to the PID from Mellanox. The card is native Mellanox InfiniBand, but the ports can be configured to do IP or InfiniBand. This is just a test, no data involved. On the desktop I can see the Intel NIC but not the Mellanox 10Gb NIC. Here is the log:

     dmesg -wH
     [ +0.000002] e1000e 0000:03:00.0 eth0: Intel(R) PRO/1000 Network Connection
     [ +0.000013] e1000e 0000:03:00.0 eth0: MAC: 3, PHY: 8, PBA No: E46981-005
     [ +0.024683] Intel(R) Gigabit Ethernet Network Driver - version 5.3.5.3
     [ +0.000005] Copyright (c) 2007-2015 Intel Corporation.
     [ +0.017744] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.1.3
     [ +0.000005] Copyright(c) 1999 - 2017 Intel Corporation.
     [ +0.019045] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.3.6
     [ +0.000004] i40e: Copyright(c) 2013 - 2017 Intel Corporation.
     [ +0.022818] tn40xx low_mem_msg proc entry initialized
     [ +0.000007] tn40xx low_mem_counter proc entry initialized
     [ +0.000003] tn40xx debug_msg proc entry initialized
     [ +0.000002] tn40xx: Tehuti Network Driver, 0.3.6.12.3
     [ +0.041244] qed_init called
     [ +0.000005] QLogic FastLinQ 4xxxx Core Module qed 8.33.9.0
     [ +0.000002] creating debugfs root node
     [ +0.008795] qede_init: QLogic FastLinQ 4xxxx Ethernet Driver qede 8.33.9.0
     [ +0.009530] Loading modules backported from Linux version v3.18.1-0-g39ca484
     [ +0.000004] Backport generated by backports.git v3.18.1-1-0-g5e9ec4c
     [ +0.046503] Compat-mlnx-ofed backport release: c22af88
     [ +0.000005] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git c22af88
     [ +0.000002] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git
     [ +0.106092] mlx4_core: Mellanox ConnectX core driver v4.1-1.0.2 (27 Jun 2017)
     [ +0.000050] mlx4_core: Initializing 0000:02:00.0
     [ +0.627691] systemd-udevd[5692]: starting version 204
     [ +1.052738] mlx4_core 0000:02:00.0: DMFS high rate mode not supported
     [ +0.000235] mlx4_core: device is working in RoCE mode: Roce V1
     [ +0.000002] mlx4_core: UD QP Gid type is: V1
     [ +1.121411] mlx4_core 0000:02:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
     [ +0.000007] mlx4_core 0000:02:00.0: PCIe link width is x8, device supports x8
     [ +0.000131] mlx4_core 0000:02:00.0: irq 45 for MSI/MSI-X
     [ +0.000007] mlx4_core 0000:02:00.0: irq 46 for MSI/MSI-X
     [ +0.000007] mlx4_core 0000:02:00.0: irq 47 for MSI/MSI-X
     [ +0.000006] mlx4_core 0000:02:00.0: irq 48 for MSI/MSI-X
     [ +0.000005] mlx4_core 0000:02:00.0: irq 49 for MSI/MSI-X
     [ +0.232870] mlx4_en: Mellanox ConnectX HCA Ethernet driver v4.1-1.0.2 (27 Jun 2017)

     Do I need to set up the grub info for the card? (A sketch of the relevant grub.cfg lines is after the post list.)
  5. Thanks for the help. I have to return the hardware that I had for testing this weekend.
  6. I changed the last number of the MACs to 1, then 2, then 3 and saved it as text, but for some reason I get error 13 over and over... yuck.
  7. I just bought the ConnectX-3 card last week, so I wanted to make sure it works (it does, in my limited testing). I know more about the ConnectX-2 cards since I've had them for a long time. I tried changing grub to 3 MACs, but now I get error 13 every time I try to install it. Thanks
  8. Last night I changed the test for a proof of concept (Ubuntu 16.04 includes the kernel sources in /usr/src) and it works, with the newer single-port card and the scripted Mellanox OFED install. I configured the ConnectX-3 as root in the terminal; this document was great: https://community.mellanox.com/docs/DOC-2431 (the rough steps are sketched after the post list). I'm changing back to the ConnectX-2 (dual port) and DS3617 today.
  9. Hi, I have another single-port card in another computer that I could test tomorrow, if that helps:

     02:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
  10. If it's working, that is news to me. My test is stock DS3617xs 6.1, Jun's Mod V1.02b (7/4/2017); I did not change the USB stick. Do I need to change it for the test? The Mellanox card has two ports.

      cat /var/log/dmesg | grep eth1
      cat /var/log/dmesg | grep eth2

      just return nothing. I connected a cable from one port to the other port since I do not have a 10Gb switch. There are no link lights and the card does not show up under the network interfaces. (A broader interface check is sketched after the post list.)
  11. Adjusted test: removed the LSI 9207 and tested the ConnectX-2 card in the first slot.

      dmesg -wH
      [ +0.019285] Compat-mlnx-ofed backport release: cd30181
      [ +0.000002] Backport based on mlnx_ofed/mlnx_rdma.git cd30181
      [ +0.000001] compat.git: mlnx_ofed/mlnx_rdma.git
      [ +0.061974] mlx4_core: Mellanox ConnectX core driver v3.3-1.0.4 (03 Jul 2016)
      [ +0.000008] mlx4_core: Initializing 0000:01:00.0
      [ +0.000031] mlx4_core 0000:01:00.0: enabling device (0100 -> 0102)
      [ +0.530407] systemd-udevd[5965]: starting version 204
      [ +1.141257] mlx4_core 0000:01:00.0: DMFS high rate mode not supported
      [ +0.006462] mlx4_core: device is working in RoCE mode: Roce V1
      [ +0.000001] mlx4_core: gid_type 1 for UD QPs is not supported by the device, gid_type 0 was chosen instead
      [ +0.000001] mlx4_core: UD QP Gid type is: V1
      [ +0.750613] mlx4_core 0000:01:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
      [ +0.000002] mlx4_core 0000:01:00.0: PCIe link width is x8, device supports x8
      [ +0.000080] mlx4_core 0000:01:00.0: irq 44 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 45 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 46 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 47 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 48 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 49 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 50 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 51 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 52 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 53 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 54 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 55 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 56 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 57 for MSI/MSI-X
      [ +0.000002] mlx4_core 0000:01:00.0: irq 58 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 59 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:01:00.0: irq 60 for MSI/MSI-X
      [ +0.822443] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.3-1.0.4 (03 Jul 2016)
      [ +3.135645] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.00 ($DateTime: 2015/07/28 00:13:30 $)
  12. It does..

      dmesg -wH
      [ +0.000002] Backport generated by backports.git v3.18.1-1-0-g5e9ec4c
      [ +0.007100] Compat-mlnx-ofed backport release: cd30181
      [ +0.000002] Backport based on mlnx_ofed/mlnx_rdma.git cd30181
      [ +0.000001] compat.git: mlnx_ofed/mlnx_rdma.git
      [ +0.053378] mlx4_core: Mellanox ConnectX core driver v3.3-1.0.4 (03 Jul 2016)
      [ +0.000008] mlx4_core: Initializing 0000:03:00.0
      [ +0.000033] mlx4_core 0000:03:00.0: enabling device (0100 -> 0102)
      [ +0.420818] systemd-udevd[6199]: starting version 204
      [ +1.251641] mlx4_core 0000:03:00.0: DMFS high rate mode not supported
      [ +0.006420] mlx4_core: device is working in RoCE mode: Roce V1
      [ +0.000001] mlx4_core: gid_type 1 for UD QPs is not supported by the device, gid_type 0 was chosen instead
      [ +0.000001] mlx4_core: UD QP Gid type is: V1
      [ +1.253954] mlx4_core 0000:03:00.0: PCIe BW is different than device's capability
      [ +0.000002] mlx4_core 0000:03:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
      [ +0.000001] mlx4_core 0000:03:00.0: PCIe link width is x4, device supports x8
      [ +0.000087] mlx4_core 0000:03:00.0: irq 52 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 53 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 54 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 55 for MSI/MSI-X
      [ +0.000004] mlx4_core 0000:03:00.0: irq 56 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 57 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 58 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 59 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 60 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 61 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 62 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 63 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 64 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 65 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 66 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 67 for MSI/MSI-X
      [ +0.000003] mlx4_core 0000:03:00.0: irq 68 for MSI/MSI-X
      [ +1.150446] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.3-1.0.4 (03 Jul 2016)
  13. Small test: HP SFF, Intel Xeon D-1527. Installed DS3617 V1.02b for DSM 6.1 and added an LSI HBA 9207 and a Mellanox MHQH29B-XTR ConnectX-2. The system is bare metal.

      Results: the LSI HBA 9207 card is very transparent; it works fine right out of the box. On the other hand, the Mellanox MHQH29B-XTR ConnectX-2 does not show up under network interfaces. With SSH:

      Me@Test:/$ lspci
      0000:00:00.0 Class 0600: Device 8086:0c00 (rev 06)
      0000:00:01.0 Class 0604: Device 8086:0c01 (rev 06)
      0000:00:02.0 Class 0300: Device 8086:0412 (rev 06)
      0000:00:03.0 Class 0403: Device 8086:0c0c (rev 06)
      0000:00:14.0 Class 0c03: Device 8086:8c31 (rev 04)
      0000:00:16.0 Class 0780: Device 8086:8c3a (rev 04)
      0000:00:16.3 Class 0700: Device 8086:8c3d (rev 04)
      0000:00:19.0 Class 0200: Device 8086:153a (rev 04)
      0000:00:1a.0 Class 0c03: Device 8086:8c2d (rev 04)
      0000:00:1b.0 Class 0403: Device 8086:8c20 (rev 04)
      0000:00:1c.0 Class 0604: Device 8086:8c10 (rev d4)
      0000:00:1c.4 Class 0604: Device 8086:8c18 (rev d4)
      0000:00:1d.0 Class 0c03: Device 8086:8c26 (rev 04)
      0000:00:1f.0 Class 0601: Device 8086:8c4e (rev 04)
      0000:00:1f.2 Class 0106: Device 8086:8c02 (rev 04)
      0000:00:1f.3 Class 0c05: Device 8086:8c22 (rev 04)
      0000:01:00.0 Class 0107: Device 1000:0087 (rev 05)
      0000:03:00.0 Class 0c06: Device 15b3:673c (rev b0)
      0001:00:02.0 Class 0000: Device 8086:6f04 (rev ff)
      0001:00:02.2 Class 0000: Device 8086:6f06 (rev ff)
      0001:00:03.0 Class 0000: Device 8086:6f08 (rev ff)
      0001:00:03.2 Class 0000: Device 8086:6f0a (rev ff)
      0001:00:1f.0 Class 0000: Device 8086:8c54 (rev ff)
      0001:00:1f.3 Class 0000: Device 8086:8c22 (rev ff)
      0001:06:00.0 Class 0000: Device 1b4b:1475 (rev ff)
      0001:08:00.0 Class 0000: Device 1b4b:9235 (rev ff)
      0001:09:00.0 Class 0000: Device 8086:1533 (rev ff)
      0001:0c:00.0 Class 0000: Device 8086:1533 (rev ff)
      0001:0d:00.0 Class 0000: Device 8086:1533 (rev ff)
      Me@Test:/$

      Not sure how to test it out any further (there is a short lspci/dmesg sketch after the post list). I only have this system for testing this weekend, then I have to give it back.
  14. Good news: just tested bare metal 1.02b. It works fine; updated to DSM 6.1.5-15254, then to DSM 6.1.5-15254 Update 1. Restoring data (one day over 1Gb). Yesterday I tested ESXi 6.5 on the N54L; even with 16GB it's just too slow. Today I set up a spare Sandy Bridge PC that I had for testing. I needed to scrounge around for another stick of memory, so it's a whopping 6GB, but the Core 2 E8500 runs ESXi 6.5 a lot better. I have a Supermicro X9 (Xeon) board that has poor USB boot support; I'm working on getting an HBA and then giving it a better test. I want to test the ConnectX-3; I see it in Device Manager on Windows 2016 Essentials, but I need another system to test it with (ESXi 6.5, XPEnology 6.1). I was surprised how well Windows 2016 Essentials runs with an SSD on the N54L. I would really like to test Hyper-V, but with issues like images having to be in ISO format and Samba multichannel being experimental, it seems far off. Thanks for the help. Thoughts?
  15. During a test today I set up DSM 6.1 DS3617 (2-27-2017) and it works properly. I also have an NC360T, not plugged into the switch. Then I did the upgrade to DSM 6.1.5-15254; it went through the whole 10 minutes and I knew something was up, so I shut down with the power button. When it came back up: "DiskStation Not Found". I shut down again and connected the NC360T to the switch. This time it found the NC360T's two NICs. So I logged in and saw that the shutdown was not graceful, OK... I noticed the BCM5723 is gone; it only shows the NC360T. I ran the update to DSM 6.1.5-15254 Update 1 and it is still MIA (a quick check for it is sketched after the post list).

      As for the test: I was doing a test of Windows 2016 Essentials. I pulled the plug and put my drives back in, and I received the wonderful "DiskStation not found" from Synology Assistant. I struggled to get all my new data moved to my other box. This Synology page was helpful: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC

      So the whole week no XPEnology, until I read your post about Broadcom in German (via Google Translate) saying to make sure C1 is disabled, and that did the trick. Awesome! I have always used DS3615 (i3). So today I tested DS3617 (Xeon) by accident; the synoboot is from 2/27/2017. It was just a test, and I was surprised that the onboard NIC installed, but when I did the update it disappeared. I was lucky to have the other HP NIC to work around it, but I need to use the onboard NIC. I want to try the Mellanox CX324A ConnectX-3, so if you want me to test it out I can, because I'm in between systems at the moment. I've been thinking of changing from bare metal to ESXi so I can run both; there is a good how-to on YouTube right now. Thanks
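
For post 3: a minimal sketch, assuming SSH access to the DSM box, of how to check which NICs the kernel detected and which drivers claimed them. The grep patterns (tg3 for the Broadcom BCM95723, e1000e for the Intel CT) are assumptions about the usual Linux drivers for those cards, not something taken from the posts.

    # PCI devices in the Ethernet class (0200) as DSM's lspci prints them
    lspci | grep "Class 0200"

    # Which NIC drivers loaded and which ethN interfaces they registered (assumed driver names)
    dmesg | grep -iE "tg3|e1000e|eth[0-9]"

    # Every interface the kernel created, even ones without link
    ip link show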
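
For post 4: a sketch of the grub.cfg lines the question refers to, assuming Jun's loader layout where the NIC count and MAC variables live in grub/grub.cfg on the USB stick. The MAC values shown are placeholders, and whether the Mellanox ports need entries here at all is exactly the open question in that post.

    # grub/grub.cfg on the loader USB stick (Jun's loader; MACs below are placeholders)
    set netif_num=3            # total number of NICs DSM should bring up
    set mac1=001132AABB01      # first NIC (e.g. the onboard/Intel port), no colons
    set mac2=001132AABB02      # second NIC (e.g. first ConnectX-2 port)
    set mac3=001132AABB03      # third NIC (e.g. second ConnectX-2 port)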
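
For post 8: a rough sketch of the kind of commands the Ubuntu 16.04 proof of concept involves, assuming the MLNX_OFED bundle plus the mlxconfig tool; the directory name, the /dev/mst device name, and the LINK_TYPE value should be checked against the linked Mellanox document rather than taken from here.

    # Install the OFED stack from the extracted bundle (run as root; path is a placeholder)
    cd /root/MLNX_OFED_LINUX-4.x-ubuntu16.04-x86_64
    ./mlnxofedinstall

    # Start the Mellanox tools service and note the device node name
    mst start
    mst status

    # Switch port 1 of the ConnectX-3 from InfiniBand to Ethernet (2 = ETH); device name is an assumption
    mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2

    # After a reboot, confirm the port now reports Ethernet as its link layer
    ibv_devinfo | grep -i link_layer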
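
For post 10: a minimal sketch, over SSH on the DSM box, of a broader check than grepping dmesg for eth1/eth2, since those names only exist if the ports were actually registered. The interface name in the ethtool line is just an example, and ethtool may not be present on every DSM build.

    # Did the ConnectX driver load and say anything about the card?
    dmesg | grep -i mlx4

    # List every interface the kernel created, regardless of name or link state
    ip link show        # or: ifconfig -a

    # If a port did appear (eth2 is an example name), check whether it sees a link
    ethtool eth2 | grep -i "link detected"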
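
For post 13: a short sketch for picking the Mellanox card out of that lspci output. 15b3 is the Mellanox PCI vendor ID, and 15b3:673c is the 0000:03:00.0 entry above; note that it reports class 0c06 (InfiniBand controller) rather than 0200 (Ethernet controller). The -d/-k options may not exist in the cut-down lspci on DSM, so plain grep and dmesg are shown as fallbacks.

    # Show only Mellanox devices (vendor ID 15b3), with the bound kernel driver if -k is supported
    lspci -d 15b3: -k

    # Fallback if -d/-k are unavailable on DSM's lspci
    lspci | grep -i 15b3

    # See whether mlx4_core mentioned the card during boot
    dmesg | grep -i mlx4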
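
For post 15: a minimal sketch, over SSH, for checking whether the updated DSM still sees the onboard BCM5723 at all or simply has no driver bound to it; tg3 as the expected Broadcom driver and 14e4 as the Broadcom vendor ID are the only assumptions here.

    # Is the onboard Broadcom still visible on the PCI bus? (14e4 = Broadcom vendor ID)
    lspci | grep -i 14e4

    # Did a driver claim it during boot?
    dmesg | grep -iE "tg3|broadcom"

    # Interfaces the kernel registered, including ones without link
    ifconfig -a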