• Announcements

    • Polanskiman

      DSM 6.2-23739 - WARNING   05/23/2018

      This is a MAJOR update of DSM. DO NOT UPDATE TO DSM 6.2 with Jun's loader 1.02b or earlier. Your box will be bricked. You have been warned. https://www.synology.com/en-global/releaseNote/DS3615xs

RacerX

Members
  • Content count: 27
  • Joined
  • Last visited

Community Reputation

0 Neutral

About RacerX

  • Rank
    Junior Member
  1. 10Gbe setup - will this work with 6.1?

    Thanks for the help. I have to return the hardware I had for testing this weekend.
  2. 10Gbe setup - will this work with 6.1?

    I changed the last digit of each MAC to 1, then 2, then 3, and saved the file as plain text, but for some reason it's error 13 over and over... yuck.
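    For reference, the MAC edits live in grub/grub.cfg on the USB stick. A minimal sketch of what I mean, assuming Jun's 1.02b layout (the values below are illustrative placeholders, not real MACs; double-check your own grub.cfg):

        # grub/grub.cfg -- one macN entry per NIC, and netif_num must match the count
        set netif_num=3
        set mac1=0011322CA781
        set mac2=0011322CA782
        set mac3=0011322CA783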
  3. 10Gbe setup - will this work with 6.1?

    I just bought the ConnectX-3 card last week, so I wanted to make sure it works (it does, in my limited testing). I know more about the ConnectX-2 cards since I've had them for a long time. I tried changing grub to three MACs, but now I get error 13 every time I try to install. Thanks
  4. 10Gbe setup - will this work with 6.1?

    Last night I changed the test for a proof of concept (Ubuntu 16.04 includes the kernel sources in /usr/src) and it works, with the newer single-port card and the scripted Mellanox OFED install. I configured the ConnectX-3 as root in the terminal. This document was great: https://community.mellanox.com/docs/DOC-2431 I'm changing back to the ConnectX-2 (dual port) and DS3617 today.
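    In case it helps anyone, the scripted install I ran went roughly like this; this is a sketch from memory, so treat the bundle name and the interface name as placeholders and check the Mellanox download page and the output of "ip link" for your own values:

        # Ubuntu 16.04, run as root
        tar xzf MLNX_OFED_LINUX-<version>-ubuntu16.04-x86_64.tgz   # bundle name is a placeholder
        cd MLNX_OFED_LINUX-<version>-ubuntu16.04-x86_64
        ./mlnxofedinstall --force            # scripted installer shipped in the bundle
        /etc/init.d/openibd restart          # reload the Mellanox driver stack
        ip link set ens2 up                  # 'ens2' is a placeholder interface name
        ip addr add 10.10.10.1/24 dev ens2   # quick point-to-point test address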
  5. 10Gbe setup - will this work with 6.1?

    Hi, I have another single-port card in another computer:

        02:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

    I could test it tomorrow if that helps.
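    (If it helps to confirm which Mellanox silicon is in a box, something like this over SSH should show it; note that DSM's lspci prints only numeric IDs, and 15b3 is the Mellanox vendor ID:)

        lspci -nn | grep -i mellanox   # full Linux box with a pci.ids database
        lspci | grep 15b3              # DSM / minimal lspci: match the vendor ID instead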
  6. 10Gbe setup - will this work with 6.1?

    If it's working, that is news to me. My test is stock DS3617xs 6.1 with Jun's Mod V1.02b (7/4/2017); I did not change the USB stick. Do I need to change it for the test? The Mellanox card has two ports.

        cat /var/log/dmesg | grep eth1
        cat /var/log/dmesg | grep eth2

    just return nothing. I connected a cable from one port to the other since I do not have a 10GbE switch. There are no link lights, and the card does not show up under the network interfaces.
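    One thing I still want to rule out (this is a guess on my part): the MHQH29B is a VPI card, and if the ports come up in InfiniBand mode rather than Ethernet, mlx4_en never creates an ethX interface at all. On most Linux kernels the mlx4 driver exposes the port type in sysfs; I have not confirmed DSM exposes the same path, so treat this as a sketch:

        # Replace 0000:03:00.0 with the card's PCI address from lspci
        dmesg | grep -i mlx4                                # did mlx4_core / mlx4_en load at all?
        cat /sys/bus/pci/devices/0000:03:00.0/mlx4_port1    # prints "ib" or "eth"
        cat /sys/bus/pci/devices/0000:03:00.0/mlx4_port2
        # If a port reports "ib", switching it to Ethernet is usually:
        echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1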
  7. 10Gbe setup - will this work with 6.1?

    Adjusted the test: removed the LSI 9207 and tested the ConnectX-2 card in the first slot.

        dmesg -wH
        [ +0.019285] Compat-mlnx-ofed backport release: cd30181
        [ +0.000002] Backport based on mlnx_ofed/mlnx_rdma.git cd30181
        [ +0.000001] compat.git: mlnx_ofed/mlnx_rdma.git
        [ +0.061974] mlx4_core: Mellanox ConnectX core driver v3.3-1.0.4 (03 Jul 2016)
        [ +0.000008] mlx4_core: Initializing 0000:01:00.0
        [ +0.000031] mlx4_core 0000:01:00.0: enabling device (0100 -> 0102)
        [ +0.530407] systemd-udevd[5965]: starting version 204
        [ +1.141257] mlx4_core 0000:01:00.0: DMFS high rate mode not supported
        [ +0.006462] mlx4_core: device is working in RoCE mode: Roce V1
        [ +0.000001] mlx4_core: gid_type 1 for UD QPs is not supported by the device, gid_type 0 was chosen instead
        [ +0.000001] mlx4_core: UD QP Gid type is: V1
        [ +0.750613] mlx4_core 0000:01:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
        [ +0.000002] mlx4_core 0000:01:00.0: PCIe link width is x8, device supports x8
        [ +0.000080] mlx4_core 0000:01:00.0: irq 44 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 45 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 46 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 47 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 48 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 49 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 50 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 51 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 52 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 53 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 54 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 55 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 56 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 57 for MSI/MSI-X
        [ +0.000002] mlx4_core 0000:01:00.0: irq 58 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 59 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:01:00.0: irq 60 for MSI/MSI-X
        [ +0.822443] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.3-1.0.4 (03 Jul 2016)
        [ +3.135645] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.00 ($DateTime: 2015/07/28 00:13:30 $)
  8. 10Gbe setup - will this work with 6.1?

    It does.

        dmesg -wH
        [ +0.000002] Backport generated by backports.git v3.18.1-1-0-g5e9ec4c
        [ +0.007100] Compat-mlnx-ofed backport release: cd30181
        [ +0.000002] Backport based on mlnx_ofed/mlnx_rdma.git cd30181
        [ +0.000001] compat.git: mlnx_ofed/mlnx_rdma.git
        [ +0.053378] mlx4_core: Mellanox ConnectX core driver v3.3-1.0.4 (03 Jul 2016)
        [ +0.000008] mlx4_core: Initializing 0000:03:00.0
        [ +0.000033] mlx4_core 0000:03:00.0: enabling device (0100 -> 0102)
        [ +0.420818] systemd-udevd[6199]: starting version 204
        [ +1.251641] mlx4_core 0000:03:00.0: DMFS high rate mode not supported
        [ +0.006420] mlx4_core: device is working in RoCE mode: Roce V1
        [ +0.000001] mlx4_core: gid_type 1 for UD QPs is not supported by the device, gid_type 0 was chosen instead
        [ +0.000001] mlx4_core: UD QP Gid type is: V1
        [ +1.253954] mlx4_core 0000:03:00.0: PCIe BW is different than device's capability
        [ +0.000002] mlx4_core 0000:03:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
        [ +0.000001] mlx4_core 0000:03:00.0: PCIe link width is x4, device supports x8
        [ +0.000087] mlx4_core 0000:03:00.0: irq 52 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 53 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 54 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 55 for MSI/MSI-X
        [ +0.000004] mlx4_core 0000:03:00.0: irq 56 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 57 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 58 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 59 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 60 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 61 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 62 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 63 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 64 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 65 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 66 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 67 for MSI/MSI-X
        [ +0.000003] mlx4_core 0000:03:00.0: irq 68 for MSI/MSI-X
        [ +1.150446] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.3-1.0.4 (03 Jul 2016)
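    Side note on that "PCIe BW is different than device's capability" line: the card negotiated x4 even though it supports x8, so it only gets about half the bus bandwidth (which should still be fine for a single 10GbE port, if my math is right). On a regular Linux box the negotiated link can be confirmed with lspci, though I am not sure DSM's cut-down lspci supports -vv:

        lspci -s 03:00.0 -vv | grep -i 'lnkcap\|lnksta'
        # LnkCap = what the card supports, LnkSta = what the slot actually negotiated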
  9. 10Gbe setup - will this work with 6.1?

    Small test: HP SFF, Intel Xeon D-1527. Installed DS3617 V1.02b for DSM 6.1, added an LSI HBA 9207 and a Mellanox MHQH29B-XTR ConnectX-2. The system is bare metal.

    Results: the LSI HBA 9207 card is very transparent; it works fine right out of the box. On the other hand, the Mellanox MHQH29B-XTR ConnectX-2 does not show up under the network interfaces. Over SSH:

        Me@Test:/$ lspci
        0000:00:00.0 Class 0600: Device 8086:0c00 (rev 06)
        0000:00:01.0 Class 0604: Device 8086:0c01 (rev 06)
        0000:00:02.0 Class 0300: Device 8086:0412 (rev 06)
        0000:00:03.0 Class 0403: Device 8086:0c0c (rev 06)
        0000:00:14.0 Class 0c03: Device 8086:8c31 (rev 04)
        0000:00:16.0 Class 0780: Device 8086:8c3a (rev 04)
        0000:00:16.3 Class 0700: Device 8086:8c3d (rev 04)
        0000:00:19.0 Class 0200: Device 8086:153a (rev 04)
        0000:00:1a.0 Class 0c03: Device 8086:8c2d (rev 04)
        0000:00:1b.0 Class 0403: Device 8086:8c20 (rev 04)
        0000:00:1c.0 Class 0604: Device 8086:8c10 (rev d4)
        0000:00:1c.4 Class 0604: Device 8086:8c18 (rev d4)
        0000:00:1d.0 Class 0c03: Device 8086:8c26 (rev 04)
        0000:00:1f.0 Class 0601: Device 8086:8c4e (rev 04)
        0000:00:1f.2 Class 0106: Device 8086:8c02 (rev 04)
        0000:00:1f.3 Class 0c05: Device 8086:8c22 (rev 04)
        0000:01:00.0 Class 0107: Device 1000:0087 (rev 05)
        0000:03:00.0 Class 0c06: Device 15b3:673c (rev b0)
        0001:00:02.0 Class 0000: Device 8086:6f04 (rev ff)
        0001:00:02.2 Class 0000: Device 8086:6f06 (rev ff)
        0001:00:03.0 Class 0000: Device 8086:6f08 (rev ff)
        0001:00:03.2 Class 0000: Device 8086:6f0a (rev ff)
        0001:00:1f.0 Class 0000: Device 8086:8c54 (rev ff)
        0001:00:1f.3 Class 0000: Device 8086:8c22 (rev ff)
        0001:06:00.0 Class 0000: Device 1b4b:1475 (rev ff)
        0001:08:00.0 Class 0000: Device 1b4b:9235 (rev ff)
        0001:09:00.0 Class 0000: Device 8086:1533 (rev ff)
        0001:0c:00.0 Class 0000: Device 8086:1533 (rev ff)
        0001:0d:00.0 Class 0000: Device 8086:1533 (rev ff)
        Me@Test:/$

    Not sure how to test it out any further; I only have this system for testing this weekend, then I have to give it back.
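    The lspci output at least shows the card is enumerated (0000:03:00.0, vendor 15b3). If I get the hardware back, the next thing I would check is whether any driver actually claimed it; on a normal Linux box that would be something like the following, though DSM's lspci may not support -k:

        lspci -k -s 03:00.0      # should report "Kernel driver in use: mlx4_core" if the driver bound
        lsmod | grep mlx4        # are mlx4_core / mlx4_en loaded at all?
        ls /sys/class/net        # did any new ethX appear beyond the onboard NIC?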
  10. Good news: just tested bare metal 1.02b. It works fine; I updated to DSM 6.1.5-15254 and then to DSM 6.1.5-15254 Update 1. Restoring data now (1 day over 1GbE).

    Yesterday I tested ESXi 6.5 on the N54L; even with 16GB it's just too slow. Today I set up a spare Sandy Bridge PC I had for testing. I needed to scrounge around for another stick of memory, so it's a whopping 6GB, but the Core 2 E8500 runs ESXi 6.5 a lot better. I have a Supermicro X9 (Xeon) board, but it has poor USB boot support; I'm working on getting an HBA and will then give it a better test.

    I want to test the ConnectX-3. I see it in Device Manager on Windows 2016 Essentials; I need another card to test with (ESXi 6.5, XPEnology 6.1). I was surprised how well Windows 2016 Essentials runs with an SSD on the N54L. I would really like to test Hyper-V, but with issues like images having to be in ISO format and Samba multichannel still experimental, it seems far off. Thanks for the help. Thoughts?
  11. During a test today I set up DSM 6.1 DS3617 (2/27/2017) and it works properly. I also have an NC360T, not plugged into the switch. Then I did the upgrade to DSM 6.1.5-15254; it went through the whole 10 minutes and I knew something was up. I shut down with the power button. When it came back up: "DiskStation not found". I shut down again and connected the NC360T to the switch. This time it found the NC360T's two NICs. So I logged in and saw the shutdown was not graceful, OK... I noticed the BCM5723 is gone; it only shows the NC360T. I ran the update to DSM 6.1.5-15254 Update 1 and it is still MIA.

    As for the earlier test: I was testing Windows 2016 Essentials, pulled the plug, put my drives back in, and got the wonderful "DiskStation not found" from Synology Assistant. I struggled to get all my new data moved to my other box. This Synology page was helpful: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC So the whole week no XPEnology, until I read your post about Broadcom in German (via Google Translate) saying to make sure C1 is disabled, and that did the trick. Awesome!

    I have always used DS3615 (i3). Today I tested DS3617 (Xeon) by accident; the synoboot is 2/27/2017. It was just a test, and I was surprised that the onboard NIC installed, but when I did the update it disappeared. I was lucky to have the other HP NIC to work around it. I need the onboard NIC because I want to try a Mellanox CX324A ConnectX-3. So if you want me to test it out I can, because I'm between systems at the moment. I've been thinking of changing from bare metal to ESXi so I can run both; there is a good how-to on YouTube right now. Thanks
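    For anyone else who ends up at "DiskStation not found" with data still on the disks, the gist of that Synology article (from memory, so double-check the article itself; device and volume names vary per system) is to boot an Ubuntu live session with the NAS drives attached, assemble the RAID, and mount it read-only:

        # Ubuntu live session, as root
        apt-get update
        apt-get install -y mdadm lvm2
        mdadm -Asf && vgchange -ay        # assemble the arrays and activate any LVM volumes
        cat /proc/mdstat                  # confirm which md device came up
        mount /dev/vg1000/lv /mnt -o ro   # /dev/vg1000/lv is typical, but not guaranteed on every setup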
  12. Food for thought: I have an N54L with an HP NC107i (based on the BCM5723). During a test today I set up DSM 6.1 and it works properly. I also have an NC360T, not plugged into the switch. When I did the upgrade to DSM 6.1.5-15254, it went through the whole 10 minutes and I knew something was up. I shut down with the power button. When it came back up: "DiskStation not found". I shut down again and connected the NC360T to the switch. This time it found the NC360T's two NICs. So I logged in and saw the shutdown was not graceful, OK... I noticed the BCM5723 is gone; it only shows the NC360T. I ran the update to DSM 6.1.5-15254 Update 1 and it is still MIA.
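    If anyone wants to dig into why the BCM5723 vanishes after the update, my guess (only a guess) is that its driver, tg3, is no longer loading on that DSM build. Something along these lines over SSH would show whether the module is still present and bound; the module path is from memory, so treat it as an assumption:

        lspci | grep 14e4           # 14e4 is the Broadcom vendor ID; is the NIC still enumerated?
        lsmod | grep tg3            # is the tg3 driver loaded?
        ls /lib/modules | grep tg3  # is tg3.ko even shipped with this DSM build?
        dmesg | grep -i tg3         # any driver load errors?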
  13. DSM 6.1.4 - 15217

    Successfully updated bare metal HP N54L to DSM 6.1.4-15217
  14. HP N54L - DSM 6.1 onwards

    Ran a test today on the N54L with an NC360T. Installed using the onboard Broadcom NIC, DSM 6.1-15047, set up JBOD with Btrfs. Works fine; I can see 3 network connections. Now the big mystery: the Control Panel showed DSM 6.1.3-15152 was available, so I ran the update. It goes through the ten-minute thing and then says to use Synology Assistant to find the box. Now I can only see two IPs, both from the NC360T. I connect to one of those and access the N54L. It now has DSM 6.1.3-15152 but only 2 network connections; the onboard Broadcom network connection is gone...
  15. HP N54L - DSM 6.1 onwards

    N54L bare metal test: disabled C1E in the BIOS, selected option 3 (force) the first time. Tested with the HP NC360T; Synology Assistant finds both ports, but at 57% it's error 13 over and over. FAILS. Discouraged, so I tested with the onboard NIC, selected Migrate, and it works!!!