unmesh · Posted January 29, 2021 · #1
I'm currently running an HP NC360T Intel-based Gigabit NIC and would like to upgrade to a Mellanox ConnectX-2 or -3 NIC. From what I've read so far, it appears to be best practice to leave a Gigabit port in the system. I've also seen a posting elsewhere that iperf3 throughput tops out at 3+ Gbps, which is fine for me. Unless I've misunderstood that, is it possible to have the currently unused onboard Broadcom-based NIC play that role, given that there is only one PCIe slot available for an x8 card? And can I go directly from my current config to one where the Intel-based NIC is swapped for a Mellanox-based one? Also, how would I edit the MAC address(es) on the bootloader USB as I transition from NC360T NICs -> onboard NIC -> Mellanox NIC? Thanks in advance.
P.S.: Once I'm successful with this, I hope to upgrade my Lenovo TS140 to a 10G NIC too, though that involves ESXi 7.
flyride · Posted January 29, 2021 · #2
PCIe x4 is adequate for a 10GbE network card, but there is no reason not to use the Broadcom. You need to be on 6.2.3, however, to get Broadcom support without a custom extra.lzma. I get full wire-speed throughput (10GbE through my ConnectX-3), but I have a very high-performance array, which may actually be the limitation cited. You don't need to edit the MAC addresses unless you want wake-on-LAN, or if you plan to use the Intel NIC elsewhere on your local network (to avoid a MAC collision). You don't have to sequence it, though; you can change the USB before or after as you wish.
unmesh · Posted January 29, 2021 · #3
Fortunately, I am on 6.2.3! I have a managed switch with a couple of SFP+ ports and thought I'd try my luck with SFP+ NICs and DAC cables, though now I'm wondering if I should first try link aggregation over a pair of Gigabit connections since I don't have a high-performance array. (Both servers have multiport NICs.) I need to check whether my switch supports LACP or merely static LAG, and what the support situation is on ESXi 7 and on DSM 6.2.3 on bare metal. Decisions, decisions! Thanks for the guidance.
unmesh · Posted January 30, 2021 · #4 (edited)
I've ordered the 10G gear (Mellanox ConnectX-3 and DAC cables) and, in the interim, re-enabled the built-in NIC in the BIOS and confirmed that it shows up in the web GUI. It also enumerated after the Intel ports on the add-in NIC, which was helpful. Now waiting for the delivery!
Edited January 30, 2021 by unmesh (clarifications)
unmesh · Posted February 5, 2021 · #5
I got the Mellanox NIC installed and adjusted grub.cfg with its MAC address, but am waiting for my DAC cable and fiber modules. Will I see the relevant SFP information in the GUI, or do I need to ssh into a shell and use the command line? Thanks.
flyride · Posted February 5, 2021 · #6
My ConnectX-3 does not tell me any SFP/DAC information.
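For anyone who wants to try from the shell anyway, ethtool can often read link state and module EEPROM data when the driver supports it. This is a hedged sketch; `eth4` is a placeholder for whichever interface name DSM assigns the Mellanox port, and whether `-m` returns data depends on the driver and firmware:

```shell
# Identify which interface is the Mellanox port (driver should show mlx4_en)
ethtool -i eth4

# Link state and negotiated speed on that port
ethtool eth4 | grep -E 'Speed|Link detected'

# SFP+/DAC module EEPROM, if the driver exposes it
# (vendor, part number, cable length, connector type)
ethtool -m eth4
```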
unmesh · Posted February 5, 2021 · #7
OK, hopefully the link will "just work"! Thanks.
unmesh · Posted February 6, 2021 · #8
It did "just work" with the DAC cable; the switch and DSM GUIs both show a 10G link! My switch has only two SFP+ ports, so I need to figure out which device to connect next.
flyride · Posted February 6, 2021 · #9
How many 10GbE clients do you have? I just use a dual-port Mellanox in the DSM server and DAC to the two clients that I want 10GbE from. No switch.
unmesh · Posted February 7, 2021 · #10
You mean in a direct-connect triangle topology? Did you have to manually configure which client is where, or did the OSes figure it out by themselves? Thanks.
flyride · Posted February 7, 2021 · #11
The easiest way for that is to add a hostname override for the NAS on each 10GbE client, pointing to the NAS address on the network servicing the port that client is attached to.
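As a concrete sketch (all addresses and the name `nas` below are made up for illustration): give each point-to-point link its own small subnet, then override the NAS name in each client's hosts file so traffic to that name takes the 10GbE path:

```shell
# On client A, attached to the NAS's first 10GbE port (10.10.1.0/24 link),
# add to /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows):
10.10.1.1   nas

# On client B, attached to the second 10GbE port (10.10.2.0/24 link):
10.10.2.1   nas
```

Each client then reaches `nas` over its own direct link, while the 1GbE network can still carry everything else.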
unmesh · Posted February 9, 2021 · #12 (edited)
I have 4 small/old hard drives in the N54L, so I decided to buy a 12TB, faster hard drive to go with my 10G NIC. However, when I replaced the 4 drives with this one and tried to reboot, I could not get to the web page that asks me about a new installation. I even booted off an Ubuntu USB stick to make sure the disk and the built-in and Mellanox NICs were still accessible; they were. I then put the original hard drives back, but it still won't get on the network. I then flashed a 3615 1.03b bootloader image to a new flash drive with a new VID:PID, but that won't boot either. What next? Although I am curious why the original setup is no longer working, I am okay with doing a fresh install with just the new drive and restoring the content from one of the other NASes.
Edited February 9, 2021 by unmesh (clarifications)
flyride · Posted February 9, 2021 · #13
Your original setup is not working because the loader is no longer in a usable state. If you install new, always burn a clean, new copy of the loader. If you are doing this and it still isn't working, a mistake is being made in the loader prep.
unmesh · Posted February 9, 2021 · #14
I will go through the steps again. Should a 3615 1.03b loader work with the Mellanox in the system and the DAC cable plugged in? Do I need to prep/zero the hard drive in any way? Thanks.
unmesh · Posted February 9, 2021 · #15
So here are the steps I took on a Windows machine to prepare the USB stick:
- Downloaded the 3615 1.03b bootloader zip file from the repository and extracted synoboot.img
- Used OSFMount to mount partition 0 as a writeable drive letter
- Inserted a flash drive into a USB port and used USBDeview to determine its VID:PID
- Navigated to the grub folder and edited grub.cfg to change the VID:PID, change the serial number, change/add MAC addresses for the Mellanox and the integrated NICs, and uncomment the menu items for AMD
- Saved the file and dismounted the image
- Used Win32 Disk Imager to burn synoboot.img to the flash drive
- Ejected the flash drive, put it into the N54L with a single drive in the leftmost bay, and powered on
- When Jun's menu showed up on the attached monitor, arrowed down to the third/AMD item and hit return
- Waited to see whether Synology Assistant picks up the DSM or the DHCP server shows a request/grant
- Neither of the two happens
I have a blind spot that is causing me to miss something basic.
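For reference, the grub.cfg edits in the steps above typically touch a handful of variables near the top of the file in Jun's 1.03b loader. The values below are placeholders for illustration only, not working ones:

```shell
# grub/grub.cfg on the loader's first partition (illustrative values)
set vid=0x0781          # USB stick vendor ID, as reported by USBDeview
set pid=0x5572          # USB stick product ID
set sn=XXXXXXXXXXXX     # DSM serial number (placeholder)
set mac1=001122334455   # first NIC's MAC address, no colons
set mac2=001122334456   # second NIC's MAC address (optional)
```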
flyride · Posted February 9, 2021 · #16
Put a 1GbE card first in the MAC order. But frankly, unless you are using wake-on-LAN services, there is no need to change MAC addresses at all. Why are you arrowing down to the "third/AMD" item? You should be using the first entry for baremetal, AFAIK.
unmesh · Posted February 9, 2021 · #17 (edited)
My (failing) memory thought that was what I had done originally, but I managed to find some notes that said I should use the first entry, so you are correct. I also decided to compare all the BIOS settings against my notes and discovered that C1E had somehow gotten enabled. A weak CMOS battery, perhaps? I suspect that putting back the old USB stick and hard drives will now work. The Mellanox is first in PCIe enumeration, so the only way to have a 1GbE NIC first is to take the card out and optionally put the Intel NIC back. The good news is that the system booted and I could access the GUI at the Mellanox's IP address. The bad news was that a manual install of DSM 6.2.3 got stuck at 56%. The subsequent good news is that an automatic install of the latest DSM completed and the system is up again!
Edited February 9, 2021 by unmesh (added late-breaking status)