Upgrading N54L to 10Gb NIC



I'm currently running an HP NC360T Intel-based Gigabit NIC and would like to upgrade to a Mellanox ConnectX-2 or ConnectX-3 NIC. From what I've read so far, it appears to be best practice to leave a Gigabit port in the system. Also, I've seen a posting elsewhere that iperf3 throughput tops out at 3+Gbps, which is fine for me.
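For what it's worth, numbers like that usually come from a run along these lines (a sketch; the IP address is a placeholder, and iperf3 would have to be installed on both ends):

```shell
# Server side (on the NAS, over SSH):
#   iperf3 -s
# Client side (on another machine; replace the IP with the NAS address):
#   iperf3 -c 192.168.1.50 -P 4 -t 30
# -P 4 runs four parallel streams, which shows the link ceiling better
# than a single stream. This line just confirms the tool is present:
command -v iperf3 >/dev/null && echo "iperf3 found" || echo "iperf3 not installed"
```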

 

Unless I've misunderstood that, is it possible to have the currently unused onboard Broadcom-based NIC play that role, given that there is only one PCIe slot available for an x8 card? And can I go directly from my current config to one where the Intel-based NIC is swapped for a Mellanox-based one?

 

Also, how would I edit the MAC address(es) in the bootloader USB as I transition from NC360T NICs -> onboard NIC -> Mellanox NIC?

 

Thanks in advance.

 

P.S.: Once I'm successful with this, I hope to upgrade my Lenovo TS140 to a 10G NIC too, though that involves ESXi 7.


PCIe x4 is adequate for a 10GbE network card, but there is no reason not to use the Broadcom.  You need to be on 6.2.3, however, to get Broadcom support without a custom extra.lzma.
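To put numbers on that (assuming the N54L's slot runs PCIe 2.0, which gives roughly 500 MB/s of usable bandwidth per lane after encoding overhead):

```shell
# 4 lanes x ~500 MB/s x 8 bits/byte = usable Gbps on an x4 link
echo "$((4 * 500 * 8 / 1000)) Gbps"   # comfortably above the 10 Gbps line rate
```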

 

I get full wire-speed throughput (10GbE through my ConnectX-3), but I have a very high-performance array; the 3+Gbps figure you saw may actually reflect an array limitation rather than the NIC.

 

You don't need to edit the MAC addresses unless you want to be able to wake on LAN, or if you plan to use the Intel NIC elsewhere on your local network (to avoid a MAC collision).  You don't have to sequence it, though; you can change the USB before or after as you wish.


Fortunately, I am on 6.2.3!

 

I have a managed switch with a couple of SFP+ ports and thought I'd try my luck with SFP+ NICs and DAC cables, though now I'm wondering if I should first try link aggregation over a pair of Gigabit connections, since I don't have a high-performance array. (Both servers have multiport NICs.)

 

I need to check whether my switch supports LACP or merely static LAG, and what the support situation is in ESXi 7 and in DSM 6.2.3 on baremetal.
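If I do go the bonding route, I gather DSM exposes the negotiated mode under /proc, so it should be easy to verify over SSH (bond0 is the default name for a first bond; the sample output below is illustrative):

```shell
# On the DSM box the real check would be:
#   grep -i 'bonding mode' /proc/net/bonding/bond0
# Illustrative output for a bond that negotiated LACP; static LAG would
# report a balancing mode instead of 802.3ad:
sample='Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: IEEE 802.3ad Dynamic link aggregation'
printf '%s\n' "$sample" | grep -i 'bonding mode'
```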

 

Decisions, decisions!

 

Thanks for the guidance.


I've ordered the 10G gear (Mellanox ConnectX-3 and DAC cables) and, in the interim, re-enabled the built-in NIC in the BIOS and confirmed that it shows up in the web GUI. Helpfully, it enumerated after the ports on the add-in Intel NIC.

 

Now waiting for the delivery!


I got the Mellanox NIC installed and adjusted grub.cfg with its MAC address, but am waiting for my DAC cable and fiber modules.

 

Will I see the relevant SFP information in the GUI, or do I need to SSH into a shell and use the command line?
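In case the GUI doesn't show it, I gather the module EEPROM can be dumped from a shell (the interface name and the sample fields below are illustrative, not from my box):

```shell
# The real command on DSM, over SSH (eth2 is a placeholder name):
#   ethtool -m eth2
# Typical fields in the dump look like:
sample='Identifier      : 0x03 (SFP)
Transceiver type: 10G Ethernet: 10G Base-SR
Vendor name     : Mellanox'
printf '%s\n' "$sample" | grep 'Vendor name'
```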

 

Thanks.


It did "just work" with the DAC cable, and both the switch and DSM GUIs show a 10G link!

 

My switch has only two SFP+ ports, so I need to figure out which device to connect next.


I have four small/old hard drives in the N54L, so I decided to buy a faster 12TB hard drive to go with my 10G NIC. However, when I replaced the four drives with this one and tried to reboot, I could not get to the web page that asks about a new installation. I even booted off an Ubuntu USB stick to make sure the disk and both the built-in and Mellanox NICs were still accessible; they were.

 

I then put the original hard drives back, but it still won't get on the network.

 

I then flashed a 3615 1.03b bootloader image to a new flash drive, updating the VID:PID, but that won't boot either.

 

What next? Although I am curious why the original setup is no longer working, I am okay with doing a fresh install with just the new drive and restoring the content from one of the other NASes.


Your original setup is not working because the loader is no longer in a usable state.

 

If you install new, always burn a clean, new copy of the loader.  If you are doing this and it still isn't working, a mistake is being made in the loader prep.


I will go through the steps again.

 

Should a 3615 1.03b loader work with the Mellanox in the system with the DAC cable plugged in?

 

Do I need to prep/zero the hard drive in any way?

 

Thanks


So here are the steps I took on a Windows machine to prepare the USB stick:

 

- Downloaded 3615 1.03b bootloader zip file from the repository and extracted synoboot.img

- Used OSFmount to mount partition 0 as a writeable letter drive

- Inserted a flash drive into a USB port and used USBdeview to determine its VID:PID

- Navigated to the grub folder and edited grub.cfg to change the VID:PID, change the serial number, change/add MAC addresses for the Mellanox and the integrated NICs, and uncomment the menu items for AMD

- Saved the file and dismounted the image

- Used Win32 Disk Imager to burn synoboot.img to the flash drive

- Ejected the flash drive, put it into the N54L with a single drive in the leftmost bay, and powered on

- When Jun's menu showed up on the attached monitor, arrowed down to the third/AMD item and hit return

- Waited to see if Synology Assistant picked up this DSM or the DHCP server showed a request/grant

- Neither of the two happened
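For reference, the grub.cfg edits amounted to something like this (every value below is a placeholder, not my real data):

```shell
# Placeholder values only; substitute the VID:PID reported by USBdeview
# and the real MAC addresses of your NICs (no colons, uppercase hex):
set vid=0x0951
set pid=0x1666
set sn=1234XXXXXXXXX
# First MAC = Mellanox port, second = onboard Broadcom:
set mac1=0002C9AABB01
set mac2=AABBCC001122
set netif_num=2
```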

 

I have a blind spot that is causing me to miss something basic.


Make a 1GbE card first in the MAC order.  But frankly, unless you are using wake-on-LAN services, there is no need to change MAC addresses at all.

 

Why are you arrowing down to the "third/AMD" item?  You should be using the first entry for baremetal AFAIK.


My (failing) memory thought that was what I had done originally, but I managed to find some notes that said I should use the first entry, so you are correct.

 

I also decided to compare all the BIOS settings against my notes and discovered that C1E had somehow gotten enabled :-(. A weak CMOS battery, perhaps? I suspect that putting back the old USB stick and hard drives will now work.

 

The Mellanox is first in PCIe enumeration, so the only way to have a 1GbE NIC first is to take the card out and optionally put the Intel NIC back. The good news is that the system booted and I could access the GUI at the Mellanox's IP address. The bad news was that a manual install of DSM 6.2.3 got stuck at 56%. The subsequent good news is that an automatic install of the latest DSM completed and the system is up again!

 

