Question
R2D2
Hello-
I need some help and advice on getting my PCIe cards, specifically the Ableconn dual NVMe adapter, working in DSM. I am not sure whether this is a hardware or software issue.
I have a baremetal install and it is working quite well. Here is the information:
MB: Supermicro X10SLM+-F
CPU: Intel Xeon E3-1270v3
RAM: 32GB
10Gb Mellanox card in PCIe2 slot
Loader: ARPL v1.1-beta2a
Model: DS3622xs+
Build: 42962
DSM 7.1.1-42962 Update 4
Volume 1: 4x 2.5” 1TB SSD, RAIDF1, Btrfs
Volume 2: 2x 8TB Seagate IronWolf, RAID 1, Btrfs
2U rackmount chassis
DSM is showing 6 filled HD slots, and 6 open ones.
I have 2 PCIe 3.0 x8 slots that I want to install the Ableconn PEXM2-130 Dual PCIe NVMe M.2 SSD Adapter Cards in. The manufacturer confirmed that the motherboard does not support bifurcation, which is why these cards have an onboard controller. Although the card supports various flavors of Linux, the ARPL loader is not recognizing it (I only have one card installed at the moment). I have tried reconfiguring and rebuilding the loader, but DSM still does not show the NVMe drives. I am stuck. I will be using the drives for storage, not cache.
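For anyone looking at this: before rebuilding the loader again, it may help to confirm whether the DSM kernel sees the card at all. A minimal diagnostic sketch, assuming SSH root access to the box (these are generic Linux commands; their availability on DSM is not guaranteed, so each step is guarded):

```shell
# Diagnostic sketch: run via SSH as root on the NAS.
# Generic Linux commands, not Synology-specific tools; guarded in case a
# utility is missing from the DSM userland.

# 1) Does the kernel see the NVMe controller on the PCIe bus at all?
if command -v lspci >/dev/null 2>&1; then
  lspci -nn | grep -i -E 'nvme|non-volatile' || echo "no NVMe controller on the bus"
fi

# 2) Did the nvme driver create device nodes?
if ls /dev/nvme* >/dev/null 2>&1; then
  echo "NVMe device nodes present"
else
  echo "no NVMe device nodes found"
fi

# 3) Any probe/bind messages from the nvme driver?
dmesg 2>/dev/null | grep -i nvme | tail -n 20
```

If step 1 shows nothing, the problem is below the driver level (slot, card, or loader PCI handling); if the controller appears but step 2 finds no nodes, it points at a missing or unbound nvme driver in the loader build.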
1) Is there a driver available that would get the controller and card recognized, so I can use the SSDs?
2) Is it possible to keep the baremetal installation and "virtualize" the SSDs through some kind of Docker app?
3) Is there a different DSM model that would work better for me?
4) Should I scrap the baremetal install and virtualize everything instead? If so, which hypervisor should I use? I think I have a registered copy of ESXi lying around, but is that the best one to use?
I thank you in advance for your help!