XPEnology Community

R2D2

  1. Ok, I figured out how to SSH into the server. I think I'm getting closer. Meanwhile, I learned how to cut and paste from PowerShell. Here are the results. Can you tell me what the next steps would be? It looks like it has the same address for the two drives. Thanks.

     Superuser@DiskStation:/$ cat /etc.defaults/extensionPorts
     [pci]
     pci1="0000:00:01.1"
     pci2="0000:00:01.1"

     Superuser@DiskStation:/$ udevadm info /dev/nvme0n1
     P: /devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:00.0/0000:04:00.0/nvme/nvme0/nvme0n1
     N: nvme0n1
     E: DEVNAME=/dev/nvme0n1
     E: DEVPATH=/devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:00.0/0000:04:00.0/nvme/nvme0/nvme0n1
     E: DEVTYPE=disk
     E: MAJOR=259
     E: MINOR=0
     E: PHYSDEVBUS=pci
     E: PHYSDEVDRIVER=nvme
     E: PHYSDEVPATH=/devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:00.0/0000:04:00.0
     E: SUBSYSTEM=block
     E: SYNO_ATTR_SERIAL=S46ENB0K506332R
     E: SYNO_DEV_DISKPORTTYPE=INVALID
     E: SYNO_INFO_PLATFORM_NAME=broadwellnk
     E: SYNO_KERNEL_VERSION=4.4
     E: SYNO_SUPPORT_USB_PRINTER=yes
     E: SYNO_SUPPORT_XA=no
     E: TAGS=:systemd:
     E: USEC_INITIALIZED=801762

     Superuser@DiskStation:/$ udevadm info /dev/nvme1n1
     P: /devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:08.0/0000:06:00.0/nvme/nvme1/nvme1n1
     N: nvme1n1
     E: DEVNAME=/dev/nvme1n1
     E: DEVPATH=/devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:08.0/0000:06:00.0/nvme/nvme1/nvme1n1
     E: DEVTYPE=disk
     E: MAJOR=259
     E: MINOR=1
     E: PHYSDEVBUS=pci
     E: PHYSDEVDRIVER=nvme
     E: PHYSDEVPATH=/devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:08.0/0000:06:00.0
     E: SUBSYSTEM=block
     E: SYNO_ATTR_SERIAL=S46ENB0K505780K
     E: SYNO_DEV_DISKPORTTYPE=INVALID
     E: SYNO_INFO_PLATFORM_NAME=broadwellnk
     E: SYNO_KERNEL_VERSION=4.4
     E: SYNO_SUPPORT_USB_PRINTER=yes
     E: SYNO_SUPPORT_XA=no
     E: TAGS=:systemd:
     E: USEC_INITIALIZED=802840
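     One way to read the udevadm output above: both namespaces sit behind the same upstream root port (0000:00:01.1), which is why extensionPorts shows a duplicate entry, but the controllers themselves are distinct endpoints further down the tree. A small Python sketch that pulls the endpoint address out of each DEVPATH (the paths are copied from the output above):

     ```python
     import re

     def nvme_endpoint(devpath: str) -> str:
         """Return the PCI address of the NVMe controller itself, i.e. the
         last PCI function in the sysfs path before the nvme subsystem."""
         addrs = re.findall(r"[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]", devpath)
         return addrs[-1]

     # DEVPATH values from the udevadm output above
     nvme0 = "/devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:00.0/0000:04:00.0/nvme/nvme0/nvme0n1"
     nvme1 = "/devices/pci0000:00/0000:00:01.1/0000:02:00.0/0000:03:08.0/0000:06:00.0/nvme/nvme1/nvme1n1"

     print(nvme_endpoint(nvme0))  # 0000:04:00.0
     print(nvme_endpoint(nvme1))  # 0000:06:00.0
     ```

     So the two drives do not share an address; they share an upstream port, which may simply be how the loader recorded them.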
  2. I don't think I have the file or the folder:

     root@arpl:# cat /etc.defaults/extensionPorts
     Result: No such file or directory

     Do I need to create the etc.defaults directory and create the extensionPorts file in vi? And another question I thought of: since Tiny Core is running from RAM, how do I get it saved on the USB stick? Would I be better off turning off the server, then creating/editing the USB stick from a live Linux CD, then reinstalling? Thanks.
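     If the file really is absent, creating it by hand is mechanically simple; whether hand-creating it is the right fix on an ARPL install is an open question, so this is only a sketch. It writes to /tmp for safety (on a real box the path would be /etc.defaults/extensionPorts), and the two address values are placeholders copied from the stock dump quoted elsewhere in this thread, not known-good values for this card:

     ```shell
     # Sketch only: create an extensionPorts file with the stock INI layout.
     # Writing under /tmp for safety; adjust the path on a real system.
     mkdir -p /tmp/etc.defaults
     cat > /tmp/etc.defaults/extensionPorts <<'EOF'
     [pci]
     pci1="0000:00:01.1"
     pci2="0000:00:01.1"
     EOF
     cat /tmp/etc.defaults/extensionPorts
     ```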
  3. Thank you for that link. I am getting closer. Here is where I am at: I confirmed with the card manufacturer that my two SSDs, Samsung 970 EVOs, are supported. I ran "udevadm info /dev/nvme1n1" and found one of the drives. I did the same for nvme0n1 and found the other drive, confirmed by the different serial numbers. Both results show 0000:00:01.1, so I don't know if that's a problem. Further along in the string of numbers, though, one has 0000:03:08.0 and the other has 0000:03:00.0. But I can't find where the file /etc.defaults/extensionPorts is located; I am getting "no such file or directory found". This is where I am stuck now. It has been quite a few years since I tinkered under the hood of Linux. I am sure it is something simple, but I just don't know the next step. Please advise, thanks.
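     The shared 0000:00:01.1 is not necessarily a problem: in the sysfs paths it is just the first bridge both devices hang off, and the paths diverge further down, each drive behind its own downstream port. A short Python sketch comparing the two paths from the udevadm output:

     ```python
     # PCI path components for the two namespaces (from the udevadm output)
     path0 = "0000:00:01.1/0000:02:00.0/0000:03:00.0/0000:04:00.0".split("/")
     path1 = "0000:00:01.1/0000:02:00.0/0000:03:08.0/0000:06:00.0".split("/")

     # Bridges common to both drives: the shared upstream part of the tree
     shared = [a for a, b in zip(path0, path1) if a == b]
     print(shared)  # ['0000:00:01.1', '0000:02:00.0']

     # Where the tree forks: each drive sits behind its own downstream bridge
     print(path0[2], "->", path0[3])  # 0000:03:00.0 -> 0000:04:00.0
     print(path1[2], "->", path1[3])  # 0000:03:08.0 -> 0000:06:00.0
     ```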
  4. Hello. I need some help and advice on how to get my PCIe cards working in DSM, specifically the Ableconn dual NVMe adapter. I am not really sure if this is a hardware or software issue. I have a baremetal install and it is working quite well. Here is the information:

     MB: Supermicro X10SLM+-F
     CPU: Intel Xeon E3-1270 v3
     RAM: 32GB
     NIC: 10Gb Mellanox card in PCIe slot 2
     Loader: ARPL v1.1-beta2a
     Model: DS3622xs+
     Build: 42962
     DSM: 7.1.1-42962 Update 4
     Volume 1: 4x 2.5" 1TB SSD, RAID F1, Btrfs
     Volume 2: 2x 8TB Seagate IronWolf, RAID 1, Btrfs
     Chassis: 2U rackmount

     DSM is showing 6 filled HD slots and 6 open ones. I have two PCIe 3.0 x8 slots in which I want to install the Ableconn PEXM2-130 dual PCIe NVMe M.2 SSD adapter cards. I have confirmed with the manufacturer that the motherboard does not support bifurcation, hence these cards have an onboard controller. Despite the card's support for different flavors of Linux, the ARPL loader is not recognizing the card (I only have one card installed at the moment). I have tried to reconfigure and rebuild the loader, but DSM is not showing the NVMe drives. I am stuck. I will be using the drives for storage, not caches.

     1) Is there a driver available to fix the controller and card recognition, so I can use the SSDs?
     2) Is it possible to keep the baremetal installation and "virtualize" the SSDs through some kind of Docker app?
     3) Is there a different model number that would work better for me?
     4) Should I scrap the baremetal install and virtualize everything? If so, which hypervisor should I use? I think I have a registered copy of ESXi lying around, but is that the best one to use?

     I thank you in advance for your help!
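     For a card like this, whose onboard controller is a PCIe switch, the first thing worth checking from a shell is whether the switch and the controllers behind it enumerate at all. A sketch of what healthy lspci output might look like for this topology; the lines below are illustrative samples built from the bus layout reported later in the thread, not output from this machine, and the exact bus numbers and device names will differ:

     ```shell
     # Illustrative sample of lspci output for a switch-based NVMe carrier card.
     # On the real system, run:  lspci | grep -Ei 'bridge|non-volatile'
     lspci_sample='02:00.0 PCI bridge: onboard switch, upstream port
     03:00.0 PCI bridge: switch downstream port 1
     03:08.0 PCI bridge: switch downstream port 2
     04:00.0 Non-Volatile memory controller: Samsung Electronics NVMe SSD
     06:00.0 Non-Volatile memory controller: Samsung Electronics NVMe SSD'

     # Two NVMe endpoints should be counted if the switch is enumerated:
     printf '%s\n' "$lspci_sample" | grep -ci 'non-volatile'
     ```

     If lspci shows the bridges and controllers but /dev/nvme* stays empty, the issue is on the driver/loader side rather than the hardware side.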