XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. I have been running XPEnology for many months in a bare-metal configuration.  I started experimenting with enterprise NVMe drives a few weeks ago to address Docker/system IOPS limits, and despite some success getting DSM utilities to recognize them, it's apparent that the core udev functionality Synology uses for hotplug support will prevent any reliable hacking of NVMe volumes into the system.  Hopefully better NVMe drive support will come soon from Synology themselves.

     

    In the meantime, I have drives with 1.5 GB/s write rates and a 1024-deep command queue, so I had better find a way to use them.  The only practical way I can think of while staying on XPEnology is to use a hypervisor to present them as virtual storage to DSM.  So far I have the following working well:

    1. NVMe drives, presented as a VMware SSD (albeit with about a 30% performance penalty vs. native access, tested with hdparm - command shown after this list)
    2. Passthrough of my SATA controller (with spinning disks attached)
    3. Passthrough of my Mellanox 10GbE network card
    4. Passthrough of the USB 3.0 controller (to enable USB printer and UPS support)
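
    The hdparm comparison was a simple sequential-read timing, roughly as below; /dev/sdX is a placeholder for whichever device node is under test:

    hdparm -t /dev/sdX    # buffered (device) read timing
    hdparm -T /dev/sdX    # cached read timing, as a baseline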

     

    However, there are two devices visible that I think should not be:

    • ESXi is installed on a USB stick.  Once the USB controller was passed through, the stick and its partitions became visible in DSM.  The VID/PID match in grub.cfg (see the sketch after this list) won't hide it because it isn't the synoboot device.  How can I hide it?  (As an aside, I thought that ESXi might lose its partitions after launching the VM with USB passthrough, but it seems to work - because it is the boot device?)
    • I cannot find any way to hide the loader vmdk synoboot drive.  Is there a conclusive method of doing this?  I see lots of posts and have tried many suggestions, but nothing seems to work.
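
    For context, the only device-hiding hook I know of is the VID/PID pair in the loader's grub.cfg, which tags a single USB device as synoboot.  It looks roughly like this (the hex values are illustrative placeholders, normally set to match your own stick):

    set vid=0x058f    # USB vendor ID of the synoboot stick (placeholder value)
    set pid=0x6387    # USB product ID of the synoboot stick (placeholder value)

    Since this matches exactly one VID/PID pair, it does nothing for a second USB device like the ESXi stick, or for a vmdk presented as a SATA disk.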

     

    Thanks for your help.

  2. ds3615xs is an older x86 architecture and has the most package support.

    ds3617xs is a slightly newer x86 architecture and has better native add-in card support since it has an exposed PCI slot, but extra.lzma mostly covers the bases on all platforms.

    ds916 has the most recent x86 instruction set and Quick Sync support, if you want to hardware transcode and have compatible hardware.

     

  3. On 12/13/2017 at 11:57 AM, 4sag said:

    nvme support ?

    
    lspci -k | grep 'Kernel driver'
            Kernel driver in use: pcieport
            Kernel driver in use: pcieport
            Kernel driver in use: ehci-pci
            Kernel driver in use: pcieport
            Kernel driver in use: pcieport
            Kernel driver in use: pcieport
            Kernel driver in use: pcieport
            Kernel driver in use: ehci-pci
            Kernel driver in use: lpc_ich
            Kernel driver in use: ahci
            Kernel driver in use: hpilo
            Kernel driver in use: uhci_hcd
            Kernel driver in use: tg3
            Kernel driver in use: tg3
            Kernel driver in use: xhci_hcd
            Kernel driver in use: nvme
    
    fdisk -l
    Disk /dev/nvme0n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    In Synology DSM ds3615xs 6.1.4-15217.3 there is no NVMe disk!

    256 GB M.2 SSD drive, Intel 600p [SSDPEKKW256G7]


    No Synology build currently has disk support for NVMe devices.  The FSxxxx models only support SATA/SAS, and the NVMe slots on the DS918+ and the M.2 SATA slots on the M2D17 are only for SSD cache (they show up as a Cache Device, not as a disk), at least for now.  The output above is evidence that the hardware is being recognized and the device is working, with the driver loaded from extra.lzma.  DSM simply is not seeing it as a disk, probably because the device is named /dev/nvme* while DSM's internal scripting looks for /dev/sd*.  IG88, please keep the nvme module in!
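
    As a rough illustration of what I mean (standard Linux commands run over SSH; the interpretation of DSM's behavior is my assumption):

    ls /dev/nvme*                       # device nodes created by the nvme kernel module
    ls /dev/sd*                         # the SATA/SAS names DSM's disk scripts actually enumerate
    cat /proc/partitions | grep nvme    # the kernel still exposes the NVMe device and its partitions
    dmesg | grep -i nvme                # confirms the driver from extra.lzma attached to the drive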

     

    I am motivated to find a hack to recognize NVMe as a disk, as I have ordered some NVMe drives I would like to use for system/swap/Docker.  I will document it in a new thread rather than clutter this one up with discussion about it.

     

  4. - Outcome of the update: SUCCESSFUL

    - DSM version prior update: DSM 6.1.4-15217U4

    - Loader version and model: JUN'S LOADER v1.02b - DS3617xs

    - Installation type: baremetal, see sig

    - Additional comments: manually shut down 6 Docker containers via the UI prior to the upgrade; no Docker issues

  5. Baremetal Jun 1.02b DS3617xs / Supermicro X11SSH-F / E3-1235Lv5 / 32GB RAM

     

    Upgraded from 6.1.3-15152U8 to 6.1.4-15217U1 via DSM.  A reboot was required.  No initial issues noted.

    A new Docker package was then offered via Package Center; it was also upgraded and the containers were functionally verified.  A final, manual reboot (after all upgrades were complete) was uneventful.

  6. I've been testing several DSM builds installed bare-metal using the Jun 1.02b loader.  The system is a Supermicro X11SSH-F with a Xeon E3-1235L v5 Skylake CPU, which has Quick Sync support.

     

    If I use the DS3617xs image, everything works great, including my Mellanox ConnectX-2 dual-port 10GbE network card.  But there is no kernel support for Quick Sync.

    If I use the DS916+ image, everything works great except my Mellanox card, and I can see that /dev/dri is present and active.
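
    For reference, this is how I'm checking on the DS916+ build (standard Linux commands over SSH, nothing DSM-specific):

    ls -l /dev/dri       # card0 (and renderD128 on newer kernels) appear when the GPU driver is active
    lsmod | grep i915    # the Intel graphics module Quick Sync relies on (may be built into the kernel, in which case nothing is listed)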

     

    I'm guessing that Mellanox support is built natively into the DS3617xs code base as a supported add-on card, but not into the DS916+ code.  It looks like I can add additional driver support using a custom ramdisk, but the references to this in the install instructions have been removed for the time being.  Obviously I'd like to get both hardware transcoding and Mellanox support going at once.

     

    Can anyone point me in the right direction?  Sorry for the noob request, and thanks

  7. I had a similar problem experimenting with VMM on DSM 6.1.3 Update 6.

    I haven't tried Windows, but I tried Ubuntu (latest), Lubuntu, and Debian without success.  All failed with a panic.

     

    On the Synology page for the VMM beta, there is a link to the supported OSes.  Based on this, I loaded the CentOS 7.3 (1611) ISO, and it installed and ran fine.  So perhaps you need the specific versions they have identified for the beta code.

     

    FWIW, I have a Supermicro X11SSH-F with an E3-1235L v5.
