XPEnology Community

Posts posted by everwisher

  1. 2 hours ago, IG-88 said:

    Thank you for replying.

     

    I cannot use this yet, as I reverted from DS3622xs+ back to DS918+, and the /etc.default/.extensionPort file doesn't exist in the DS918+ firmware. I believe the modification of libsynonvme.so.1 still applies to DS918+, but the problem is said to be that the emulated NVMe block device is presented to DSM with "Available Spare: 0%", which I believe is the crux of the issue.

     

    That being said, it seems we can currently only pass through a physical NVMe device to a virtualized DSM.
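    To see what the guest actually reports for the emulated drive, the SMART log can be queried with nvme-cli (a sketch, assuming nvme-cli is installed and the device appears as /dev/nvme0n1):

      # dump the NVMe SMART log and look at the spare-capacity fields;
      # per the symptom above, DSM is said to see "available_spare" as 0%
      sudo nvme smart-log /dev/nvme0n1 | grep -i spare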

  2. 1 hour ago, IG-88 said:

    you might be overcomplicating things; just create a virtual volume on the nvme in kvm and declare it a (sata) ssd for the xpenology vm

    it would even be possible to have one real nvme, create two virtual ssd's and use them as a write-capable cache (kind of defeats the intention syno had, but as long as you keep that in mind ...)

    There are some constraints here:

    1. I have 3 SSDs of different capacities, i.e. 250 GB, 512 GB, and 800 GB. Borrowing the idea from Synology SHR, I split the 512 GB and 800 GB drives into 250 GB segments and made two RAID0 arrays, with the remaining part of the 800 GB SSD kept for the host's use. The software RAID0 arrays are grouped into a volume group and then carved into logical volumes (a sketch of this layout follows after this list). This configuration has been running for quite a while, and I cannot take it apart just to pass through an NVMe SSD for DSM.

     

    2. I used a SATA-backed LV as cache for DSM, but the emulated nature of SATA may be restricting the transfer rate, so I'm looking for an NVMe-emulated solution. I remember seeing benchmark results showing that the transfer rate of an emulated NVMe device is inferior only to paravirtualized SCSI, and far superior to emulated SATA.
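    As promised above, a minimal sketch of that layout with hypothetical device names (sda = 250 GB, sdb = 512 GB, sdc = 800 GB; the 250 GB partitions are assumed to exist already, and the segment-to-array assignment is illustrative):

      # two RAID0 arrays built from equal-sized 250G segments
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
      mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc1

      # group both arrays into one LVM volume group and carve out LVs
      pvcreate /dev/md0 /dev/md1
      vgcreate ssdvg /dev/md0 /dev/md1
      lvcreate -L 100G -n vm-201-cache ssdvg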

  3. In Proxmox, I emulated an NVMe disk and passed it through to a DSM guest using the following QEMU options:

      -drive 'file=/dev/mapper/pve-vm--201--disk--1,if=none,id=nvm' \
      -device 'nvme,serial=deadbeef,drive=nvm'
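    To make this persist across VM restarts, the same options can also live in the VM's Proxmox config via the args: key, which passes raw arguments straight to QEMU (a sketch, assuming VM ID 201):

      # /etc/pve/qemu-server/201.conf
      args: -drive file=/dev/mapper/pve-vm--201--disk--1,if=none,id=nvm -device nvme,serial=deadbeef,drive=nvm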

    After starting the DSM guest, the disk /dev/nvme0n1 can be seen. The partition summary below comes from sudo fdisk -l /dev/nvme0n1, and the device properties from:

    sudo udevadm info /dev/nvme0n1

    The following information was printed:

    Quote

    Disk /dev/nvme0n1: 256 GiB, 274877906944 bytes, 536870912 sectors
    Disk model: QEMU NVMe Ctrl                          
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    P: /devices/pci0000:00/0000:00:03.0/nvme/nvme0/nvme0n1
    N: nvme0n1
    E: DEVNAME=/dev/nvme0n1
    E: DEVPATH=/devices/pci0000:00/0000:00:03.0/nvme/nvme0/nvme0n1
    E: DEVTYPE=disk
    E: MAJOR=259
    E: MINOR=0
    E: PHYSDEVBUS=pci
    E: PHYSDEVDRIVER=nvme
    E: PHYSDEVPATH=/devices/pci0000:00/0000:00:03.0
    E: SUBSYSTEM=block
    E: SYNO_ATTR_SERIAL=deadbeef
    E: SYNO_DEV_DISKPORTTYPE=UNKNOWN
    E: SYNO_INFO_PLATFORM_NAME=apollolake
    E: SYNO_KERNEL_VERSION=4.4
    E: SYNO_SUPPORT_XA=no
    E: TAGS=:systemd:
    E: USEC_INITIALIZED=366408

    Then I modified the `/lib64/libsynonvme.so.1` file using a hex editor in Visual Studio Code, changing the DS918+ NVMe addresses from 0000:00:13.0 and 0000:00:13.1 to 0000:00:03.0 and 0000:00:03.1 respectively.
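    If the addresses are stored as plain text inside the library, the same patch can be applied from a root shell (a sketch; the replacement string is the same length as the original, so file offsets are preserved; back up the file first):

      # keep a backup of the untouched library
      cp /lib64/libsynonvme.so.1 /lib64/libsynonvme.so.1.bak
      # swap the hardcoded PCI address prefix 0000:00:13 -> 0000:00:03,
      # which covers both the .0 and .1 functions in one pass
      sed -i 's/0000:00:13/0000:00:03/g' /lib64/libsynonvme.so.1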

     

    However, when I rebooted DSM, no NVMe SSD appeared in the Storage Manager.

     

    Does anyone have any clue about resolving this?

  4. Thanks for the reply. I tried to undervolt my CPU from the motherboard UEFI, only to end up with a black screen and a forced BIOS recovery. I'll keep working on it to see whether this issue has something to do with the ES CPU.

    On 3/12/2021 at 2:57 AM, smileyworld said:

    I don't think so. I am running DSM bare-metal on an i3-8100, and as far as I know the hardware choice is where you can save energy most easily. Meaning: you can lower consumption mostly by choosing a power-efficient CPU and putting only the drives you need in your NAS, because every HDD needs power as well. You could, however, undervolt your CPU slightly if the motherboard allows it. I hooked a power consumption measuring device (myStrom WiFi) between my NAS and my wall socket. It is a pretty cool device which offers an API to grab the current power draw and can be integrated into Netdata (if you are into that).

     

    I attached a screenshot of the power consumption; however, keep in mind that my MainNAS is currently equipped with a combination of 14 HDDs/SSDs.

    (attached screenshot: Bildschirmfoto 2021-03-11 um 19.53.44.png)
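    For anyone wanting to replicate that monitoring, the myStrom switch exposes a simple HTTP report endpoint (a sketch, assuming the device is reachable at 192.168.1.50; the endpoint and field names follow myStrom's published REST API):

      # the report endpoint returns JSON including the instantaneous
      # power draw in watts, ready for scraping into Netdata or similar
      curl http://192.168.1.50/report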

     

  5. I had some spare hardware, and I built it into a virtualized XPEnology NAS on QEMU with the onboard SATA controller passed through, which keeps me clear of incompatibility worries, because I'm a noob.

     

    Now my build is running with an i7-8700T ES CPU, 32 GB of DDR3-1866 RAM, a modded Z170 motherboard, a WD SN750 SSD for the host (with two LVs used as a simulated RAID1 SSD cache for DSM), and four ST8000DM004 8 TB 5400 rpm hard drives. On average my system consumes 70~90 W.

     

    I wonder: if I turn this machine into a bare-metal DSM host, will there be a cut in power consumption, on account of potentially better-managed hard drive spin-down or something similar?
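    For a quick baseline on spin-down, the standby timer can be inspected and set with hdparm from a shell on whichever system currently owns the drives (a sketch; /dev/sda is a placeholder for one of the ST8000DM004 drives, and -S 241 encodes a 30-minute timeout in hdparm's scheme):

      # check whether the drive is currently spun down
      sudo hdparm -C /dev/sda
      # set the standby (spin-down) timeout to 30 minutes
      sudo hdparm -S 241 /dev/sda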
