XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. Well, you can do that, but it seems like a waste if you already have ports available.  I have never heard of anyone needing to do this, and the native DSM drivers are undoubtedly all for Intel-chipset USB, so you may not get much useful advice on a working card for passthrough.

  2. Frankly, today the way to do this is to encapsulate the functionality in Docker containers rather than modify the standard DSM environment. The beauty of Docker on DSM is that you can extend Synology filesystem access (and speed) directly into the container so there really is no performance downside.  If that interests you, you could easily do all your Docker dev/test in XPenology and then have very high confidence it would function exactly the same ported over to other DSM versions and Syno platforms.
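    As a minimal sketch of that filesystem extension (the image, service name, and share path below are illustrative, not from any particular setup), a Docker Compose file on DSM might bind-mount a shared folder straight into the container:

```yaml
# docker-compose.yml sketch: /volume1/devdata (a DSM shared folder) is
# bind-mounted at /data, so the container reads and writes the Synology
# filesystem directly, with no copy and no network hop.
services:
  devbox:
    image: debian:stable-slim   # any base image works; illustrative
    volumes:
      - /volume1/devdata:/data
    command: sleep infinity     # keep the dev/test container alive
```

    Because the bind mount is just the host filesystem, the same compose file ported to another DSM box behaves identically, which is the point made above.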

     

    I used to run optware and never had any real issues (other than a version upgrade overwriting the optware startup, which was easy to restore).  Since it installs its own package versions in its own directory tree, and because the standard shell doesn't have optware enabled, compatibility worked out pretty well across upgrades.  But again, I've pretty much moved everything I wanted to do in the native shell/optware to Docker and won't look back now.

     

    1 hour ago, ariel said:

    It is of course possible to re-apply all of these customizations after an upgrade. But what worries me most is the possibility that something will prevent the upgrade from completing successfully on the production unit. 

     

    That said, I'd be confident using XPenology to model out your upgrade and functionality plan.  Again, I never encountered problems upgrading due to optware, except for the rc entries to start optware.  However, that advice is worth exactly what you paid for it :-)

     

    Good luck.

  3. The short answer is no: you can't do what you want by simulating different Syno hardware.

     

    You can only use the specific PAT files (and major DSM versions) that each XPenology loader is built for.  Any other PAT file won't function.  Yes, serials/MACs are coded to hardware types.

     

    Internal to DSM, the model string is just informative. You won't be changing the behavior of DSM by setting it.  The PAT files may start with the same base code, but they are compiled for each target hardware platform and can differ significantly from model to model, including the versions and types of Synology utilities.

     

    To your last question, I imagine that it depends on the nature of your customization.  You still may be able to use XPenology to test a newer DSM.  Can you provide some specifics?  Some knowledgeable folks here might have some advice on that.

     

    1. First, turn off autoupdate in DSM if you haven't done so already.
    2. If and when you see a version update you think you'd like to apply, check here for the EXACT version you're looking for.  As XPenology users upgrade, they will often post their experience, issues and resolutions.  Sometimes a new DSM will require a new loader, which may take a while to create and test.  Sometimes the loader you are using still works with the new update, but not with all hardware, or only with specific workarounds.
    3. Many XPenology users maintain a test device (virtualization helps with this immensely) to help validate their personal configuration with an update before committing to their main system.
    4. As always, keep your stuff backed up somewhere else in case of a negative outcome.
  4. Hint: Repeat the test a number of times and report the median value.
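    The hint above can be sketched as a tiny helper (the function name and sample values are illustrative):

```shell
# median: print the median of its numeric arguments (POSIX sh + awk).
median() {
  printf '%s\n' "$@" | sort -n | awk '
    { a[NR] = $1 }
    END {
      # odd count: middle sample; even count: mean of the two middle samples
      m = (NR % 2) ? a[(NR + 1) / 2] : (a[NR / 2] + a[NR / 2 + 1]) / 2
      print m
    }'
}

# Five illustrative dd throughput samples (MB/s):
median 157 149 162 158 151   # → 157
```

    Run the dd test several times, feed the numbers to something like this, and report the middle value rather than a one-shot result.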

     

    Loader: jun 1.04b (DS918)

    DSM: 6.2.1-23824U1

    Hardware/CPU: J4105-ITX

    HDD: WD Red 8TB RAID 5 (4 drives)

    Results:

    dd if=/dev/zero bs=1M count=1024 | md5sum

    CPU: 422 MBps

    dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync

    WD Red RAID 5-4: 157 MBps

     

    ______________________________________________________________________________

     

    My main rig is not on 6.2.1, but I thought I'd record the results on the NVMe platform.  I'll repeat this once it is converted to 6.2.1.

     

    Loader: jun 1.02b (DS3615xs)

    DSM: 6.1.7-15284U2

    Hardware: ESXi 6.5

    CPU: E3-1230v6

    HDD: Intel P3500 2TB NVMe RAID 1, WD Red 4TB RAID 10 (8 drives)

    Results:

    dd if=/dev/zero bs=1M count=1024 | md5sum

    CPU: 629MBps (I do have one other active VM but it's pretty idle)

    dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync

    NVMe RAID 1: 1.1 GBps

    WD Red RAID 10-8: 371 MBps

     

    The only thing above that can be directly compared is haldi's RAID5 @ 171MBps vs. mine at 157MBps, although the drives are quite different designs.

  5. The issue at hand is that Synology is built upon open-source products (even some of their "proprietary" packages).  The license that comes with those open source products essentially requires that the developers must make source code of derivative works available to the public.  Which Synology (eventually) does.

     

    So, for them to say that you, I, or anyone else cannot take that source code and redeploy it in a manner of our choosing would violate the licenses by which THEY are bound.  That said, reselling XPenology would run afoul of their legitimate copyright, and they would sue you.  Misuse of their cloud services on a large scale would probably get their attention.

     

    Individual use is probably not worth their time, and probably wouldn't stand up in a court.

  6. 2 hours ago, benok said:

    Have you ever tried NVMe passthrough with a 918+ VM?

    I wanted to know whether it is possible for a 918+ VM to work with NVMe passthrough, like other OSes' VMs.

     

    My conclusion is that flash cache has little value for the workloads likely to be delivered by a DSM installation. There are many instances of cache corruption and data loss, so the risk isn't worth it for me. My goal was to get NVMe drives running as regular disks for DSM.  Presently the ONLY way to do this is to have ESXi convert them (via virtual disk, or pRDM) to SATA or SCSI.  By doing so I get 1-1.5GBps read and write speed on NVMe RAID 1.  So what purpose would the cache serve?

     

    During testing, I did notice that I could select my NVMe drives (as SCSI via pRDM) as a SSD cache target.  This was on DSM 6.1.7 on DS3615, which does not support NVMe natively.  So DSM was identifying them as regular SSDs, eligible for cache.  This would be fine and equivalent to running DS918 code to use NVMe natively.

     

    DSM for DS918 has drivers, utilities and custom udev code to differentiate SATA SSD from NVMe. However, it's a policy decision by Synology to treat NVMe differently; there is no technical reason it can't act as a regular disk. That is why NVMe can be used as cache but not as a regular disk.  ESXi passthrough actually presents the hardware to the VM.  All ESXi needs in order to do so is to identify the PCI resource (you may have to edit the hardware map to force ESXi to do it, though).  So if you know you are able to run an NVMe cache baremetal, it is very likely to work as a passthrough device on ESXi.  But again, only as cache.
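    For the hardware-map edit mentioned above, ESXi reads /etc/vmware/passthru.map; a sketch of an entry follows (the vendor/device IDs are illustrative - take yours from lspci -n - and treat the reset method as an assumption to verify for your controller):

```
# passthru.map sketch: force a PCI device eligible for passthrough
# columns: vendor-id  device-id  reset-method  fptShareable
144d  a804  d3d0  false
```

    After editing the map, the host needs a reboot for the device to appear as passthrough-capable.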

     

    2 hours ago, benok said:

    I've read btrfs performance is very bad for VM workload on some Synology forum.

     

    I'm pretty sure that is referring to running child VMs within Synology's Virtual Machine Manager package.  But btrfs does offer advantages for hot snapshots, etc.

    Running btrfs on XPenology in an ESXi VM should show no performance difference versus baremetal.

     

  7. This was written for 6.1.7.  The system that I have configured in this way is still on that version.

     

    Note that pRDM does still work fine on 6.2.1 to map ESXi physical drives to virtual SATA drives.

     

    I've done some preliminary testing of 6.2.1 for SCSI/SAS.  LSI Logic SAS dialect doesn't seem to work at all.  The other two SCSI options do work. However, you can't initially configure your system with a virtual SCSI controller or the drive won't be found.  Once the drives are initialized and part of an array, they can be moved from the vSATA controller to the vSCSI controller, probably demanding a system "Recover" operation, but no system or data impact so far.

  8. Ok, given that this emulates the DAS connectivity that OP was interested in, I guess it's not a thread hijack!

     

    The fact is that I am using this to emulate USB storage for a HDTV DVR, so good guess on your part.

     

    Linux "gadget" module g_mass_storage is used to enable storage-emulation functions using USB OTG mode on a compatible device.  Gadgets can do other things, including LAN over USB, serial I/O over USB, etc.; the behavior depends on which module is loaded.  Reference information here and here

     

    Part A: Configure the pi-type single-board computer (SBC - in my case, the Neo2)

    I'm using armbian as the OS on my Neo2.  Raspbian and other distros will have similar methods, but the configuration files and locations may vary.

     

    1. Enable the device tree overlay by appending the following to /boot/config.txt

    dtoverlay=dwc2

    2. Load the gadget modules on boot by adding the following lines to /etc/modules-load.d/modules.conf

    dwc2
    g_mass_storage

    The SBC will need to be rebooted to activate the modules.

     

    Part B: Set up a shared folder and image as storage on DSM

     

    1. If not enabled, go into Control Panel and enable NFS.  Under Advanced Settings, set 32K read and write packet sizes.

     

    2. Configure a shared folder on DSM and enable NFS services on it, to include the host/IP of the SBC.  Use these parameters:

    • Privilege: read/write
    • Squash: no mapping
    • Security: sys
    • Enable synchronous: yes

    3. Configure a target image file in the root of the shared folder.

    • This can be done from the SBC after mounting the shared folder via NFS, or from the DSM command line
    • A sparse file will only allocate storage in DSM when it is actually used by the SBC
    • Refer to the reference links above for configuration details and examples
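    As a sketch of step 3 (the path and size are illustrative; on DSM you would target the shared folder, e.g. /volume1/<shared folder name>/image.img, rather than a temp directory), a sparse image can be created by seeking past the desired size without writing any data:

```shell
# Create a 1 GiB sparse image: count=0 writes no data, seek=1024 sets the
# apparent size, so blocks are only allocated when the SBC actually writes.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/image.img" bs=1M count=0 seek=1024 2>/dev/null
ls -lh "$dir/image.img"   # apparent size: 1.0G
du -h "$dir/image.img"    # actual allocation: 0
```

    The gap between ls (apparent size) and du (allocated blocks) is what makes the file sparse.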

    Part C: Configure the NFS and module scripting on the SBC

     

    1. Configure the NFS mount target on the SBC

    mkdir -p /mnt/nfs_client/<shared folder name>

    2. Sample SBC mount/startup script, assuming a prepared image file called image.img on shared folder share, on volume1, on DSM with IP address 10.2.3.4

    sudo mount -t nfs -o nolock,wsize=32768,rsize=32768,intr,hard 10.2.3.4:/volume1/share /mnt/nfs_client/share
    sudo modprobe g_mass_storage file=/mnt/nfs_client/share/image.img stall=0 iSerialNumber="1234567890" nofua=1

    3. Sample SBC stop/dismount script to complement the above.  Troubleshooting information should be visible in /var/log/messages

    sudo modprobe -r g_mass_storage
    sudo umount /mnt/nfs_client/share

    4. Set up the NFS mount and module load on SBC bootup by adding the startup script to /etc/rc.local

     

    I prefer this over /etc/fstab NFS mount because it eliminates any mismatched service issues between NFS and the g_mass_storage module initialization.
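    As a sketch (the script path and name are hypothetical; the IP matches the example above), the rc.local entry can gate everything on the NAS answering first:

```shell
#!/bin/sh
# /etc/rc.local sketch: wait until the DSM host responds, then run the
# mount + modprobe startup script from step 2.
until ping -c 1 -W 2 10.2.3.4 >/dev/null 2>&1; do
  sleep 2
done
/usr/local/bin/gadget-start.sh   # hypothetical name for the step-2 script
exit 0
```

    Waiting for the NAS before loading g_mass_storage avoids presenting an unbacked USB device to the host if the SBC boots faster than DSM.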

  9. 25 minutes ago, Joe Bethersonton said:

    @flyride

    I haven't purchased any hardware just yet, as I am still looking for a case that fits in my IKEA Kallax shelf unit.

     

    Regarding the motherboard, is there anything else than the chipset and NIC that I should be aware of?

    I'm considering switching to mITX, and found the ASRock H370M-ITX/ac, which has the Intel H370 chipset, dual Intel gigabit NICs and 6 SATA ports. Would that be suitable for an XPEnology build?

     

    You're proposing Intel, so LAN and disk controller are the two key items.  H370 and B360 are basically the same thing from a driver standpoint, so you should be fine.

    For cases, I'm using the U-NAS units, both the NSC-401 (mITX) and the NSC-810 (mATX).  They are a royal pain to install but the results are worth it.

     

  10. Unsure.  The DS918+ image (from Synology) is currently the only PAT file supported on XPenology that has the Synology utilities to configure the cache.

     

    However, I was able to see /dev/nvme0n1 and use basic Linux commands to check a NVMe drive on 3615/3617/916 PAT files.  I just never could get it to work without the DSM tools.

    https://xpenology.com/forum/topic/6235-setup-unsupported-nvme-cache/?tab=comments#comment-54018

     

    Does nothing come back when you execute "nvme list"?

  11. No interest whatsoever in Windows Server.

    SHR is overrated in my opinion.  Performance hits and LVM are not worth the tradeoff for me.  Disks are cheap.

     

    Here's why I am on DSM:

    1. Don't have to pay exorbitant fees for Windows Server 2016
    2. Works well on low-cost hardware
    3. BTRFS snapshot management and replication UI
    4. Docker hosting and management UI

    Now I can easily get some of these elsewhere (e.g. Portainer) but DSM offers a combination of features I need that keeps things simple.

     

  12. - Outcome of the installation/update: SUCCESSFUL

    - DSM version prior update: DSM 6.2-23739U2

    - Loader version and model: Jun v1.04b - DS918

    - Using custom extra.lzma: NO

    - Installation type: BAREMETAL - J4105-ITX

    - Additional comments:

    • tested on ESXi first, upgrading from 1.03a loader to 1.04b.
    • then upgraded DSM to 6.2.1 and validated both vmxnet3 and e1000 VLAN drivers
    • baremetal upgrade to 6.2.1 is working with Realtek NIC
    • also /dev/dri is active for the first time on baremetal J4105 (Gemini Lake)
  13. On 10/17/2018 at 3:06 PM, Joe Bethersonton said:

    What I want from my NAS is:

    - Room for at least four drives. (SHR or similar)

    - The option to use Emby (or Plex)

    - Play my movies (mostly 720p mkv) on my Apple TV 4K.

    - Run a few third party applications on the NAS.

    - Low-ish power consumption.

    - It should be stable. I don't want a NAS where I have to reinstall the OS every other week or month.

    Can it transcode 4K, provided that the hardware is powerful enough? 

     

    I have an old Dell OptiPlex USFF with a 3rd-gen i3 that I just played around with. It was fairly easy getting DSM 6.1.4 running, but I can't get DSM 6.2 working. When I boot from the USB, it doesn't get an IP from the DHCP server. Is that a common problem?

     

    Proposed Hardware:
    Gigabyte B360M D3H (Intel NIC, 6 SATA)
    Intel Core I5 8400 (6-core Coffee Lake)
    G.Skill Aegis DDR4 2666MHz 8GB
    Silverstone SFX SST-ST45SF-G 450W PSU

     

    Your proposed hardware is very powerful.  Of note is the 6C/6T processor - DSM maxes out at 8 threads total, so a chip with more threads would be partially unused; you have the right chip. DSM 6.x through 6.1.7 is broadly compatible with a lot of different hardware, partly due to add-on "extra.lzma" hacks provided by the community.  6.2 is less compatible, but your vanilla Intel chip, chipset and NIC are going to work well.  6.2.1 (as you may have surmised) presently requires an Intel NIC (edit: 1.04b addresses the Intel NIC limitation).  There are platforms that use less power, but Coffee Lake idles fairly well (my i7-8700 idles at about 20W).

     

    Your platform will easily run Plex as a Docker app or DSM-native, and will be able to transcode H.264 at 4K in software.  Hardware transcoding must be supported by both the DSM platform (916+/918+) and the application (Plex/Emby/Video Station), so it vastly narrows your hardware choices.  My personal opinion is that it really isn't worth the trouble; software transcoding is fine on your proposed hardware as long as you aren't running more than one or two transcoding streams simultaneously.

     

    Regarding stability - once set up, XPenology is DSM, which is just open-source software (Linux and utilities) with scripted functions - a very stable platform.  People mostly get into trouble because they allow DSM to auto-update itself (or initiate an update themselves) without adequate testing.  Each and every update needs to be tested.  If you don't maintain a test environment of your own, you should at a minimum follow the update version threads on this forum and verify that a configuration very close to yours installed successfully before attempting it.

     

    Regarding the "old Dell OptiPlex USFF" that you say runs 6.1.x and not 6.2.  Assuming you are choosing and installing the correct loader properly, there are two main reasons it would fail to work:

     

    1. You are trying to run the 918+ platform on a pre-Haswell CPU
    2. You are trying to run 6.2.1 DSM (any platform) on loaders prior to 1.04b but the system does not have an Intel NIC

     

    Folks are currently having a lot of difficulty navigating some of the 6.2 pitfalls.  I put together this, which should help you evaluate your options.

     
