Everything posted by flyride

  1. Well, you can do that, but it seems like a waste if you already have ports available. I have never heard of anyone needing to do this, and the native DSM drivers are undoubtedly all for Intel chipset USB, so you may not get much useful advice on a working card for passthrough.
  2. You can't pass through a USB port. You can pass through a device attached to a USB 3.0 port. It won't show up as available for passthrough until it is attached.
  3. Double check this information here: https://xpenology.com/forum/topic/13333-6x-loaders-and-platforms/
  4. I stand corrected! Although the filesystem should add some overhead.
  5. Something is flawed about this. Write cache turned on? I don't think there is a spinning disk on the planet that can write 153 MBps.
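     A hedged sketch of how one might check whether caching is inflating a dd result; the target path below is an assumption and would be any folder on the volume under test. Without conv=fdatasync, dd can report the speed of the page cache rather than the disk:

     # write 256MB without forcing a flush - this can report cache speed (hypothetical path)
     dd if=/dev/zero of=/volume1/test/testx bs=1M count=256
     # write the same amount but flush to the physical disk before dd reports throughput
     dd if=/dev/zero of=/volume1/test/testx bs=1M count=256 conv=fdatasync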
  6. Frankly, today the way to do this is to encapsulate the functionality in Docker containers rather than modify the standard DSM environment. The beauty of Docker on DSM is that you can extend Synology filesystem access (and speed) directly into the container, so there really is no performance downside. If that interests you, you could easily do all your Docker dev/test in XPenology and have very high confidence it would function exactly the same when ported over to other DSM versions and Syno platforms.

     I used to run optware and never had any real issues (other than a version upgrade overwriting the optware startup, which was easy to restore). Since it installs its own package versions in its own directory tree, and because the standard shell doesn't have optware enabled, compatibility worked out pretty well across upgrades. But again, I've pretty much moved everything I wanted to do in the native shell/optware to Docker and won't look back now.

     That said, I'd be confident using XPenology to model out your upgrade and functionality plan. Again, I never encountered problems with upgrading due to optware, except for the rc entries to start optware. However, that advice is worth exactly what you paid for it. Good luck.
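     As an illustration of extending filesystem access into a container, here is a minimal, hypothetical sketch; the shared folder path and the container image are assumptions, not a specific recommendation:

     # bind-mount a DSM shared folder into a container; reads and writes go straight to the volume
     docker run --rm -v /volume1/docker/mytool:/data alpine:latest ls /data

     Anything the container writes under /data lands directly on the DSM volume, so there is no extra copy or network hop compared to running the same tool in the native shell.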
  7. The short answer is no, you can't do what you want by simulating different Syno hardware. You can only use the specific PAT files (and major DSM versions) that each XPenology loader is built for; any other PAT file won't function. Yes, serials/MACs are coded to hardware types. Internal to DSM, the model string is just informative; you won't change the behavior of DSM by setting it. The PAT files may start with the same base code, but they are compiled for each target hardware platform and can differ significantly from model to model, including the versions and types of Synology utilities. To your last question, I imagine that it depends on the nature of your customization. You still may be able to use XPenology to test a newer DSM. Can you provide some specifics? Some knowledgeable folks here might have advice on that.
  8. First, turn off auto-update in DSM if you haven't done so already. If and when you see a version update you think you'd like to apply, check here for the EXACT version you're looking for. As XPenology users upgrade, they will often post their experience, issues and resolutions. Sometimes a new DSM will require a new loader, which may take a while to create and test. Sometimes the loader you are using still works with the new update, but not with all hardware, or only with specific workarounds. Many XPenology users maintain a test device (virtualization helps with this immensely) to validate their personal configuration with an update before committing it to their main system. As always, keep your stuff backed up somewhere else in case of a negative outcome.
  9. Hint: Repeat the test a number of times and report the median value.

     Loader: Jun 1.04b (DS918+)
     DSM: 6.2.1-23824U1
     Hardware/CPU: J4105-ITX
     HDD: WD Red 8TB RAID 5 (4 drives)
     Results:
     dd if=/dev/zero bs=1M count=1024 | md5sum
     CPU: 422 MBps
     dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync
     WD Red RAID 5 (4 drives): 157 MBps

     My main rig is not on 6.2.1, but I thought I would record the results on the NVMe platform. I'll repeat them once it is converted to 6.2.1.

     Loader: Jun 1.02b (DS3615xs)
     DSM: 6.1.7-15284U2
     Hardware: ESXi 6.5
     CPU: E3-1230v6
     HDD: Intel P3500 2TB NVMe RAID 1, WD Red 4TB RAID 10 (8 drives)
     Results:
     dd if=/dev/zero bs=1M count=1024 | md5sum
     CPU: 629 MBps (I do have one other active VM, but it's pretty idle)
     dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync
     NVMe RAID 1: 1.1 GBps
     WD Red RAID 10 (8 drives): 371 MBps

     The only thing above that can be directly compared is haldi's RAID 5 @ 171 MBps vs. mine at 157 MBps, although the drives are quite different designs.
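     To put the "repeat and report the median" hint into practice, here is a rough, hypothetical sketch; the loop count, target path and the awk/sort parsing are assumptions about the dd output format on a given DSM build:

     # run the array write test 5 times, sort the reported throughput, print the middle value
     for i in 1 2 3 4 5; do
       dd bs=1M count=256 if=/dev/zero of=testx conv=fdatasync 2>&1 | awk '/copied/ {print $(NF-1), $NF}'
     done | sort -n | sed -n '3p'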
  10. Why don't you create a co-located VM (installed on the same IP network) and install DSM Assistant?
  11. The issue at hand is that Synology is built upon open-source products (even some of their "proprietary" packages). The license that comes with those open-source products essentially requires that the developers make the source code of derivative works available to the public, which Synology (eventually) does. So, for them to say that you, or I, or anyone else cannot take those source codes and redeploy them in a manner of our choosing would violate the licenses by which THEY are bound. That said, reselling XPenology would run afoul of their legitimate copyright and they would sue you. Misuse of their cloud services on a large scale would probably get their attention. Individual use is probably not worth their time, and probably wouldn't stand up in court.
  12. My conclusion is that flash cache has little value for the workloads likely to be delivered by a DSM installation. There are many reports of cache corruption and data loss, so the risk isn't worth it for me.

     My goal was to get NVMe drives running as regular disks for DSM. Presently the ONLY way to do this is to have ESXi convert them (via virtual disk, or pRDM) to SATA or SCSI. By doing so I get 1-1.5 GBps read and write speed on NVMe RAID 1. So what purpose would the cache serve?

     During testing, I did notice that I could select my NVMe drives (as SCSI via pRDM) as an SSD cache target. This was on DSM 6.1.7 on DS3615xs, which does not support NVMe natively, so DSM was identifying them as regular SSDs, eligible for cache. This would be fine and equivalent to running DS918+ code to use NVMe natively. DSM for the DS918+ has drivers, utilities and custom udev code to differentiate SATA SSDs from NVMe. However, it's a policy decision by Synology to treat NVMe differently, as there is no technical reason it can't act as a regular disk. That is why NVMe can be used as cache but not as a regular disk.

     ESXi passthrough actually presents the hardware to the VM. All it needs to do so is to be able to identify the PCI resource (you may have to edit the hardware map to force ESXi to do it, though). So if you know you are able to run an NVMe cache baremetal, it is very likely to work as a passthrough device on ESXi. But again, only as cache.

     I'm pretty sure that is referring to running child VMs within Synology's Virtual Machine Manager package. But it does offer advantages for hot snapshots etc. Running btrfs on XPenology in an ESXi VM should have no performance difference versus baremetal.
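     For reference, a hedged sketch of how a pRDM pointer is typically created from the ESXi shell; the NVMe device identifier and datastore path below are placeholders, not real values:

     # list the raw device names so the correct NVMe disk can be identified (hypothetical)
     ls /vmfs/devices/disks/
     # create a physical-mode RDM pointer file for that device, then attach the resulting
     # vmdk to the DSM VM on a virtual SATA or SCSI controller
     vmkfstools -z /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE_ID /vmfs/volumes/datastore1/xpenology/nvme-prdm.vmdk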
  13. This was written for 6.1.7. The system that I have configured in this way is still on that version. Note that pRDM does still work fine on 6.2.1 to map ESXi physical drives to virtual SATA drives. I've done some preliminary testing of 6.2.1 for SCSI/SAS: the LSI Logic SAS dialect doesn't seem to work at all, while the other two SCSI options do work. However, you can't initially configure your system with a virtual SCSI controller or the drives won't be found. Once the drives are initialized and part of an array, they can be moved from the vSATA controller to the vSCSI controller, which will probably demand a system "Recover" operation, but with no system or data impact so far.
  14. OK, given that this emulates the DAS connectivity that the OP was interested in, I guess it's not a thread hijack! The fact is that I am using this to emulate USB storage for an HDTV DVR, so good guess on your part.

     The Linux "gadget" module g_mass_storage is used to enable storage emulation over USB OTG on a compatible device. Gadgets can do other things, including LAN over USB, serial I/O over USB, etc.; the behavior depends on which module is loaded. Reference information here and here.

     Part A: Configure the Pi-type single-board computer (SBC - in my case, the Neo2)
     I'm using armbian as the OS on my Neo2. Raspbian and other distros will have similar methods, but the configuration files and locations may vary.
     1. Enable the device tree overlay by appending the following to /boot/config.txt
        dtoverlay=dwc2
     2. Load the gadget modules on boot by adding the following lines to /etc/modules-load.d/modules.conf
        dwc2
        g_mass_storage
     The SBC will need to be rebooted to activate the modules.

     Part B: Set up a shared folder and image as storage on DSM
     1. If not already enabled, go into Control Panel and enable NFS. Under Advanced Settings, set 32K read and write packet sizes.
     2. Configure a shared folder on DSM and enable NFS services on it, to include the host/IP of the SBC. Use these parameters:
        Privilege: read/write
        Squash: no mapping
        Security: sys
        Enable synchronous: yes
     3. Create a target image file in the root of the shared folder. This can be done from the SBC after mounting the shared folder via NFS, or from the DSM command line. A sparse file will only allocate storage in DSM when it is actually used by the SBC. Refer to the reference links above for configuration details and examples (a sketch also follows this post).

     Part C: Configure the NFS and module scripting on the SBC
     1. Configure the NFS mount target on the SBC
        mkdir -p /mnt/nfs_client/<shared folder name>
     2. Sample SBC mount/startup script, assuming a prepared image file called image.img on shared folder "share", on volume1, on a DSM host with IP address 10.2.3.4
        sudo mount -t nfs -o nolock,wsize=32768,rsize=32768,intr,hard 10.2.3.4:/volume1/share /mnt/nfs_client/share
        sudo modprobe g_mass_storage file=/mnt/nfs_client/share/image.img stall=0 iSerialNumber="1234567890" nofua=1
     3. Sample SBC stop/dismount script to complement the above. Troubleshooting information should be visible in /var/log/messages.
        sudo modprobe -r g_mass_storage
        sudo umount /mnt/nfs_client/share
     4. Set up the NFS mount and module load on SBC bootup by adding the startup script to /etc/rc.local. I prefer this over an /etc/fstab NFS mount because it eliminates any mismatched service issues between NFS and the g_mass_storage module initialization.
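     For Part B step 3 above, a minimal sketch of creating the sparse image from the DSM command line; the 500GB size and the share/image names are assumptions chosen to match the example in Part C:

     # count=0 with a large seek creates a sparse file: the full size is advertised to the
     # SBC, but blocks are only allocated on the DSM volume as they are actually written
     cd /volume1/share
     dd if=/dev/zero of=image.img bs=1M count=0 seek=512000
     ls -lh image.img   # shows the apparent size (~500GB)
     du -h image.img    # shows the space actually allocated (initially near zero)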
  15. You're proposing Intel, so LAN and disk controller are the two key items. H370 and B360 are basically the same thing from a driver standpoint, so you should be fine. For cases, I'm using the U-NAS units, both the NSC-401(mITX) and the NSC-810 (mATX). They are a royal pain to install but the results are worth it.
  16. What you just wrote is pretty much my viewpoint in a nutshell. Odd that there was a very similar exchange yesterday.
  17. Re: DSM 6.2 Loader
     I'm running it now. You do need a Haswell or later CPU. Make sure you are using a DS918+ PAT file.
  18. Unsure. The DS918+ image (from Synology) is currently the only PAT file supported on XPenology that has the Synology utilities to configure the cache. However, I was able to see /dev/nvme0n1 and use basic Linux commands to check an NVMe drive on the 3615/3617/916 PAT files; I just never could get it to work without the DSM tools. https://xpenology.com/forum/topic/6235-setup-unsupported-nvme-cache/?tab=comments#comment-54018 Does nothing come back if you execute "nvme list"?
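     A hedged sketch of the basic checks referenced above, assuming the drive enumerates as nvme0 and that the nvme-cli utility is present on the platform in question:

     ls /dev/nvme*                      # controller and namespace devices, e.g. /dev/nvme0 and /dev/nvme0n1
     nvme list                          # model, serial and namespace details per controller
     cat /sys/class/nvme/nvme0/model    # the kernel's view of the controller, independent of nvme-cli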
  19. No interest whatsoever in Windows Server. SHR is overrated in my opinion. Performance hits and LVM are not worth the tradeoff for me. Disks are cheap. Here's why I am on DSM:
     - Don't have to pay exorbitant fees for Windows Server 2016
     - Works well on low-cost hardware
     - BTRFS snapshot management and replication UI
     - Docker hosting and management UI
     Now I can easily get some of these elsewhere (e.g. Portainer), but DSM offers a combination of features I need that keeps things simple.
  20. - Outcome of the installation/update: SUCCESSFUL
     - DSM version prior update: DSM 6.2-23739U2
     - Loader version and model: Jun v1.04b - DS918
     - Using custom extra.lzma: NO
     - Installation type: BAREMETAL - J4105-ITX
     - Additional comments: Tested on ESXi first, upgrading from the 1.03a loader to 1.04b, then upgraded DSM to 6.2.1 and validated both vmxnet3 and e1000 VLAN drivers. The baremetal upgrade to 6.2.1 is also working with the Realtek NIC. /dev/dri is active for the first time on baremetal J4105 (Gemini Lake).
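     A quick, hypothetical way to confirm the /dev/dri observation from the DSM shell on a given box:

     ls -l /dev/dri          # card0 and renderD128 should appear if the iGPU driver bound
     lsmod | grep i915       # confirms the Intel graphics kernel module is loaded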
  21. When setting up an XPEnology system, you must first select a DSM platform and version. XPEnology supports a few specific DSM platforms that enable certain hardware and software features. All support a minimum of 4 CPU cores, 64GB of RAM, 10GbE network cards and 12-disk arrays. When you choose a platform and the desired DSM software version, you must download the correct corresponding loader. That may not be the "newest" loader available.

     The last 6.x version (6.2.4-25556) is functional only with the TCRP loader. TCRP is very different from the Jun loader. If you want to learn more, or if you are interested in deploying the latest 7.x versions, see the 7.x Loaders and Platforms thread. Be advised that installing 6.2.4 with TCRP is basically the same procedure as installing 7.x.

     Each of these combinations can be run "baremetal" as a stand-alone operating system OR as a virtual machine within a hypervisor (VMware ESXi is the most popular and best documented, but other hypervisors can be used if desired). Review the table and decision tree below to help you navigate the options.

     6.x Loaders and Platforms as of 16-May-2022

     Options Ranked | DSM Platform | DSM Version | Loader | Boot Methods*** | Hardware Transcode Support | NVMe Cache Support | RAIDF1 Support | Oldest CPU Supported | Max CPU Threads | Notes
     1, 3a | DS918+ | 6.2.0 to 6.2.3-25426 | Jun 1.04b | UEFI, BIOS/CSM | Yes | Yes | No | Haswell ** | 8 | 6.2.0, 6.2.3 ok; 6.2.1/6.2.2 not recommended for new installs*
     2, 3b | DS3617xs | 6.2.0 to 6.2.3-25426 | Jun 1.03b | BIOS/CSM only | No | No | Yes | any x86-64 | 16 | 6.2.0, 6.2.3 ok; 6.2.1/6.2.2 not recommended for new installs*
     - | DS3615xs | 6.2.0 to 6.2.3-25426 | Jun 1.03b | BIOS/CSM only | No | No | Yes | any x86-64 | 8 | 6.2.0, 6.2.3 ok; 6.2.1/6.2.2 not recommended for new installs*
     - | DS918+ | 6.2.4-25556 | TCRP 0.4.6 | UEFI, BIOS/CSM | Yes | Yes | No | Haswell ** | 8 | recommend 7.x instead
     - | DS3615xs | 6.2.4-25556 | TCRP 0.4.6 | UEFI, BIOS/CSM | No | No | Yes | any x86-64 | 8 | recommend 7.x instead
     - | DS916+ | 6.0.3 to 6.1.7 | Jun 1.02b | UEFI, BIOS/CSM | Yes | No | No | Haswell ** | 8 | obsolete, use DS918+ instead
     - | DS3617xs | 6.0.3 to 6.1.6 | Jun 1.02b | UEFI, BIOS/CSM | No | No | Yes | any x86-64 | 16 | 6.1.7 may kernel panic on ESXi
     4 | DS3615xs | 6.0.3 to 6.1.7 | Jun 1.02b | UEFI, BIOS/CSM | No | No | Yes | any x86-64 | 8 | best compatibility on 6.1.x

     * 6.2.1 and 6.2.2 have a unique kernel signature causing issues with most kernel driver modules, including those included in the loader. Hardware compatibility is limited.
     ** FMA3 instruction support required. All Haswell Core processors or later support it; only a select few Pentiums, and no Celerons, do. (A quick check is sketched after this post.)
     ** Piledriver is believed to be the minimum AMD CPU architecture to support the DS916+ and DS918+ DSM platforms.
     *** If you need an MBR version of the boot loader because your system does not support a modern boot methodology, follow this procedure.

     CURRENT LOADER/PLATFORM RECOMMENDATIONS / SAMPLE DECISION POINTS:

     1. DEFAULT install DS918+ 6.2.3 - also if hardware transcoding or NVMe cache support is desired, or if your system only supports UEFI boot
        Prerequisite: Intel Haswell (aka 4th generation) or newer CPU architecture (or AMD equivalent)
        Configuration: baremetal loader 1.04b, DSM platform DS918+ version 6.2.3
        Compatibility troubleshooting options: extra.lzma or ESXi
     2. ALTERNATE install DS3617xs 6.2.3 - if RAIDF1, 16-thread or best SAS support is desired, or your CPU is too old for DS918+
        Prerequisite: USB key boot mode must be set to BIOS/CSM/Legacy Boot
        Configuration: baremetal loader 1.03b, DSM platform DS3617xs version 6.2.3
        Compatibility troubleshooting options: extra.lzma, DS3615xs platform, or ESXi
     3. ESXi (or other hypervisor) virtual machine install - generally, if hardware is unsupported by DSM but works with a hypervisor
        Prerequisites: ESXi hardware compatibility, free or full ESXi 6.x or 7.x license
        Use case examples: virtualize an unsupported NIC, virtualize SAS/NVMe disks and present them as SATA, run other ESXi VMs instead of Synology VMM
        Option 3a: 1.04b loader, DSM platform DS918+ version 6.2.3
        Option 3b: 1.03b loader, DSM platform DS3617xs version 6.2.3 (VM must be set to BIOS Firmware)
        Preferred configurations: passthrough SATA controller and disks, and/or configure RDM/RAW disks
     4. FALLBACK install DS3615xs 6.1.7 - if you can't get anything else to work
        Prerequisite: none
        Configuration: baremetal loader 1.02b, DSM platform DS3615xs version 6.1.7

     SPECIAL NOTE for Intel 8th generation+ (Coffee Lake, Comet Lake, Ice Lake, etc.) motherboards with embedded Intel network controllers: each time Intel releases a new chipset, it updates the PCI ID for the embedded NIC. This means a driver update is required to support it, which may or may not be available as an extra.lzma update. Alternatively, disable the onboard NIC and install a compatible PCIe NIC such as the Intel CT gigabit card.
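     Regarding the FMA3 footnote above, a quick hedged check that can be run from any Linux environment (for example a live USB) on the target hardware before committing to a platform:

     # prints "fma" if the CPU advertises FMA3; empty output means pick DS3615xs/DS3617xs instead
     grep -ow fma /proc/cpuinfo | head -1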
  22. Your proposed hardware is very powerful. Of note is the 6C/6T processor - DSM maxes out at 8 threads total. An i5-8600 would be partially unused, so you have the right chip. DSM 6.x through 6.1.7 is broadly compatible with a lot of different hardware, partially due to add-on "extra.lzma" hacks provided by the community. 6.2 is less compatible, but your vanilla Intel chip, chipset and NIC are going to work well. 6.2.1 (as you may have surmised) presently requires an Intel NIC (edit: 1.04b addresses the Intel NIC limitation). There are platforms that use less power, but Coffee Lake idles fairly well (my i7-8700 idles at about 20W).

     Your platform will easily run Plex as a Docker app or DSM native, and will be able to transcode H.264 in software at 4K. Hardware transcoding must be supported by both the DSM platform (916+/918+) and the application (Plex/Emby/Video Station), so it vastly narrows your choices for hardware. My personal opinion is that it really isn't worth the trouble, which is fine on your proposed hardware as long as you aren't running more than one or two transcoding streams simultaneously.

     Regarding stability: once set up, XPenology is DSM, which is just open-source software (Linux and utilities) with scripted functions - a very stable platform. People mostly get into trouble because they allow DSM to auto-update itself (or initiate an update themselves) without adequate testing. Each and every update needs to be tested. If you don't maintain an environment to test on your own, you should at a minimum follow the update version threads on this forum and verify that a configuration very close to yours installed successfully before attempting it.

     Regarding the "old Dell OptiPlex USFF" that you say runs 6.1.x and not 6.2: assuming you are choosing and installing the correct loader properly, there are two main reasons it would fail to work:
     - You are trying to run the DS918+ platform on a pre-Haswell CPU
     - You are trying to run 6.2.1 DSM (any platform) on loaders prior to 1.04b, but the system does not have an Intel NIC
     Folks are currently having a lot of difficulty navigating some of the 6.2 pitfalls. I put together this, which should help you evaluate your options.
  23. Are you trying to recover with a totally clean build of the bootloader? If not, try that. Also, can you validate that the onboard Ethernet is currently functioning by using another OS?