flyride

Members
  • Content Count

    384
  • Joined

  • Last visited

  • Days Won

    30

flyride last won the day on June 15

flyride had the most liked content!

Community Reputation

150 Excellent

4 Followers

About flyride


  1. No, it's not an XPEnology problem; the occasional corruption events are well chronicled on the Synology forums, Reddit, etc. Why bother if it doesn't measure well, especially when you can put your small files on a real SSD? I'm not saying cache is useless, it's just not very beneficial for typical homebrew XPEnology installations. In a multi-user environment with a 10GbE interface and lots of small files, that's really an excellent use case.
  2. Regardless, someone would have to engineer a loader for ARM which has not been done.
  3. This isn't really an XPEnology or DSM problem, this is an ESXi problem. I don't see how you can do it differently than you are without changing hardware. Why not acquire and install another SATA or NVMe datastore so that you don't need the 32GB USB stick and can return the ESXi USB handling to normal?
  4. Two reasons: 1. Ongoing, albeit occasional, instances of volume corruption directly attributable to SSD cache. 2. Real-world workloads are not improved substantially by cache. See commentary here that encapsulates some thoughts on the matter.
  5. Actually there are some combinations that work to use NVMe under ESXi as the VM emulates whatever disk interface you want. SATA and SCSI are recognized by DSM. I've had the best results with physical RDM and SCSI under DSM 6.1.x. DSM 6.2.x can be made to work but it's much, much more finicky (which is true in general). See this and this It does also work with virtual drives on SSD datastores (see earlier in the thread). In any case, the drive must be recognized by DSM as an SSD disk type for it specifically to be available for cache. I'm not a fan of cache personally, but to each his own.
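A minimal sketch of the physical RDM approach mentioned above, run from the ESXi shell. The device name and datastore path here are placeholders; your NVMe device's `t10.*` identifier will differ:

```shell
# List the raw disk devices to find the NVMe drive's identifier
ls /vmfs/devices/disks/

# Create a physical-mode (-z) RDM pointer file on an existing datastore.
# "t10.NVMe____Example_Device" is a placeholder; substitute your device's name.
vmkfstools -z /vmfs/devices/disks/t10.NVMe____Example_Device \
  /vmfs/volumes/datastore1/xpenology/nvme-rdm.vmdk

# Then attach nvme-rdm.vmdk to the DSM VM as an existing disk, on a SATA
# or SCSI virtual controller, so DSM recognizes it as a supported disk type.
```

`-z` maps the device in physical compatibility mode; `-r` would create a virtual-mode RDM instead.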
  6. 11 - Why all the issues with recent versions of DSM (6.2.x), and what options are available to mitigate them?

Jun's loader attempts to fool the Synology code into thinking that the user hardware is actually Synology's platform hardware. How this is technically accomplished is a mystery to most everyone except Jun, but regardless of how it works, it has an impact on the DSM runtime environment. Some issues are fairly innocuous (spurious errors in the system logs, inability to leverage hardware features like CPU turbo and hibernation, etc.) but others may cause instability, driver and kernel crashes.

By far, the most stable combination (meaning, compatibility with the largest inventory of hardware) is Jun's loader 1.02b and DSM 6.1.x. Jun has stated that the target platform for 1.02b was DS3615, and DS3617 was incidentally compatible enough to use the same loader. However, there are kernel crashes with DS3617 on certain combinations of hardware. There is otherwise no functional difference between DS3615 and DS3617, which is why DS3615 is recommended.

DSM 6.2 introduced new, more stringent Synology hardware checks, and Jun came up with another approach to bypass them. While the loaders do work with optimal hardware, on many systems the 6.2 loaders often result in kernel loadable module crashes and kernel panics. Many have also noted substantially poorer disk I/O performance compared with prior versions. DSM's embedded NIC drivers have been inventoried and documented, but much of that catalogue is useless, as the 1.03b loader crashes all but a select few drivers on DSM 6.2.1 and later. And users with new hardware often find that those few functional network drivers don't support the newest revisions of their on-board silicon. Similarly, the 1.04b loader explicitly adds support for the Intel Graphics (i915) driver, but upgrading to 6.2.2 causes it to crash on some revisions of the Intel Graphics hardware (such as Apollo Lake J-series systems).

A very large number of forum posts can be attributed to users seeking to install DSM 6.2.x and encountering one of these two significant problems. ESXi or another virtualization platform is probably the best strategy to mitigate hardware support limitations on XPEnology and DSM 6.2.x. If your goal is to deploy the latest DSM versions and keep pace with Synology patches, you would be well advised not to deploy a baremetal XPEnology installation at this time. Unfortunately, this is an obstacle for those who want a baremetal solution to enable hardware-accelerated video encoding support within DSM.

It should be noted that many XPEnology super-users, forum admins and devs continue to use the stalwart combination of ESXi, Jun 1.02b and DS3615 DSM 6.1.7 for mission-critical work, and have no intention of upgrading. That said, there really isn't much of a reason to stay current once you have a functioning system. DSM 7.0 is imminent, and the current loaders are virtually guaranteed not to work with it. And each new 6.2 point release is objectively bringing new compatibility issues and crashes, for little or no functional/incremental benefit.
  7. grub.cfg doesn't have much to do with whether a NIC driver works. I don't currently run DSM 6.2.2 either; read on for why.

Folks, Jun's loader is fundamentally trying to fool the Synology code into thinking that the booted hardware is Synology's platform hardware. How it does this is a mystery to most everyone except Jun. Whatever it does has varying effects on the OS environment. Some are innocuous (spurious errors in the system logs), and some cause instability, driver and kernel crashes.

By far, the most stable combination (meaning, compatibility with the largest inventory of hardware) is Jun's loader 1.02b and DSM 6.1.x. Jun stated that the target DSM platform for 1.02b was DS3615, and DS3617 was compatible enough to use the same loader. However, there are issues with DS3617 on certain combinations of hardware that cause kernel crashes for undetermined reasons. I can consistently demonstrate a crash and lockup with the combination of ESXi, DS3617 and 1.02b on the most recent DSM 6.1 release (6.1.7). There is otherwise no functional difference between DS3615 and DS3617, which is why DS3615 is recommended.

DSM 6.2 introduced new Synology hardware checks, and Jun came up with a new approach to bypass them so that we can run those versions - implemented in loaders 1.03 and 1.04. Whatever he did to defeat the DSM 6.2 Synology code results in more frequent kernel loadable module crashes, kernel panics, and generally poorer performance (the performance difference may be inherent to 6.2, however). The library of NIC support in DSM 6.2.1/6.2.2 is largely useless, as the 1.03b loader crashes all but a select few NIC modules on DSM 6.2.1 and later. Similarly, the 1.04b loader has support for the Intel Graphics (i915) driver, but going from 6.2.1 to 6.2.2 causes it to crash on some revisions of the Intel Graphics hardware.

Personally, I am running ESXi 1.02b/DS3615 6.1.7 for mission-critical work and have no intention of upgrading. I do have a baremetal system running 6.2.1 right now, but it's really only for test and archive. Nowadays, I strongly advise against a baremetal XPEnology installation if you have any desire to keep pace with Synology patches. That said, there really isn't much of a reason to stay current. DSM 7.0 is imminent, and the available loaders are virtually guaranteed not to work with it. Each new 6.2 point release is causing new issues and crashes, for little or no functional/incremental benefit.
  8. Picking the correct OS version from ESXi (per the tutorials) will affect whether the virtual e1000e hardware is presented to the VM or not. So it does matter.
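The guest OS choice drives which virtual NIC ESXi presents by default; it can also be set explicitly in the VM's .vmx file. A hedged fragment (these are standard VMX keys, but the exact values you want depend on the tutorial/loader you follow):

```
guestOS = "other3xlinux-64"       # example 64-bit Linux guest OS type
ethernet0.virtualDev = "e1000e"   # virtual NIC model presented to the VM
```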
  9. Use another NIC? <g> On a serious note, there are many examples of functional NIC models. Pick one of those, or use ESXi.
  10. Again, your PC running DSM needs the drives connected directly, not via USB or Ethernet.
  11. How are you connecting the drives to the Beelink PC? USB won't really work - they need to be connected to a traditional SATA controller, and that controller must be supported by DSM.
  12. The RS409 has an ARM processor; XPEnology is only able to run the Intel-based DSM builds. The 16TB volume limitation is associated with the 32-bit address space of your processor. You've gotten your money's worth out of that hardware. Build a new system using XPEnology and enjoy vastly better performance than what you get out of that ancient Marvell CPU.
  13. Yes, your J3455 supports the instruction set needed. When you install the new loader, your system will boot to a migration install since you are changing DSM platforms. So download the 6.2.1-23824 PAT file in advance.
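If you want to verify instruction-set support before installing, the CPU flags are visible in /proc/cpuinfo on any Linux live environment. A quick sketch; the flag names below (sse4_2, movbe) are examples, so substitute whichever instructions the loader/platform actually requires:

```shell
# Show the CPU model, then test for example instruction-set flags.
grep -m1 "model name" /proc/cpuinfo

for flag in sse4_2 movbe; do
    if grep -q "\b${flag}\b" /proc/cpuinfo; then
        echo "${flag}: supported"
    else
        echo "${flag}: not supported"
    fi
done
```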
  14. Please see this for loaders vs. versions. 6.2.x has a number of hardware constraints on it, so you may need to pick your loader and platform carefully, referencing your hardware.
  15. Well, you need a certain minimum level of tech expertise to run XPEnology. If you aren't comfortable with that, then you might be better off with a real Synology unit. There isn't a turnkey backup methodology to restore a broken XPEnology/DSM system, but there are a number of ways to back up various parts of it. Here are some:
1. Keep a backup image of your loader on your PC hard drive so it can be rebuilt exactly every time
2. Save your system settings from DSM
3. If you are using ESXi, you can snapshot your VM prior to upgrade and roll it back if there is a problem
4. If you are using ESXi, you can back up your VM and scratch resources using any number of ESXi backup tools
5. If you want a separate copy of your data, build up another XPEnology NAS environment and use BTRFS replication
5a. If you are using Docker, you can replicate your whole Docker environment using BTRFS replication
Backing up DSM itself to something like an image backup isn't the easiest thing to do, and generally isn't necessary.
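A couple of the items above can be sketched as commands. These assume a Linux shell, that /dev/sdX is the loader USB stick, that /volume1/share is a btrfs subvolume, and that backup-nas is reachable over SSH; all device names, paths, and hostnames are placeholders:

```shell
# Item 1: image the loader USB stick so it can be rewritten exactly
# (run from a Linux PC with the stick attached as /dev/sdX).
dd if=/dev/sdX of=loader-backup.img bs=1M status=progress

# Restore later by reversing the devices:
# dd if=loader-backup.img of=/dev/sdX bs=1M

# Item 5: generic btrfs replication to a second NAS - take a read-only
# snapshot, then stream it with send/receive.
btrfs subvolume snapshot -r /volume1/share /volume1/share-snap
btrfs send /volume1/share-snap | ssh backup-nas btrfs receive /volume1/
```

On DSM itself the Snapshot Replication package wraps this workflow in the GUI; the raw btrfs commands are shown only to illustrate the mechanism.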