XPEnology Community

Everything posted by flyride

  1. Is SMB3 Multichannel supported on DSM? I think it's still highly experimental: https://www.reddit.com/r/synology/comments/90gc61/smb_3_multichannel_support_on_dsm_62/ There is plenty of evidence that bonding ports to a single workstation doesn't improve throughput (even though Windows lies to you about it). Alternatively, add a supported 10GbE NIC and pass it through to the VM. If you are point-to-point you can use SFP+ cards and an SFP+ DAC instead of a switch, and there are a lot of 2-port 10GbE cards available if you need to scale to two high-speed clients. Works great for me.
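    If you want to experiment with SMB3 multichannel anyway: DSM's file sharing is Samba under the hood, and Samba exposes multichannel as a global option. A minimal sketch, assuming DSM keeps its Samba config at /etc/samba/smb.conf and ships a Samba build new enough to honor the flag (both are assumptions, and the feature is experimental):

        # /etc/samba/smb.conf - [global] section; may be overwritten by DSM updates
        [global]
            server multi channel support = yes

    Even then, gains only show up when the client (e.g. Windows 10 over SMB3) actually negotiates multiple channels.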
  2. No, it's not an XPEnology problem; the occasional corruption events are well chronicled on the Synology forums, Reddit, etc. Why bother if it doesn't measure well, especially when you can put your small files on a real SSD? I'm not saying cache is useless, it's just not very beneficial for typical homebrew XPEnology installations. In a multi-user environment with a 10GbE interface and lots of small files, that's really an excellent use case.
  3. Regardless, someone would have to engineer a loader for ARM which has not been done.
  4. This isn't really an XPEnology or DSM problem, this is an ESXi problem. I don't see how you can do it differently than you are without changing hardware. Why not acquire and install another SATA or NVMe datastore so that you don't need the 32GB USB stick and can return the ESXi USB handling to normal?
  5. Two reasons:
    1. Ongoing, albeit occasional, instances of volume corruption directly attributable to SSD cache
    2. Real-world workloads not improved substantially by cache
    See commentary here that encapsulates some thoughts on the matter
  6. Actually, there are some combinations that work to use NVMe under ESXi, since the VM emulates whatever disk interface you want, and SATA and SCSI are recognized by DSM. I've had the best results with physical RDM and SCSI under DSM 6.1.x. DSM 6.2.x can be made to work but it's much, much more finicky (which is true of 6.2.x in general). See this and this. It also works with virtual drives on SSD datastores (see earlier in the thread). In any case, the drive must be recognized by DSM as an SSD disk type for it to be available for cache. I'm not a fan of cache personally, but to each his own.
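    For reference, a physical RDM for an NVMe (or SATA) SSD can be created from the ESXi shell with vmkfstools and then attached to the DSM VM on a SCSI controller. A rough sketch only; the device identifier, datastore and VM folder below are placeholders to be read from your own host:

        # list candidate devices and note the t10./naa. identifier of the SSD
        ls -l /vmfs/devices/disks/
        # create a physical-compatibility RDM pointer file inside the VM's folder
        vmkfstools -z /vmfs/devices/disks/t10.EXAMPLE_SSD_IDENTIFIER \
            /vmfs/volumes/datastore1/DSM/ssd-rdm.vmdk
        # then add ssd-rdm.vmdk to the VM as an existing disk on a SCSI controller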
  7. grub.cfg doesn't have much to do with whether a NIC driver works. I don't currently run DSM 6.2.2 either. Read on for why.

    Folks, Jun's loader is fundamentally trying to fool the Synology code into thinking that the booted hardware is Synology's platform hardware. How it does this is a mystery to most everyone except Jun. Whatever it does has varying effects on the OS environment. Some are innocuous (spurious errors in the system logs), and some cause instability, driver and kernel crashes.

    By far the most stable combination (meaning, compatibility with the largest inventory of hardware) is Jun's loader 1.02b and DSM 6.1.x. Jun stated that the target DSM platform for 1.02b was DS3615, and DS3617 was compatible enough to use the same loader. However, there are issues with DS3617 on certain combinations of hardware that cause kernel crashes for undetermined reasons. I can consistently demonstrate a crash and lockup with the combination of ESXi, DS3617 and 1.02b on the most recent DSM 6.1 release (6.1.7). There is otherwise no functional difference between DS3615 and DS3617, which is why DS3615 is recommended.

    DSM 6.2 introduced new Synology hardware checks, and Jun came up with a new approach to bypass them so that we can run those versions - implemented in loaders 1.03 and 1.04. Whatever he did to defeat the DSM 6.2 Synology code results in more frequent kernel loadable module crashes, kernel panics, and generally poorer performance (the performance difference may be inherent to 6.2, however). The large library of NIC support in DSM 6.2.1/6.2.2 is effectively useless, as the 1.03b loader crashes all but a select few NIC modules on DSM 6.2.1 and later. Similarly, the 1.04b loader has support for the Intel Graphics (i915) driver, but going from 6.2.1 to 6.2.2 causes it to crash on some revisions of the Intel Graphics hardware.

    Personally, I am running 1.02b/DS3615 under ESXi on 6.1.7 for mission-critical work and have no intention of upgrading. I do have a baremetal system running 6.2.1 right now, but it's really only for test and archive. Nowadays, I strongly advise against a baremetal XPEnology installation if you have any desire to attempt to keep pace with Synology patches. That said, there really isn't much of a reason to stay current. DSM 7.0 is imminent and the available loaders are virtually guaranteed not to work with it. Each new 6.2 point release is causing new issues and crashes, for little or no functional/incremental benefit.
  8. Picking the correct guest OS version in ESXi (per the tutorials) affects whether the virtual e1000e hardware is presented to the VM or not. So it does matter.
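    For context, the guest OS selection ends up as a key in the VM's .vmx file, and together with the NIC type it determines what virtual hardware DSM sees at boot. A hedged example of what the relevant lines might look like (values are illustrative, not a recommendation for any particular loader):

        guestOS = "other3xlinux-64"        # written by the "Other 3.x Linux (64-bit)" selection
        ethernet0.virtualDev = "e1000e"    # virtual NIC model presented to the guest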
  9. Use another NIC? <g> On a serious note, there are many examples of functional NIC models. Pick one of those, or use ESXi.
  10. Again, your PC running DSM needs the drives connected directly, not via USB or Ethernet.
  11. How are you connecting the drives to the Beelink PC? USB won't really work - they need to be connected to a traditional SATA controller, and that controller must be supported by DSM.
  12. The RS409 has an ARM processor; XPEnology is only able to use the Intel-based DSM code. The 16TB limitation is associated with the 32-bit address space of your processor. You've gotten your money's worth out of that hardware. Build a new system using XPEnology and enjoy vastly better performance than what you get out of that ancient Marvell CPU.
  13. Yes, your J3455 supports the instruction set needed. When you install the new loader, your system will boot to a migration install since you are changing DSM platforms. So download the 6.2.1-23824 PAT file in advance.
  14. Please see this for loaders vs. versions. 6.2.x has a number of hardware constraints on it, so you may need to pick your loader and platform carefully, referencing your hardware.
  15. Well, you need a certain minimum level of tech expertise to run XPEnology. If you aren't comfortable with that then you might be better off with a real Synology unit. There isn't a turnkey backup methodology to restore a broken XPEnology/DSM system, but there are a number of ways to back up various parts of it. Here are some:
    1. Keep a backup image of your loader on your PC hard drive so it can be rebuilt exactly every time (see the sketch below)
    2. Save your system settings from DSM
    3. If you are using ESXi, you can snapshot your VM prior to upgrade and roll it back if there is a problem
    4. If you are using ESXi, you can back up your VM and scratch resources using any number of ESXi backup tools
    5. If you want a separate copy of your data, build up another XPEnology NAS environment and use BTRFS replication
    5a. If you are using Docker, you can replicate your whole Docker environment using BTRFS replication
    Backing up DSM itself to something like an image backup isn't the easiest thing to do, and generally isn't necessary.
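    For item 1, imaging the loader stick from any Linux machine is a one-liner; a sketch, where /dev/sdX is a placeholder for the loader stick and must be identified carefully (lsblk / fdisk -l) before running anything:

        # back up the loader USB stick to an image file
        sudo dd if=/dev/sdX of=xpeno-loader-backup.img bs=1M status=progress
        # restore it later by reversing source and target
        sudo dd if=xpeno-loader-backup.img of=/dev/sdX bs=1M status=progress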
  16. Install an Intel NIC (see here) and disable your on-board NIC in the BIOS. Intel CT works, as do others that are specifically supported by the e1000e driver (see here)
  17. When you try it, we will know... The spreadsheet doesn't tell you whether particular hardware will work. For example, we have had some difficulties with NIC drivers at boot in loaders after 1.02. So even if the NIC is supported (meaning, there is a driver installed that is compatible with the card), there have been kernel panics and other failures. I have not seen evidence of similarly supported (driver installed in the image) disk drivers with problems, but there are no guarantees unless someone else has proven it out with the hardware you are considering. I will say that some implementations of embedded third-party controllers behave differently than the native PCI cards, which can cause problems. The spreadsheet shows exactly which device drivers exist and the PCI device IDs supported by those specific drivers... nothing more or less. If I were in your shoes, I would be very certain that the PCI device ID of the LSI controller embedded on the SuperMicro board is represented in the image you want to use.
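    A quick way to check that, assuming you can boot any Linux live environment on the SuperMicro board (or query an OS already running on it), is to read the vendor:device IDs with lspci and compare them against the spreadsheet:

        # the [xxxx:yyyy] pair in the output is the PCI device ID to look up
        lspci -nn | grep -i -E 'lsi|sas'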
  18. Actually, it's Intel that says the processor only supports 8GB; in practice it works fine up to 32GB.
  19. That board is having problems with 6.2.2 and we haven't figured out why yet. It works with 6.2.1 okay.
  20. Yes that card works great with 1.02b and DS3615 on all DSM 6.1.x
  21. That is what I would attempt. Be prepared for suboptimal SATA controller mappings. Can you grab a few small drives and build up a parallel test environment to try it? In any case, don't try to repair anything that doesn't come up clean, and you can always go back.
  22. I'm not sure why the i3-4150 would not have FMA3 like other Haswells, but if it's true, the Linux kernel will stop the boot. So you will know immediately.
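    If you want to check ahead of time, the flag is visible in /proc/cpuinfo on any Linux system running on that CPU; a quick sketch:

        # prints "fma" if FMA3 is present; no output means the CPU lacks it
        grep -o -w fma /proc/cpuinfo | sort -u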
  23. Just curious, what is Portainer doing for you that the native Syno docker manager doesn't? I use Portainer on non-DSM Docker hosts, but never thought to load it up on DSM.
  24. 1.02b loader supports all 6.1.x builds released thus far. No package limitations, with the exception of those requiring a legitimate Synology serial number.
  25. Aside from courteous language, no. As of 12 Jan that appeared to be true. Everything you see in that post comes from community experience and reports. I'll remove that statement based on your note.