flyride

Members
  • Content Count: 1,845
  • Days Won: 96
  • Community Reputation: 583 Excellent
  • Rank: Guru Master

flyride last won the day on July 12 and had the most liked content!
  1. Just an FYI for those testing: it's probably unwise to display your serial number (even if it is generated), as it could theoretically give Synology a way to identify you/your IP/etc.
  2. The USB key is a permanent boot device; don't remove it. Your BIOS should be set to boot only from USB.
  3. Read the question and you'll understand the answer. Nothing has changed with legacy or current products thus far: choose 16-core/RAIDF1 or Quicksync/NVMe.
  4. No, the RS3621xs+ supports NVMe cache only indirectly through an add-in card, and there is no current Synology device that supports all of the desired image-specific features available in either DS918+ or DS3617xs: NVMe, Quicksync, RAIDF1 and 16-thread.
  5. It is a travesty that Synology calls that a "plus" model. In any case, you may want to know this: because your volume was created on a 32-bit processor, it has a maximum size of 16TB when you expand it in the future.
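     A quick back-of-the-envelope check of that 16TB ceiling, as a Python sketch (assuming the usual ext4 4 KiB block size; verify yours with tune2fs -l): with 32-bit block addressing, the filesystem can reference at most 2^32 blocks.

        blocks_max = 2**32        # largest block count a 32-bit field can address
        block_size = 4 * 1024     # 4 KiB per block (assumed default)
        max_bytes = blocks_max * block_size
        print(max_bytes / 2**40)  # -> 16.0 (TiB)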
  6. This is a bit OT, but we get what we get. Synology is selling you a 14nm, first-gen, circa-2018 CPU as a new product. It's no accident that older-gen hardware tends to work better. And even if a platform weren't supported under kernel 4.4.x, all Syno has to do is backport the kernel mods for that particular CPU/chipset/NIC; they couldn't care less about other new hardware. I don't think the objective of redpill or any other loader should necessarily be to be compatible with ALL hardware, particularly the newest stuff. It's great if it can work out that way, but I would consider a l…
  7. https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/ This ought to help you with your research. Short answer: it should work fine.
  8. You've done what I would do. You might still try running it with only two DIMMs to see if there is any change.
  9. As long as the NIC and controller are supported baremetal (proven if you are already passing them through), this will work fine and offer a "migration" install. It's always best to keep the drive order the same, but if the array was healthy prior to the migration, it shouldn't matter.
  10. NVMe is just a PCIe interface - there is no separate controller involved. So the ASUS Hyper M.2 is nothing more than a PCIe form-factor translation (PCIe slot to M.2)... it doesn't do anything to enable RAID or boot or anything else. Some of the multi-NVMe cards do have some logic - a PCIe switch that enables the use of more drives while economizing on PCIe lanes.
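     You can see this for yourself on any Linux box: each NVMe controller enumerates directly at a PCIe address in sysfs, with no storage controller in between. A minimal sketch (standard sysfs paths; output is machine-specific):

        import os

        # Each /sys/class/nvme entry is an NVMe controller; its "device"
        # link resolves straight to a PCIe function, not to a SATA/SAS HBA.
        for ctrl in sorted(os.listdir("/sys/class/nvme")):
            dev = os.path.realpath(f"/sys/class/nvme/{ctrl}/device")
            print(ctrl, "->", dev)  # e.g. nvme0 -> .../pci0000:00/0000:00:1b.0/...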
  11. The higher-end SAS/RAID controller support is better on DS3617xs and DS3615xs (SOHO/prosumer vs. the entry-level retail DS918+), and the xs models have RAIDF1 support where DS918+ does not. Yes, except you can't assign any disks beyond the MaxDisks limit; they won't be accessible (by design). Your example will deny access to the disks on the 2nd controller. For DS3615xs/DS3617xs, MaxDisks is 12 decimal by default, so DiskIdxMap=0C causes the first controller (virtual SATA) to map beyond the slot range (hiding the loader). For DS918+, MaxDisks is 16 decimal by default…
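     To make the slot math concrete, here is a sketch (plain Python; the helper and port counts are my own illustration, not Synology code) of how DiskIdxMap assigns each controller a starting slot and why anything at or past MaxDisks disappears:

        def map_slots(disk_idx_map, ports_per_ctrl, max_disks):
            # Two hex digits per controller give that controller's first slot.
            for ctrl, ports in enumerate(ports_per_ctrl):
                start = int(disk_idx_map[ctrl * 2:ctrl * 2 + 2], 16)
                for port in range(ports):
                    slot = start + port
                    state = "visible" if slot < max_disks else "hidden"
                    print(f"ctrl {ctrl} port {port} -> slot {slot}: {state}")

        # DS3615xs/DS3617xs default: MaxDisks=12 (slots 0-11). DiskIdxMap=0C
        # starts the loader's virtual SATA controller at slot 12 (hidden),
        # while an 8-port passthrough controller occupies slots 0-7.
        map_slots("0C00", [1, 8], max_disks=12)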
  12. Thanks. In my own testing I've manually created a partition structure similar to what you have done, as has @The Chief, who authored the NVMe patch. You have created a simple, single-member array, so there is no possibility of array maintenance. What I have also found in testing is that if an NVMe member is part of a complex array (multi-member RAID1, RAID5, etc.) or SHR, an array change often causes the NVMe disk(s) to be dropped. Do you have more complex arrays with NVMe working as described?
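     For anyone wanting to reproduce the two cases, a hedged sketch (device names and md numbers are hypothetical; adjust to your own layout):

        import subprocess

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Case 1: a simple single-member array like the one described above
        # (no redundancy, so no array maintenance can occur).
        run(["mdadm", "--create", "/dev/md3", "--level=1",
             "--raid-devices=1", "--force", "/dev/nvme0n1p1"])

        # Case 2: a complex array with an NVMe member; this is the
        # configuration where an array change has dropped the NVMe disk(s)
        # in my testing.
        run(["mdadm", "--create", "/dev/md4", "--level=5",
             "--raid-devices=3", "/dev/nvme0n1p2", "/dev/sdb5", "/dev/sdc5"])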