Everything posted by flyride

  1. Yes, if you boot with a fresh loader, you will be asked to do a migration install, and DSM will be overwritten along with any non-GUI customizations you made to the root structures. This implies you would be doing this without a backup; if that is true, it is a Bad Idea. An installation error can easily overwrite your array (at worst) or force you to repeat the install because of a compatibility/configuration problem (at best).
  2. Just FYI for those testing: it's probably unwise to display your serial number (even if it is generated), as it could theoretically give Synology a way to identify you, your IP, etc.
  3. The USB key is the permanent boot device. Don't remove it. Your BIOS should be set to boot only from USB.
  4. Read the question and you'll understand the answer. Nothing has changed thus far with legacy or current products (choose 16-thread/RAIDF1 or Quicksync/NVMe).
  5. No. The RS3621xs+ supports NVMe cache indirectly through an add-in card, but there is no current Synology device that supports, in one image, all of the desired features split between DS918+ and DS3617xs: NVMe, Quicksync, RAIDF1 and 16 threads.
  6. It is a travesty that Synology calls that a "plus" model. In any case, you may want to know this: because your volume was created with a 32-bit processor, it has a maximum size of 16TB when you expand it in the future.
  7. This is a bit OT, but we get what we get. Synology is selling you a 14nm, first-gen, circa-2018 CPU as a new product. It's no mistake that older-gen hardware tends to work better. And even if a platform were not supported under kernel 4.4.x, all Syno has to do is backport the kernel mods for that particular CPU/chipset/NIC, and they couldn't care less about other new hardware. I don't think the objective of redpill or any other loader should necessarily be compatibility with ALL hardware, particularly the newest stuff. It's great if it can work out that way, but I would consider a l…
  8. https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/ This ought to help you with your research. Short answer, it should work fine.
  9. You've done what I would do. You might still try running it with two DIMMs only to see if there is any change.
  10. As long as the NIC and controller are supported baremetal (which is proven if you are already passing them through), this will work fine and offer a "migration" install. It's always best to keep the drive order, but if the array was healthy prior to the migration, it shouldn't matter.
  11. NVMe is just a PCIe interface - there is no controller involved. So the ASUS Hyper M.2 is nothing more than a PCIe form factor translation (PCIe slot to M.2)... it doesn't do anything to enable RAID or boot or anything else. Some of the multi-NVMe cards do have some logic - a PCIe switch to enable use of more drives while economizing on PCIe lanes.
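     A quick way to confirm that, assuming lspci is present on your build (device names are illustrative):

     # drives on a passive M.2 carrier show up as ordinary PCIe NVMe devices...
     lspci | grep -i "non-volatile memory"
     # ...and as standard /dev/nvmeXn1 block devices, with no RAID/HBA layer in between
     ls /dev/nvme*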
  12. The higher-end SAS/RAID controller support is better on DS3617xs and DS3615xs (SOHO/prosumer vs. the entry-level retail DS918+), and the xs models have RAIDF1 support where DS918+ does not. Yes, except that you can't assign any disks beyond the MaxDisks limit; they won't be accessible (by design). Your example will deny access to disks on the 2nd controller. For DS3615xs/DS3617xs, MaxDisks is 12 decimal by default, so DiskIdxMap=0C causes the first controller (virtual SATA) to map beyond the slot range (hiding the loader). For DS918+, MaxDisks is 16 decimal by default, so DiskIdxMap=10 has the same effect (see the check below).
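     To illustrate the limit described above (paths are from a stock DSM install; treat the values as examples only):

     # MaxDisks lives in synoinfo.conf; any disk that maps past it is hidden by design
     grep -i maxdisks /etc.defaults/synoinfo.conf    # e.g. maxdisks="12" on DS3615xs/DS3617xs
     grep -i maxdisks /etc/synoinfo.conf             # the active copy should match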
  13. Thanks. In my own testing, I've manually created a partition structure similar to what you have done, as has @The Chief, who authored the NVMe patch. You have created a simple, single-element array, so there is no possibility of array maintenance. What I have also found in testing is that if there is an NVMe member in a complex (multiple-member RAID1, RAID5, etc.) array or SHR, an array change often causes the NVMe disk(s) to be dropped. Do you have more complex arrays with NVMe working as described?
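     For reference, a rough sketch of the kind of single-element array being discussed, assuming a spare NVMe partition (device names are illustrative, and this will destroy anything on that partition):

     # one-member RAID1 on an NVMe partition: no redundancy, so no array maintenance is possible
     mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/nvme0n1p1
     mkfs.btrfs /dev/md3      # or mkfs.ext4, to match your volume type
     cat /proc/mdstat         # confirm the array assembled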
  14. A super-fast Xeon and quiet operation are usually mutually exclusive. Since there is a maximum performance level available to DSM (8 HT cores/16 threads using DS3617xs), there is usually no need for a super-fast Xeon. A 4-core CPU is more than adequate to handle a completely saturated 10GbE interface.
  15. All the patch does is allow Synology's own nvme tools to recognize nvme devices that don't exactly conform to the PCI slots of a DS918+. The base nvme support is already built into DS918+ DSM and is functional, so I do not think the patch has any impact on what you are doing. IMHO Syno does not offer NVMe array-capable systems because they do not want the cheap systems competing with their expensive ones. If you don't mind, post some Disk Manager screenshots and the output of cat /proc/mdstat from a healthy running system with your NVMe devices.
  16. The original post asked for PCs/workstations, not home-built machines. I rolled my own using the U-NAS case line (4-bay and 8-bay), with handpicked fans and passive cooling on the NAS and a low-power CPU. Since fan control is problematic with DSM (BIOS only, or write your own driver/shim), picking the right fans will make a big difference.
  17. SataPortMap=065 will break your system just as surely as 0. SataPortMap=1 should work fine unless you are running out of slots with very high port-density controllers. Is your boot loader disk set to SATA0:0 (it should be)? If you are really missing DiskIdxMap in the grub string, that is your main issue. If you are running DS3615xs/DS3617xs, set DiskIdxMap=0C; for DS918+, set DiskIdxMap=10 (see the sketch below). It does look like you are running DS918+, however (EDIT: definitely DS918+, as it is visible on your boot screen). The grub configuration on DS918+ is not really ideal for multiple…
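     For context, DiskIdxMap and SataPortMap are kernel arguments set in the loader's grub.cfg; a minimal sketch (the variable name and any neighboring arguments vary by loader version, so treat this as illustrative only):

     # DS3615xs/DS3617xs: push the loader's (first) controller past MaxDisks=12 so it is hidden
     set sata_args='DiskIdxMap=0C SataPortMap=1'
     # DS918+: same idea, but MaxDisks defaults to 16, so use hex 10
     # set sata_args='DiskIdxMap=10 SataPortMap=1'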
  18. So let me understand: you are manually creating partitions on /dev/nvmeXn1, they have proper nvme nomenclature (i.e. /dev/nvme0n1p1), and they behave as above? Why do you even need the patch then? I/O support already exists prior to the patch, which only affects the cache utilities.
  19. Be careful with this. Any MD event initiated by the UI will probably damage the integrity of an array with an NVMe member.
  20. A few comments: there are two times when network connectivity matters: first, on the initial boot for the install, and second, when DSM finally boots after install. Just because it works for the install doesn't mean it will work when the DSM flavor of Linux is initialized. If DSM boots post-install and you observe connectivity, it isn't "lost" due to instability unless you have a NIC hardware failure, which is incredibly unlikely. System instability is not a typical problem with DSM. If that is occurring, I would check 1) memory, 2) don't overclock, and 3) system an…
  21. It's not clear what's wrong without a bit more information. Take a look at this and see if it applies to you. If that doesn't solve it, post your SataPortMap and DiskIdxMap settings.
  22. The -3 indicates a patch. Look at the file sizes - one is 273MB and the other is 44MB.