flyride

Members
  • Content Count: 1,955
  • Days Won: 98

Everything posted by flyride

  1. Sure it can. You just have to go into the BIOS during boot and set it up.
  2. Yes, it should work. You need DS3617xs to use all your threads. If you have any issues it will be with the SAS controller support, so test thoroughly. You did not mention how you have your disks configured now - are you passing through the controller, or is ESXi managing them? ESXi is a better general-purpose hypervisor than VM Manager, and any performance bottlenecks are mitigated by giving DSM access to the disk hardware (i.e. passthrough controller or RDM). If I had your combination of VM workloads (wait, I do), I would run DSM as a VM on ESXi (oh yeah, that i
  3. You're relying on VMware to virtualize and service your disk I/O mapped to a physical file. Is ESXi running from the same drive also? Always best to dedicate disk to DSM (directly managing drives is what it was optimized to do) either via passthrough or RDM.
  4. Yes, it's being worked on
  5. When you install DSM, it goes to the base filesystem (anything that is not in a /volumeX tree). Any modifications you make to that filesystem are wiped clean by a migration install, aside from the DSM settings in the GUI, which are copied over.
  6. Yes, if you boot with a fresh loader, you will be offered a migration install and DSM will be overwritten, along with any non-GUI customizations you made to the root structures. This implies you would be doing this without a backup; if that is true, it is a Bad Idea. An installation error can easily overwrite your array (at worst), or force you to repeat the install because of a compatibility/configuration problem (at best).
  7. Just FYI for those testing, probably unwise to display your serial number (even if it is generated), as it could theoretically give Synology a way to identify you/your IP/etc.
  8. The USB key is a permanent boot device. Don't remove it. Your BIOS should be set to boot only from USB.
  9. Read the question, and you'll understand the answer. Nothing has changed (choose 16-core/RAIDF1 or Quicksync/NVMe) with legacy or current products thus far.
  10. No, the RS3621xs+ supports NVMe cache only indirectly through an add-in card, and there is no current Synology device that supports all of the desired image-specific features available in either DS918+ or DS3617xs: NVMe, Quicksync, RAIDF1 and 16 threads.
  11. It is a travesty that Synology calls that a "plus" model. In any case, you may want to know this: because your volume was created with a 32-bit processor, it has a maximum size of 16TB when you expand it in the future.
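
     The 16TB ceiling can be sanity-checked with a little arithmetic: a 32-bit block index tops out at 2^32 addressable blocks, and with 4KB blocks (an assumed but typical default) that is exactly 16 TiB.

     ```python
     # 32-bit block addressing: at most 2**32 addressable blocks.
     BLOCK_SIZE = 4096      # bytes; typical filesystem default (assumed)
     MAX_BLOCKS = 2 ** 32   # what a 32-bit block index can reach

     max_bytes = MAX_BLOCKS * BLOCK_SIZE
     print(max_bytes // 2 ** 40, "TiB")   # → 16 TiB
     ```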
  12. This is a bit OT, but we get what we get. Synology is selling you a 14nm, first-gen, circa-2018 CPU as a new product. It's no mistake that older-gen hardware tends to work better. And even if a platform were not supported under kernel 4.4.x, all Syno has to do is backport the kernel mods for that particular CPU/chipset/NIC, and they couldn't care less about other new hardware. I don't think the objective of redpill or any other loader should necessarily be to be compatible with ALL hardware, particularly the newest stuff. It's great if it can work out that way, but I would consider a l
  13. https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/ This ought to help you with your research. Short answer, it should work fine.
  14. You've done what I would do. You might still try running it with two DIMMs only to see if there is any change.
  15. As long as the NIC and controller are supported baremetal (proven if you already pass them through), this will work fine and offer a "migration" install. It's always best to keep the drive order, but if the array was healthy prior to the migration, it shouldn't matter.
  16. NVMe is just a PCIe interface - there is no controller involved. So the ASUS Hyper M.2 is nothing more than a PCIe form factor translation (PCIe slot to M.2)... it doesn't do anything to enable RAID or boot or anything else. Some of the multi-NVMe cards do have some logic - a PCIe switch to enable use of more drives while economizing on PCIe lanes.
  17. The higher end SAS/RAID controller support is better on DS3617xs and DS3615xs (SOHO/prosumer vs. entry-level retail DS918+), and the xs models have RAIDF1 support when DS918+ does not. Yes, except you can't assign any disks beyond the MaxDisks limit; they won't be accessible (by design). Your example will deny access to disks on the 2nd controller. For DS3615xs/DS3617xs, MaxDisks is 12 decimal by default, so DiskIdxMap=0C causes the first controller (virtual SATA) to map beyond the slot range (hiding the loader). For DS918+, MaxDisks is 16 decimal by defa
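
     As a concrete illustration, this is the kind of loader fragment where those values are set (a sketch in Jun-style grub.cfg syntax; the exact variable names vary by loader and are an assumption here):

     ```shell
     # grub.cfg excerpt (Jun-style loader; variable names are loader-specific)
     # DiskIdxMap=0C -> first (virtual SATA) controller starts at slot 12 (0x0C),
     #                  pushing the loader disk past MaxDisks=12 so DSM hides it.
     # SataPortMap=1 -> expose one port on that first controller.
     set sata_args='DiskIdxMap=0C SataPortMap=1'
     ```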
  18. Thanks. In my own testing, I've manually created a partition structure similar to what you have done, as has @The Chief who authored the NVMe patch. You have created a simple, single-element array so there is no possibility of array maintenance. What I have also found in testing is that if there is an NVMe member in a complex (multiple member RAID1, RAID5, etc) array or SHR, an array change often causes the NVMe disk(s) to be dropped. Do you have more complex arrays with NVMe working as described?
  19. A super fast Xeon and quiet operation are usually mutually exclusive. Since there is a maximum performance level available to DSM (8 HT cores/16 threads using DS3617xs), there is usually no need for a super fast Xeon. A 4-core CPU is more than adequate to handle a completely saturated 10GbE interface.
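
     That last claim is easy to sanity-check with arithmetic (a sketch that ignores protocol framing overhead): a fully saturated 10GbE link moves at most 1.25 GB/s.

     ```python
     # 10 GbE line rate converted from bits to bytes per second
     # (framing/protocol overhead ignored for simplicity)
     line_rate_bits = 10 * 10 ** 9        # 10 gigabits per second
     line_rate_bytes = line_rate_bits / 8 # bytes per second

     print(line_rate_bytes / 10 ** 9, "GB/s")   # → 1.25 GB/s
     ```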
  20. All the patch does is allow Synology's own nvme tools to recognize nvme devices that don't exactly conform to the PCI slots of a DS918+. The base nvme support is already built into DS918+ DSM and is functional. So I do not think the patch has any impact on what you are doing. IMHO Syno does not offer NVMe array capable systems because they do not want the cheap systems competing with their expensive ones. If you don't mind, post some Disk Manager screenshots and a cat /proc/mdstat of a healthy running system with your NVMe devices.
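
     For anyone gathering that mdstat output, a quick way to see which devices back each md array is to parse the member list. The /proc/mdstat excerpt below is illustrative (a hypothetical NVMe RAID1), not output from a real system:

     ```python
     import re

     # Illustrative /proc/mdstat excerpt (hypothetical system, not real output)
     MDSTAT = """\
     Personalities : [raid1]
     md3 : active raid1 nvme0n1p1[0] nvme1n1p1[1]
           124967296 blocks super 1.2 [2/2] [UU]
     """

     def array_members(mdstat_text):
         """Map each md array name to the list of devices backing it."""
         arrays = {}
         for line in mdstat_text.splitlines():
             m = re.match(r"\s*(md\d+) : active \S+ (.+)", line)
             if m:
                 # member entries look like 'nvme0n1p1[0]'; strip the [index]
                 devices = [d.split("[")[0] for d in m.group(2).split()]
                 arrays[m.group(1)] = devices
         return arrays

     print(array_members(MDSTAT))
     # → {'md3': ['nvme0n1p1', 'nvme1n1p1']}
     ```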
  21. The original post asked for PCs/workstations, not home-made builds. I rolled my own using the U-NAS case line (4-bay and 8-bay). Handpicked fans and passive cooling on the NAS with a low-power CPU. Since fan control is problematic with DSM (BIOS only, or write your own driver/shim), picking the right fans will make a big difference.