flyride

Posts posted by flyride

  1. 1 hour ago, wreslrstud said:

    I would be more inclined to see if we can develop a loader for the Rackstation branded versions. It looks like some of them support a higher core count which may be valuable to us that would like to run more than 8 cores. It looks like the RS3621xs+ can support up to 16 threads which I believe is more than what images such as the 918 support.

     

    DS3617xs offers 16 threads to you now.

  2. 4 hours ago, trek102 said:

    Synology DS212+, 0.5GB Ram, Arm Processor, Single core, Marvell Kirkwood 88F6282

     

    It is a travesty that Synology calls that a "plus" model.  In any case, you may want to know this: because your volume was created with a 32-bit processor, it has a maximum size of 16TB when you expand it in the future.

  3. 4 hours ago, rok1 said:

    Not sure what 7.0 will give us aside from some minor updates aesthetically and functionally. DSM 7 for my 1621+ still is only on kernel 4.4.180. To be able to use XPE 918+ with more current hardware we're going to need a newer kernel.

    This is a bit OT, but we get what we get.  Synology is selling you a 14nm, first-gen, circa-2018 CPU as a new product.  It's no accident that older-generation hardware tends to work better.  And even if a platform were not supported under kernel 4.4.x, all Syno has to do is backport the kernel mods for that particular CPU/chipset/NIC; they couldn't care less about other new hardware.

     

    I don't think the objective of redpill or any other loader should necessarily be compatibility with ALL hardware, particularly the newest stuff.  It's great if it works out that way, but I would consider a loader development effort a success if it could run on LIKE hardware, particularly if it can be emulated via a virtual machine.

  4. 13 hours ago, unmesh said:

    Or is it sufficient to create a new USB stick with the same version of bootloader, edit grub.cfg with the new VID:PID and the MAC address from the other system, and "reinstall" the appropriate version of DSM using the old credentials to the hard drives (that already have DSM on them)?

     

    As long as the NIC and storage controller are supported on baremetal (already proven if you are currently passing them through), this will work fine and result in a "migration" install.
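
    For reference, the relevant lines near the top of grub.cfg in Jun's loader look something like this (the values are placeholders, not ones to copy - substitute the VID/PID of the new USB stick and the MAC you want to carry over):

    set vid=0xXXXX
    set pid=0xXXXX
    set mac1=XXXXXXXXXXXX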

     

    13 hours ago, unmesh said:

    Would I have to be concerned about the order of the drives on the new SATA controller?

     

    Always best to keep the order, but if the array was healthy prior to the migration, it shouldn't matter.

  5. NVMe is just a PCIe interface - there is no controller involved.  So the ASUS Hyper M.2 is nothing more than a PCIe form factor translation (PCIe slot to M.2)... it doesn't do anything to enable RAID or boot or anything else.

     

    Some of the multi-NVMe cards do have some logic - a PCIe switch to enable use of more drives while economizing on PCIe lanes.

    Higher-end SAS/RAID controller support is better on DS3617xs and DS3615xs (SOHO/prosumer platforms vs. the entry-level retail DS918+), and the xs models have RAIDF1 support, which DS918+ does not.

     

    4 hours ago, katbyte said:

    0C10 means 1st controller to disk 12, 2nd to disk 17 and without adding any more values the rest start numbering at disk 1? 

     

    Yes, except that you can't assign any disks beyond the MaxDisks limit; they won't be accessible (by design).  Your example will deny access to disks on the 2nd controller.

     

    For DS3615xs/DS3617xs, MaxDisks is 12 decimal by default, so DiskIdxMap=0C causes the first controller (the virtual SATA) to map beyond the slot range, hiding the loader.

    For DS918+, MaxDisks is 16 decimal by default (via Jun's inline patch), so DiskIdxMap=10 causes the first controller to map beyond the slot range, hiding the loader.
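
    To make this concrete, a hypothetical two-controller DS3615xs/DS3617xs box (the port counts below are only assumptions for the example) might use a grub.cfg line like:

    set sata_args='DiskIdxMap=0C00 SataPortMap=14'

    DiskIdxMap=0C00 starts the first (loader) controller at disk index 0x0C, beyond the 12-slot limit, and the second controller at index 0x00, i.e. disk 1; SataPortMap=14 declares 1 port on the first controller and 4 on the second.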

  7. Thanks.  In my own testing, I've manually created a partition structure similar to what you have done, as has @The Chief who authored the NVMe patch.  You have created a simple, single-element array so there is no possibility of array maintenance.

     

    What I have also found in testing is that if there is an NVMe member in a complex (multiple member RAID1, RAID5, etc) array or SHR, an array change often causes the NVMe disk(s) to be dropped.  Do you have more complex arrays with NVMe working as described?

    A super-fast Xeon and quiet operation are usually mutually exclusive.  Since there is a maximum performance level usable by DSM (8 HT cores/16 threads using DS3617xs), there is usually no need for a super-fast Xeon.  A 4-core CPU is more than adequate to handle a completely saturated 10GbE interface.

    All the patch does is allow Synology's own NVMe tools to recognize NVMe devices that don't exactly conform to the PCIe slots of a DS918+.

    The base NVMe support is already built into DS918+ DSM and is functional.  So I do not think the patch has any impact on what you are doing.

     

    IMHO Syno does not offer NVMe array capable systems because they do not want the cheap systems competing with their expensive ones.

     

    If you don't mind, post some Disk Manager screenshots and a cat /proc/mdstat of a healthy running system with your NVMe devices.
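
    If it helps, this is roughly what I'd check over SSH first (standard Linux commands available from the DSM shell):

    ls /dev/nvme*        # the kernel device nodes should be present even without the patch
    cat /proc/mdstat     # lists the md arrays, including any with NVMe members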

    The original post asked for PCs/workstations, not home-built systems.

     

    I rolled my own using the U-NAS case line (4 bay and 8 bay).  Handpicked fans and passive cooling on the NAS with a low power CPU.

     

    Since fan control is problematic with DSM (BIOS only, or write your own driver/shim), picking the right fans will make a big difference.

    SataPortMap=065 will break your system just as surely as SataPortMap=0 will.  SataPortMap=1 should work fine unless you are running out of slots due to very high port-density controllers.

     

    Is your boot loader disk set to SATA0:0 (it should be)?

     

    If you are really missing DiskIdxMap in the grub string, that is your main issue.  If you are running DS3615xs/DS3617xs, set DiskIdxMap=0C; for DS918+, set DiskIdxMap=10.

     

    It does look like you are running DS918+, however (EDIT: definitely DS918+, as it is visible on your boot screen).  The grub configuration on DS918+ is not really ideal for multiple-controller systems; most use DS3617xs for this.
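
    If you stay on DS918+, the sata_args line in grub.cfg would end up looking something like this (the SataPortMap value is only an example for a single visible controller - adjust it to your hardware):

    set sata_args='DiskIdxMap=10 SataPortMap=1'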

  12. 3 minutes ago, nadiva said:

    Since formatted with synopartition NVME acts exactly like other arrays, it has the the same small SYNO partition, and every UI/CLI disk related command works on it, from creating volumes, monitoring, trimming, replicating, share transfers back and forth. It had too much time to prove itself, and became the most reliable drive with highest availability along with SSD (even HDD RAID had to be rebuilt for no reason, just internal controller hiccup). Not bad for a cheap external PCI3x1-4. Once NVMEs are cheap, i will build big arrays from PCI5 NVMEs in order to utilize multi-40+gbit NICs :) In future, LAN if not WAN connection speeds must exceed local highend PC speeds. Not with official Syno boxes thou - their pathetic hardware is good for archaic 100Mbit samba only, i think they even use PCI2, that's why their UI pretends not to see NVME for volume creation, officially claiming lack of support "because of heat problems":)

     

    So let me understand: you are manually creating partitions on /dev/nvmeXn1, they carry proper NVMe nomenclature (i.e. /dev/nvme0n1p1), and they behave as described above?

     

    15 hours ago, nadiva said:

    Once you do the patch, you already win, then you just publish it into MD array to make it visible to DSM

     

    Why do you even need the patch then?  I/O support already exists without the patch; the patch only matters for the cache utilities.
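
    For anyone following along, the sequence being described appears to be roughly this (a sketch only - the synopartition layout code, device names and single-member RAID1 layout are assumptions on my part, and as noted elsewhere in the thread, array changes on such a setup carry risk):

    synopartition --part /dev/nvme0n1 12
    mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/nvme0n1p3
    cat /proc/mdstat

    The first command lays down the standard Synology partition scheme, the second builds a single-member array on the data partition, and the last confirms the new md device exists; the UI can then reportedly work with it as described above.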

  13. 1 hour ago, nadiva said:

    publish it into MD array to make it visible to DSM. Then do whatever you like in UI, e.g. move the shares to it.

     

    Be careful with this.  Any MD event initiated by the UI will probably damage the integrity of an array with an NVMe member.

  14. A few comments:

    1. There are two times when network connectivity matters - first, on the initial boot for the install, and second, when DSM finally boots after install.  Just because it works for install doesn't mean it will work when the DSM flavor of Linux is initialized.
       
    2. If DSM boots post-install and you observe connectivity, it isn't "lost" due to instability unless you have a NIC hardware failure, which is incredibly unlikely.
       
    3. System instability is not a typical problem with DSM.  If that is occurring, I would check 1) the memory, 2) that you aren't overclocking, and 3) system and CPU cooling.

    Now this:

     

    5 hours ago, asheenlevrai said:

    machineB (instability issue reported here)

    DS3617xs 1.03b

    MB :  Asus p8z77-m (onboard NIC disabled in BIOS)

    CPU : i7 3770k

    RAM : 4x 4GB DDR3 1066MHz

     

    In your other thread, you didn't post the DIMM configuration, but I note that this is the only one of the four machines that has four DIMMs.  Most desktop systems are less stable with 4 DIMMs instead of 2, and may need a voltage bump or more conservative speed/timing settings to remain stable.  Have you run hardware/memory/stress tests on this box?  Have you tried pulling two of the DIMMs to see if it is then stable?  No idea if this is your issue, but never assume the hardware is working correctly, especially if it is repurposed/old.

     

    5 hours ago, asheenlevrai said:

    I don't understand how on earth I am able to use this quad-port card on machineB. What is different with this HW compared to the 3 other rigs?

     

    No real clue here.  Make sure you are plugging it into a slot that is actually wired for x4 (many motherboards have long slots wired for x1 or x2 to save PCIe lanes).  Did you add extra.lzma on the system that works?  Also, you have two different DSM platforms in play (DS3617xs and DS918+).  Nothing wrong with that, but it doubles the variability when troubleshooting.