flyride

Members
  • Content Count: 637
  • Joined
  • Last visited
  • Days Won: 43

flyride last won the day on January 24

flyride had the most liked content!

Community Reputation

225 Excellent

4 Followers

About flyride

  • Rank: Guru

  1. I am pretty sure the i350 onboard NIC is e1000e, so that should be OK. If you wish to prove it with another NIC, get yourself a cheap Intel CT (~$20) and try that. (A quick driver check is sketched at the end of this list.)
  2. Also be sure you are setting up Legacy BIOS boot (not UEFI) with 1.03b. See more here: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  3. I would expect that motherboard to run 1.03b and the associated DSM platform well. Make sure you are building your system and connecting with the gigabit port first. Also, try DS3615xs instead of DS3617xs; you don't need the extra core support of DS3617xs.
  4. If all your problems started after that power supply replacement, this further reinforces the importance of stable power. You seem reluctant to believe that a new power supply can be a problem (it can). For what it's worth, 13 drives x 5W is 65W, so total load shouldn't be a factor.

     In any debugging and recovery operation, the objective should be to manage the rate of change, and therefore the risk. Replacing the whole system would violate that strategy.

     Do the drive connectivity failures implicate a SAS card problem? Maybe, but a much more plausible explanation is physical connectivity or power. If you have an identical SAS card, and it is passive (no intrinsic configuration required), replacing it is a low-risk troubleshooting step. Do the failures implicate the motherboard? Maybe, if you are using on-board SATA ports, but the same plausibility test applies, and there is more variability and risk (mobo model, BIOS settings, etc.). Do the failures implicate DSM or loader stability? Not at all; DSM boots fine and is not crashing. And if you reinstall DSM, it's very likely your arrays will be destructively reconfigured. Please don't do this.

     So I'll stand by (and extend) my previous statement: if this were my system, I would change the power supply and cables first. If that doesn't solve things, maybe the SAS card, and lastly the motherboard.
  5. I can't really answer your question. Drives are going up and down. That can happen because the interface is unreliable or because the power is unreliable. A logic problem in the SAS card is far more likely to cause a total failure than an intermittent one. If it were me, I would completely replace all your SATA cables and the power supply. (A cable/power check using the kernel log and SMART is sketched at the end of this list.)
  6. Everything is going up and down right now. You can see the changed drive assignments between the last two posted mdstats. We can't do anything with this until it's stable.
  7. Your drives have reordered yet again. I know IG-88 said your controller deliberately presents them contiguously (which is problematic in itself), but if all drives are up and stable, I cannot see why that behavior would cause a reorder on reboot. I remain very wary of your hardware consistency. Look through dmesg and see if you have any hardware problems since your power-cycle boot. Run another hotswap query and see if any drives have changed state since that boot. Run another mdstat - is it still slow? (These checks are collected at the end of this list.)
  8. # mdadm --assemble --run /dev/md4 -u648fc239:67ee3f00:fa9d25fe:ef2f8cb0
     # mdadm --assemble --run /dev/md5 -uae55eeff:e6a5cc66:2609f5e0:2e2ef747
  9. # mdadm --stop /dev/md4
     # mdadm --stop /dev/md5
     # mdadm --assemble /dev/md4 -u648fc239:67ee3f00:fa9d25fe:ef2f8cb0
     # mdadm --assemble /dev/md5 -uae55eeff:e6a5cc66:2609f5e0:2e2ef747
     The first one will probably error out complaining that /dev/sdo6 is not current. We'll be able to fix that. (A forced-assembly sketch is included at the end of this list.)
  10. I don't plan to plug in the disconnected 10TB at this time.
      # mdadm --detail /dev/md5
      # mdadm --detail /dev/md4
  11. OK then, let's complete the last task and try to incorporate your second 10TB drive into the array (a post-creation check is sketched at the end of this list):
      # mdadm -Cf /dev/md2 -e1.2 -n13 -l5 --verbose --assume-clean /dev/sd[bcdefpqlmn]5 missing /dev/sdo5 /dev/sdk5 -u43699871:217306be:dc16f5e8:dcbe1b0d
      # cat /proc/mdstat
  12. To compare, we would have to fgrep "hotswap" in /var/log/disk.log, not "hotplug". Looking at your pastebin, it appears the only drive to hotplug out/in is sda, which doesn't affect us. But why that happened is still concerning. Please run it again (with hotswap) and make sure there are no array drives changing state after 2020-01-21T05:58:20+08:00. (A timestamp filter is sketched at the end of this list.)
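
For post 1 above: a quick way to confirm which driver is actually bound to the onboard NIC. This is a minimal sketch from a generic Linux shell, not DSM-specific; eth0 is a placeholder, so substitute the interface name reported by ip link.

    # ip link
    # ethtool -i eth0
    # lspci -nnk | grep -A 3 -i ethernet

ethtool -i reports the kernel driver in use (e.g. e1000e or igb), and lspci -nnk shows which driver claims each PCI Ethernet device.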
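
For post 5 above: before replacing parts, the kernel log and SMART counters can help separate cable or power trouble from a failing controller. A sketch, assuming smartctl is available; /dev/sdb is a placeholder, so repeat it for each array member.

    # dmesg | grep -iE 'link (down|up)|hard resetting|i/o error'
    # smartctl -A /dev/sdb

A rising UDMA_CRC_Error_Count in the SMART attributes usually points at cabling or connectors rather than the drive itself.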
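
For post 7 above: the three checks described there, gathered in one place. The disk.log query follows the form used in post 12.

    # dmesg | grep -iE 'error|reset|fail'
    # fgrep "hotswap" /var/log/disk.log | tail -n 20
    # cat /proc/mdstat

The first command surfaces hardware complaints since the power-cycle boot, the second shows the most recent hotswap events, and the third shows whether md is still running a slow resync.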
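
For post 9 above: if the md4 assembly refuses to start because /dev/sdo6 is behind, the usual next step is a forced assembly, which lets mdadm accept a member with a slightly stale event count. This is a sketch only; the UUID is the one from post 9, and /dev/sdX6 is a placeholder for another healthy member of md4, so compare the Events counters before forcing anything.

    # mdadm --examine /dev/sdo6 | grep -i events
    # mdadm --examine /dev/sdX6 | grep -i events
    # mdadm --stop /dev/md4
    # mdadm --assemble --force /dev/md4 -u648fc239:67ee3f00:fa9d25fe:ef2f8cb0

If the counts differ by a large amount, stop and post the output instead of forcing.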
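
For post 11 above: after re-creating md2 with --assume-clean, it's worth confirming the new metadata looks right before using the volume. A short check using only commands already in this thread:

    # mdadm --detail /dev/md2 | grep -iE 'uuid|state|raid devices'
    # cat /proc/mdstat

--detail should report the UUID given on the create line and 13 raid devices with one missing, and /proc/mdstat should show md2 active with no resync running (that is what --assume-clean is for).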
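
For post 12 above: one way to check for array-drive changes after the power-cycle boot is to filter the hotswap lines by timestamp. A sketch, assuming each line of /var/log/disk.log starts with an ISO-8601 timestamp like the one quoted above (string comparison is valid when the timezone offsets match).

    # fgrep "hotswap" /var/log/disk.log | awk '$1 >= "2020-01-21T05:58:20+08:00"'

Anything in that output other than sda events would mean an array member changed state after that boot.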