XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 12/26/2020 in all areas

  1. That risk is inherent to the way you chose to set up your system: with RAID0, the normal outcome of a non-working or removed disk is a destroyed array and a restore from backup. Anything beyond that is optional, depends on other factors, and may not be 100% predictable.

     Aside from the earlier comments about reinserting the mSATA drive to see if the error is still there, I would suggest Clonezilla for cloning: it's free, you get ready-to-use boot media, and as long as the new drive is not smaller it will do the job.

     I think mdadm would not mind a clone of a disk on new (disk) hardware, but DSM could interfere. I have never tried this scenario; my guess is that it should work in theory. Since your old drive is still in working condition, if the clone does not do its job as intended the RAID0 set should simply not start; put the old drive back and the set should start again, and you can look in the logs to see what went wrong. Keep in mind that once you have started the RAID0 set with the new drive, the old drive becomes invalid for that set.
    1 point
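The "did the array start after the swap" check above can be sketched like this. It is a minimal illustration, not from the post: the device names (`md2`, `sda3`, `sdb3`) and the sample `/proc/mdstat` line are assumptions, and on a real box you would read `/proc/mdstat` and run `mdadm --detail` as root instead of using a sample string.

```shell
# Illustrative /proc/mdstat line for a healthy two-disk RAID0
# (device and array names are assumptions, not from the post):
sample='md2 : active raid0 sda3[0] sdb3[1]'

# Quick check: does the array line report "active raid0"?
check_raid0() {
  echo "$1" | grep -q 'active raid0' \
    && echo "raid0 assembled" \
    || echo "raid0 NOT assembled"
}

check_raid0 "$sample"
# On a real system: check_raid0 "$(grep '^md2' /proc/mdstat)"
# Other useful commands after the swap (run as root; names are assumptions):
#   mdadm --detail /dev/md2     # member list and array state
#   dmesg | grep -i raid0       # why assembly failed, if it did
```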
  2. A rising drive reconnection count is not an internal disk error; it usually points to a bad SATA cable or a dirty connector. In my opinion that drive is not dying and I wouldn't swap it out.
    1 point
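One way to back up the cable-vs-disk diagnosis above is SMART data: UDMA CRC errors climb with bad cables or connectors, while reallocated sectors mean the disk itself is remapping failing media. A minimal sketch, assuming the device name and using an illustrative two-line excerpt in place of real `smartctl` output:

```shell
# On a real box (device name is an assumption):
#   smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|UDMA_CRC_Error_Count'
# Illustrative excerpt that would suggest a cable problem, not a dying disk:
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       37'

# Last column of each attribute line is the raw value.
realloc=$(echo "$sample" | awk '/Reallocated_Sector_Ct/ {print $NF}')
crc=$(echo "$sample" | awk '/UDMA_CRC_Error_Count/ {print $NF}')

# CRC errors with zero reallocated sectors fits the bad-cable theory.
echo "reallocated sectors: $realloc, CRC errors: $crc"
```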
  3. Hi, on DSM 6.2.2 I got this:
     sed: -e expression #1, char 40: unterminated `s' command
     sed: -e expression #1, char 94: unterminated `s' command
     Any idea? Thanks!
    1 point
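For what it's worth, sed emits "unterminated `s' command" when an `s///` expression ends before its closing delimiter. A common way this happens in scripts is an expression built from a shell variable that turns out to contain a newline (or that leaves the trailing delimiter missing). A minimal sketch of the failure mode and the fix; the variable names are illustrative, not taken from whatever script produced the error on DSM 6.2.2:

```shell
# Broken: no closing delimiter after the replacement text.
echo a | sed -e "s/a/b" 2>&1 || true    # -> ... unterminated `s' command

# The same error appears when a spliced-in variable holds a newline,
# because the newline cuts the s command short (illustrative variable):
new=$(printf 'b\nc')
echo a | sed -e "s/a/$new/" 2>&1 || true

# Fixed: a complete expression with a proper closing delimiter.
echo a | sed -e "s/a/b/"
```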