XPEnology Community

Restore degraded SHR-Array





I have a Server 2012 R2 machine running XPEnoboot_DS3615xs_5.1-5022.3 in a Hyper-V VM. It uses three physical hard disk drives as an SHR RAID volume (which can tolerate the failure of one disk). This worked fine for about a year, but then one of the WD Red 3TB drives broke. I got a replacement under warranty, a newer model (WDC WD30EFRX-68EUZN0) with the same capacity. Now I want to repair my volume as described in the wiki: https://www.synology.com/en-global/know ... oup_repair


But I can't make this work. When I follow those instructions, DSM shows a loading screen for a few seconds. Then I get a notification in the notification bar at the top saying:


Volume 1 on MyNAS has entered degraded mode [2/3]


Since then there has been no I/O on those disks any more. After refreshing Storage Manager, I see no status like "repairing volume"; it shows the volume as degraded, just as it was before I started the repair process. I tried different things, including using a 4TB Seagate instead of the 3TB WD to repair the array, because my research turned up reports that some DSM users had trouble with the WD30EFRX a while ago. But no luck: both the WD and the Seagate give the same result as described above.


So I did some research into which software DSM uses internally to build the software RAID. It seems they use mdadm, and the SHR here is a RAID5. But I also read some vague information that Synology modified the RAID layout, so it is not quite a normal RAID5. That makes me a bit afraid of breaking something. In a normal RAID5 created with mdadm I could mark the removed drive as failed and add a new one - at least as I understand it. I have no experience building software RAIDs yet.
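For a plain mdadm RAID5 (i.e. ignoring whatever Synology layers on top), the generic replacement procedure looks roughly like the sketch below. The device names (/dev/sdf, /dev/sdh, partition 5) are assumptions matching the mdadm output further down, not verified against this system; it defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of the generic mdadm replacement procedure for a plain RAID5.
# CAUTION: the device names (/dev/md2, /dev/sdf, /dev/sdh) are assumptions;
# check your own layout with `fdisk -l` and `cat /proc/mdstat` first.
# Synology's SHR can layer LVM on top of mdadm, so treat this as the
# standard mdadm way, not as DSM's own repair procedure.

ARRAY=/dev/md2        # the degraded array (from `mdadm --detail`)
GOOD_DISK=/dev/sdf    # a surviving member, used as partition template
NEW_DISK=/dev/sdh     # assumed name of the replacement drive

# DRY_RUN defaults to 1: commands are printed, not executed.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. Copy the partition table of a healthy member to the new disk.
run sh -c "sfdisk -d $GOOD_DISK | sfdisk $NEW_DISK"

# 2. Add the matching data partition to the degraded array; md takes it
#    in as a spare and starts rebuilding automatically.
run mdadm --manage "$ARRAY" --add "${NEW_DISK}5"

# 3. Watch the rebuild progress.
run cat /proc/mdstat
```

Set DRY_RUN=0 only once you have confirmed every device name against your own system.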


Is the problem itself a bug in XPEnology? I don't think it can exist in DSM itself, because it seems to be a general issue with the OS, not with the hard disk drives.

At the moment, the workaround causing the least damage is to format the 4TB Seagate as a new volume and move all of my shared folders onto it. Then I would delete the volume containing the three-disk array with one disk missing, and move everything back.


But I would prefer a cleaner solution, also for the future, so any help is welcome. To give you some more details, here is the output of mdadm for the degraded array:


MyNAS> mdadm --detail /dev/md2
       Version : 1.2
 Creation Time : Sun Apr  3 16:28:25 2016
    Raid Level : raid5
    Array Size : 5850870528 (5579.83 GiB 5991.29 GB)
 Used Dev Size : 2925435264 (2789.91 GiB 2995.65 GB)
  Raid Devices : 3
 Total Devices : 2
   Persistence : Superblock is persistent

   Update Time : Sat Apr 30 23:07:41 2016
         State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 64K

          Name : MyNAS:2  (local to host MyNAS)
          UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
        Events : 74640

   Number   Major   Minor   RaidDevice State
      0       8       85        0      active sync   /dev/sdf5
      1       0        0        1      removed
      2       8       99        2      active sync   /dev/sdg3


As you can see, the second slot (RaidDevice 1, shown as "removed") is the broken drive, which I want to replace with the new one.
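One way to tell whether a rebuild actually started is to look for a "recovery" line in /proc/mdstat. The excerpt below is hypothetical (invented values, not taken from this system) and just shows how to pull the progress percentage out of that line:

```shell
# Hypothetical /proc/mdstat excerpt during a RAID5 rebuild; on a real
# system you would read it with `cat /proc/mdstat` instead.
sample='md2 : active raid5 sdh5[3] sdg3[2] sdf5[0]
      5850870528 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [=====>...............]  recovery = 28.7% (839/2925) finish=312.4min'

# Print just the recovery percentage from the recovery line.
printf '%s\n' "$sample" | awk '/recovery/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }'
```

If no recovery line ever appears after adding the new disk, the array never accepted it as a spare, which matches the "no I/O" symptom described above.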


If your data is accessible, it might be far easier to back it up to an external device (if you have spare hardware, maybe a 'test' XPE box, then you could back up your config and apps too :smile: ), then delete the volume and recreate it from scratch with the 3 drives.
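If you go the backup route, something along these lines would stream the shares onto an externally mounted disk while preserving permissions. The paths are assumptions (adjust SRC/DEST to your own volume and mount point):

```shell
#!/bin/sh
# Sketch: copy all shares to an externally mounted disk before deleting
# the volume. SRC and DEST are assumptions -- adjust to your system.
SRC=/volume1                        # the degraded SHR volume
DEST=/volumeUSB1/usbshare/backup    # assumed external mount point

copy_tree() {
    # Stream a directory tree through tar to keep ownership/permissions.
    mkdir -p "$2"
    tar -C "$1" -cf - . | tar -C "$2" -xf -
}

# Only run when the source volume actually exists.
if [ -d "$SRC" ]; then
    copy_tree "$SRC" "$DEST"
fi
```

Verify the copy (e.g. compare file counts) before deleting and recreating the volume.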

