
[HOWTO] Repair a clean volume that stays crashed



Hi,

I followed these two tutorials:

https://www.dsebastien.net/2015/05/19/recovering-a-raid-array-in-e-state-on-a-synology-nas/

https://blogs.dbcloudsvc.com/life/fixed-the-crashed-basic-volume-of-synology-nas/

 

and applied them to my crashed Basic (single-HDD) volume:

 

# Stop all NAS services except SSH

$ sudo syno_poweroff_task -d
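Note: syno_poweroff_task is Synology's helper that stops the DSM services and unmounts the data volumes while keeping the SSH session alive, so nothing is left holding the array open. To double-check the volume is really unmounted before going further (plain mount, nothing Synology-specific):

$ mount | grep md5

No output means it is unmounted.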

 

$ sudo mdadm --detail /dev/md5

/dev/md5:
        Version : 1.2
  Creation Time : Mon May  4 20:54:01 2020
     Raid Level : raid1
     Array Size : 5331334464 (5084.36 GiB 5459.29 GB)
  Used Dev Size : 5331334464 (5084.36 GiB 5459.29 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun May 10 00:40:01 2020
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : SynoNas:5
           UUID : 6bbaadf9:fccd6e69:66518019:2ee66ad8
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       35        0      faulty active sync   /dev/sdc3
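Note the contradiction here: the array State is "clean, FAILED" and the only member shows "faulty active sync". In other words the data and the superblock are fine (clean), but the kernel has latched an error flag on the device, and that flag is what keeps DSM reporting the volume as crashed. Before doing anything destructive, a read-only check that the filesystem superblock is still readable can be reassuring (assuming the volume is ext4, as DSM Basic volumes usually are):

$ sudo dumpe2fs -h /dev/md5 | head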

 

 

 

 

$ sudo mdadm --examine /dev/sdc3

/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6bbaadf9:fccd6e69:66518019:2ee66ad8
           Name : SynoNas:5
  Creation Time : Mon May  4 20:54:01 2020
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 10662668960 (5084.36 GiB 5459.29 GB)
     Array Size : 5331334464 (5084.36 GiB 5459.29 GB)
  Used Dev Size : 10662668928 (5084.36 GiB 5459.29 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 524c0a00:3ef4e283:2d9a4151:0764a8ed

    Update Time : Sun May 10 00:40:01 2020
       Checksum : 1f2a1c29 - correct
         Events : 22
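The values that matter before recreating anything are the Array UUID, the metadata Version (1.2) and the Data Offset (2048 sectors): recreating with the same metadata version writes the new superblock at the same offset, so the filesystem behind it stays untouched. It can't hurt to keep a copy of this output somewhere off the data volume first (the path is just an example):

$ sudo mdadm --examine /dev/sdc3 > /root/sdc3-superblock.txt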

 

$ cat /proc/mdstat

md5 : active raid1 sdc3[0](E)
      5331334464 blocks super 1.2 [1/1] [E]
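The (E) marker is not standard mdadm output; it is Synology's patched md driver flagging the member as being in an error state, the same "E state" the first tutorial above refers to. A quick way to spot every affected array at once:

$ grep '(E)' /proc/mdstat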

 

$ sudo mdadm --stop /dev/md5

mdadm: stopped /dev/md5

 

$ sudo mdadm -Cf /dev/md5 -e1.2 -n1 -l1 /dev/sdc3 -u6bbaadf9:fccd6e69:66518019:2ee66ad8

 

mdadm: /dev/sdc3 appears to be part of a raid array:
       level=raid1 devices=1 ctime=Mon May  4 20:54:01 2020
Continue creating array? y
mdadm: array /dev/md5 started.
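Recreating over the existing member with identical parameters (raid1, one device, metadata 1.2) and the original UUID only rewrites the superblock; the data offset is unchanged, so the filesystem itself is never touched. Worth verifying the UUID really came out the same before moving on:

$ sudo mdadm --detail /dev/md5 | grep UUID

and, as the next mdstat shows, the (E) flag is gone.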

 

admin@SynoJeux:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md5 : active raid1 sdc3[0]
      5331334464 blocks super 1.2 [1/1]

 

$ sudo e2fsck -pvf -C0 /dev/md5
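For reference on the e2fsck options: -p automatically repairs whatever is safe to repair, -v is verbose, -f forces a check even if the filesystem is flagged clean, and -C0 displays a progress bar. Once it comes back clean, a reboot restarts all the services stopped earlier and remounts the volume:

$ sudo reboot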

 

and voilaaaaaaa :-D

 

 



2 months later...

Thank you very much! 👍🤟 I had the same issue and your tutorial helped me resolve it! It also gave me a better understanding of how the RAID operates.

 

The only thing I changed was the create command:

sudo mdadm --create --force /dev/md4 --level=1 --metadata=1.2 --raid-devices=1 /dev/sde3

written out with the long-form option names, for better readability 😊
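For anyone mapping the two forms: -C is --create, -f is --force, -e is --metadata, -n is --raid-devices and -l is --level. One difference: this long version leaves out the --uuid option, so mdadm generates a brand-new array UUID instead of reusing the old one. That clearly worked here, but to keep the recreated array's original identity you can add it back (replace the placeholder with the Array UUID from your own mdadm --examine output):

sudo mdadm --create --force /dev/md4 --level=1 --metadata=1.2 --raid-devices=1 --uuid=<your-original-array-uuid> /dev/sde3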

