
DANGER : Raid crashed. Help me restore my data!



Hello there!

 

I really need your help to save my data on my volume.

I have an XPEnology box on DSM 6.1 using Jun's loader 1.02b, with the disks in AHCI mode in the BIOS.
There is one volume with 4 disks in an SHR RAID (with one-disk fault tolerance).

The Overview says DANGER; the status is Crashed, with 2 failed disks.
The HDD view says that Disk 1 is in Warning, Disk 2 is Normal, Disk 3 is "Initialised, Normal" and Disk 4 is Normal.

 

I did not delete any data on the drives. I think there is a way to recover it, which is why I am contacting you.
It sounds stupid, but all my life's and my kids' videos are on those drives. I thought I was covered with one-disk fault tolerance. Lesson learned.

 

How can I recover that? Those 2 drives are now back, but not in the RAID...

 

I appreciate your help, thanks.

admin@DiskStation:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[2] sdd5[4]
      8776594944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/2] [__UU]

md1 : active raid1 sdb2[1] sdc2[0] sdd2[2]
      2097088 blocks [12/3] [UUU_________]

md0 : active raid1 sdb1[2] sdc1[0] sdd1[3]
      2490176 blocks [12/3] [U_UU________]

unused devices: <none>
admin@DiskStation:/$


 

 

[Screenshots: Storage Manager Overview showing the crashed volume, and the HDD/SSD view]


It's too bad to hear that. While waiting for others to help, I would suggest turning the NAS off if you still have it running...

Just a reminder: a NAS is not a backup. Most of my photos and videos are synced to Google Photos, as it is free and unlimited. I also have them backed up on my PC, with another copy on the NAS.


You might want to read this; it is a good way of understanding the process of recovering a RAID system:

 

https://xpenology.com/forum/topic/24525-help-save-my-55tb-shr1-or-mount-it-via-ubuntu/

 

An easy way to estimate the potential data loss is to look at the event counts (sequence numbers) on the disks. To get something you can access again, you would need to force one of the dropped disks back into the RAID set - meaning some loss of data - and then rebuild the redundancy.
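The general shape of such a forced assembly looks like this - only a sketch, do not run anything yet, and the device names are placeholders for whichever members turn out to be usable:

mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2 /dev/sdX5 /dev/sdY5 /dev/sdZ5   # placeholder member partitions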

But before doing this, you need to know what caused the problem and make sure it does not happen again; as seen in the thread above, trying to repair an unstable system is a fruitless effort.

in a "real" recovery where your date would be worth money you would clone all disks and work with copy's or image files on a proven stable hardware, professional would not try to repair on a system with unknown's

 

This will tell you more about the state of the disks:

mdadm --detail /dev/md2

mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'

Also check the log files and the S.M.A.R.T. status of the disks; you need to know how it happened.
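For example - a minimal sketch, assuming smartctl from smartmontools is available, as it usually is on DSM:

smartctl -H /dev/sda          # overall health self-assessment; repeat for each disk
smartctl -A /dev/sda          # attributes: watch Reallocated_Sector_Ct and Current_Pending_Sector
smartctl -l error /dev/sda    # the drive's own error log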

 

On 3/28/2020 at 8:03 AM, jbesclapez said:

I thought I was covered with one-disk fault tolerance. Lesson learned.

 

Are you sure? I can tell you from personal experience that even two-disk redundancy is not enough, and there can also be file system corruption.

Just to make sure: you need backups, not more redundancy.

By the way, did you ever think about other possible scenarios? Theft, fire, ...

In some cases it just takes one big USB disk to cover the most valuable data, or maybe cloud storage like this:

https://www.backblaze.com/blog/synology-cloud-backup-guide/

https://www.backblaze.com/b2/partner-synology.html

(I have followed them over the years because of their storage pod designs and disk statistics - having data in the cloud is not my thing, but it might be an option for some people.)
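On the USB-disk option: even a single rsync job to an external drive covers the essentials - a sketch, with placeholder share paths:

rsync -av /volume1/photo/ /volumeUSB1/usbshare/photo-backup/   # both paths are placeholders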

Edited by IG-88

On 3/31/2020 at 10:47 PM, IG-88 said:

This will tell you more about the state of the disks:

mdadm --detail /dev/md2

mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'

Also check the log files and the S.M.A.R.T. status of the disks; you need to know how it happened.

Hi IG-88,

 

I will try to follow the commands given by flyride (and hope he joins this thread too); they should give you an understanding of my situation. I am not that worried, because I did not mess with the data; it is just a combination of bad luck.

 

First, a bit of background.

I moved the server to another place. Then I restarted it and totally forgot that I had to plug two RJ45 cables into its Intel network card to see it on the network, because the connections were bonded for better speed. So I kept rebooting it and could not see it; my mistake was forgetting to replug both network cables. I then took the network card out, thinking it was messed up, and went back to the onboard port I had used previously. It now works like this.

While removing the network card, I broke the SATA connector of a drive - the one that is now in the "partition failed" state.

I also have a drive that is physically getting damaged, with SMART errors... I tried to repair it but got stuck at 90%.

Then I recreated a USB key with the same Jun loader, booted from it, but did not install the Synology PAT. I removed this new USB key and reverted to the previous one...

So now I can boot into the NAS, but I do not see my data.

So there are 4 drives, and here are the results of the commands:

 

 

root@DiskStation:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[2] sdd5[4]
      8776594944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/2] [__UU]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2]
      2097088 blocks [12/3] [UUU_________]

md0 : active raid1 sdb1[2] sdd1[3]
      2490176 blocks [12/2] [__UU________]

unused devices: <none>



 

root@DiskStation:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 2925531648 (2790.00 GiB 2995.74 GB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Apr  4 16:00:56 2020
          State : clean, FAILED
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : VirDiskStation:2
           UUID : 75762e2e:4629b4db:259f216e:a39c266d
         Events : 15401

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       2       8       21        2      active sync   /dev/sdb5
       4       8       53        3      active sync   /dev/sdd5

 

root@DiskStation:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sun Mar 29 11:48:30 2020
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Apr  4 15:59:32 2020
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : 147c15b1:a7d68154:3017a5a8:c86610be (local to host DiskStation)
         Events : 0.26

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       -       0        0        3      removed
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed

 

root@DiskStation:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sat Jun  4 18:58:23 2016
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Apr  4 16:17:00 2020
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : b46ca73c:a07c1c08:3017a5a8:c86610be (local to host DiskStation)
         Events : 0.2615542

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       2       8       17        2      active sync   /dev/sdb1
       3       8       49        3      active sync   /dev/sdd1
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed


 

root@DiskStation:~# ls /dev/sd*
/dev/sda  /dev/sdb1  /dev/sdb5  /dev/sdc1  /dev/sdc5  /dev/sdd1  /dev/sdd5
/dev/sdb  /dev/sdb2  /dev/sdc   /dev/sdc2  /dev/sdd   /dev/sdd2
root@DiskStation:~# ls /dev/md*
/dev/md0  /dev/md1  /dev/md2
root@DiskStation:~# ls /dev/vg*
/dev/vga_arbiter

mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status
 


          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 75762e2e:4629b4db:259f216e:a39c266d
           Name : VirDiskStation:2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : 7eee55dc:dbbf5609:e737801d:87903b6c

    Update Time : Sat Apr  4 16:00:56 2020
       Checksum : b039a921 - correct
         Events : 15401

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : ..AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 75762e2e:4629b4db:259f216e:a39c266d
           Name : VirDiskStation:2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
Recovery Offset : 0 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : 6ba575e4:53121f53:a8fe4876:173d11a9

    Update Time : Sun Mar 22 14:01:34 2020
       Checksum : 17b3f446 - correct
         Events : 15357

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 75762e2e:4629b4db:259f216e:a39c266d
           Name : VirDiskStation:2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : fb417ce4:fcdd58fb:72d35e06:9d7098b5

    Update Time : Sat Apr  4 16:00:56 2020
       Checksum : dc9d9663 - correct
         Events : 15401

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : ..AA ('A' == active, '.' == missing, 'R' == replacing)

 

And this command below does not do anything - or I don't know how to use it:

# mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'

 

What do you think is the next step?

 

Also, you are completely right; I will back up outside this NAS from now on, probably with the service you pointed me to.

Please continue helping/guiding me.

Thanks

 


Before you do anything else, heed IG-88's advice to understand what happened and hopefully determine that it won't happen again.  From what you have posted, DSM cannot see any data on disk #1 (/dev/sda). There is an incomplete Warning message that might tell us more about /dev/sda.  Also disk #3 (/dev/sdc) MIGHT have data on it but we aren't sure yet.  In order to effect a recovery, one of those two drives has to be functional and contain data.

 

So first, please investigate and report on the circumstances that caused the failure. Also, consider power-cycling the NAS and/or reseating the drive connector on disk #1. Once /dev/sda is up and running (or you are absolutely certain that it won't come up), complete the last investigation step IG-88 proposed.
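For the investigation, the kernel and system logs usually show why a disk dropped out - a sketch; exact log locations can differ between DSM versions:

dmesg | grep -iE 'sda|ata'          # kernel messages about the disk / SATA link
grep -i sda /var/log/messages       # DSM system log, if present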

  

17 hours ago, jbesclapez said:

And this command below does not do anything - or I don't know how to use it:

# mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'

 

You only have four drives.  So adapt the last command as follows:

# mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'

 

Edited by flyride

I already gave you commands matching your system above (abcd).

The examine you did from the other thread does not include "a", so it is missing the /dev/sda information.

 

So please execute this to get information about the state of the disks:

mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'

It also seems you cut off some output lines at the beginning of this command (the part where it says /dev/sdb5):

mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status

Please be careful - sloppiness can have heavy consequences. Be very careful when doing this kind of thing; some of the commands cannot easily be undone.

It is important to be precise.

From what we have now it would be possible to recover the RAID with /dev/sdc, but let's see what /dev/sda has to offer.

Maybe nothing at all for /dev/sda, because

root@DiskStation:~# ls /dev/sd*

did not show any partitions for /dev/sda; there should be /dev/sda1, /dev/sda2 and /dev/sda5.

But what we have from /dev/sdc might be enough.

The loss would be on the order of 44 x 64k chunks (the event counts differ by 15401 - 15357 = 44), i.e. about 2.75 MByte.
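A quick way to double-check whether /dev/sda still has any partition table at all - a sketch, assuming fdisk is present as it normally is on DSM:

fdisk -l /dev/sda    # a healthy member should show sda1 (system), sda2 (swap) and sda5 (data)
ls /dev/sda*         # compare with /dev/sdb*, which lists sdb1, sdb2 and sdb5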

 

Edited by IG-88

Thanks to both of you for stepping up and helping me like this. Really appreciated. You guessed from what I wrote that I do not always understand what I am doing, as I am not experienced in this, so I will try my best! I also had difficulty copying the output from raid.status with vi. That might explain why the message got cut.

 

Here are the commands IG-88 originally wanted me to run:
 

root@DiskStation:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 2925531648 (2790.00 GiB 2995.74 GB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Apr  4 19:33:15 2020
          State : clean, FAILED
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : VirDiskStation:2
           UUID : 75762e2e:4629b4db:259f216e:a39c266d
         Events : 15405

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       2       8       21        2      active sync   /dev/sdb5
       4       8       53        3      active sync   /dev/sdd5
root@DiskStation:~# mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'
/dev/sdb5:
         Events : 15405
/dev/sdc5:
         Events : 15357
/dev/sdd5:
         Events : 15405

Here is the second one:

root@DiskStation:~# vi /tmp/raid.status
/dev/sdb5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 75762e2e:4629b4db:259f216e:a39c266d
           Name : VirDiskStation:2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : 7eee55dc:dbbf5609:e737801d:87903b6c

    Update Time : Sat Apr  4 19:33:15 2020
       Checksum : b039dae8 - correct
         Events : 15405

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : ..AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 75762e2e:4629b4db:259f216e:a39c266d
           Name : VirDiskStation:2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
Recovery Offset : 0 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : 6ba575e4:53121f53:a8fe4876:173d11a9

    Update Time : Sun Mar 22 14:01:34 2020
       Checksum : 17b3f446 - correct
         Events : 15357

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 75762e2e:4629b4db:259f216e:a39c266d
           Name : VirDiskStation:2
  Creation Time : Mon May 13 08:39:01 2013
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851063680 (2790.00 GiB 2995.74 GB)
     Array Size : 8776594944 (8370.01 GiB 8987.23 GB)
  Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : fb417ce4:fcdd58fb:72d35e06:9d7098b5

    Update Time : Sat Apr  4 19:33:15 2020
       Checksum : dc9dc82a - correct
         Events : 15405

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : ..AA ('A' == active, '.' == missing, 'R' == replacing)
"/tmp/raid.status" 85L, 2674C

And the last command, to be sure:

root@DiskStation:~# ls /dev/sd*
/dev/sda  /dev/sdb  /dev/sdb1  /dev/sdb2  /dev/sdb5  /dev/sdc  /dev/sdc1  /dev/sdc2  /dev/sdc5  /dev/sdd  /dev/sdd1  /dev/sdd2  /dev/sdd5

 


13 hours ago, flyride said:

For good news, you have a simple RAID (not SHR). The array you are concerned with is /dev/md2. Anything relating to /dev/vg* or managing LVM does not apply to you.

 

What is the next step, guys?


11 hours ago, jbesclapez said:

I don't know if it is important, but as a reminder: I did an extended SMART test on the "broken" disk - it is now failing... slowly...

 

When you say something like this, be very specific as to which disk it is.  Is it disk #1 or disk #3? (please answer)
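One way to tell which /dev/sdX corresponds to which DSM disk number is to compare serial numbers - a sketch, assuming smartctl is available:

smartctl -i /dev/sda | grep -i serial    # repeat for /dev/sdb ... /dev/sdd and compare
                                         # with the serial DSM shows for Disk 1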

 

12 hours ago, jbesclapez said:

root@DiskStation:~# mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'

/dev/sdb5: Events : 15405

/dev/sdc5: Events : 15357

/dev/sdd5: Events : 15405

 

This is not too bad; there might be some modest data corruption (which IG-88 quantified above), but most files should be OK.

 

Do you know if your filesystem is btrfs or ext4?  (please answer)
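If it is not obvious from Storage Manager, a minimal check from the shell - assuming the usual Linux tools are present - would be:

cat /etc/fstab          # DSM records each volume with its filesystem type (ext4 or btrfs)
mount | grep volume     # shows the type for any currently mounted volume
blkid /dev/md2          # reads the filesystem signature directly, if blkid is installed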

 

And please answer the two questions I asked in the first place.

 

13 hours ago, flyride said:

There is an incomplete Warning message that might tell us more about /dev/sda.

 

13 hours ago, flyride said:

So first, please investigate and report on the circumstances that caused the failure.

 


Hi Flyride,

1 hour ago, flyride said:

 

When you say something like this, be very specific as to which disk it is. Is it disk #1 or disk #3? (please answer)

Do you know if your filesystem is btrfs or ext4? (please answer)

And please answer the two questions I asked in the first place.

Sorry for not being precise enough. I am in a learning process now and, to be honest, it is not that simple. :-)
 

So, you asked me to be specific about which disk I ran the SMART test on. The problem is that all I can get from DSM is the disk model and serial: the model is WD30EZRX-00MMMB0 and the serial is WD-WCAWZ2266285. I tried the hwinfo --disk, sudo blkid and lsblk -f commands, but they do not work. I do not know how to recover the UUID, which I think is what you need, no? What I am sure of is that DSM lists it as Disk 1.

 

Regarding the filesystem, I had to do some research. The RAID type was SHR, but I cannot find the filesystem type, I am sorry. Do you know a command to find that?

 

Regarding the warning on Disk 1: it is about the SMART test, and it says Warning, Failing (see below).

 

Regarding what caused the failure: it is probably because I broke the SATA cable when I moved the server, so the drive was lost. I tried to repair the cable and plugged it back in. The drive was then detected again with a "system partition failed" status, and I started a repair - but I think the repair stopped in the middle, and I restarted it.

 

Thanks, Flyride, for spending time helping a stranger! Really appreciated. Can I get you a drink? ;-)

 

[Screenshot: DSM Health Info for Disk 1 - SMART status: Warning, Failing]


Well, it looks like drive 1 is not worth trying to use. Lots of bad sectors and no partitions with obvious data on them.

 

Let's try restarting your array in degraded mode using the remaining drives.

# mdadm --force --assemble /dev/md2 /dev/sd[bcd]5

Post the output exactly.


5 minutes ago, flyride said:

Well, it looks like drive 1 is not worth trying to use. Lots of bad sectors and no partitions with obvious data on them.

 

Let's try restarting your array in degraded mode using the remaining drives.


# mdadm --force --assemble /dev/md2 /dev/sd[bcd]5

Post the output exactly.

I do not get anything from that command. What am I doing wrong?

 

[Screenshot: terminal showing the command entered with a leading '#' and no output]


26 minutes ago, jbesclapez said:

I do not get anything from that command. What am I doing wrong?

 



The pound sign I typed was to represent the operating system prompt and so you knew the command was to be run with elevated privilege. When you entered the command with a preceding pound sign, you made it into a comment and exactly nothing was done. 
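In other words, illustrative only:

# mdadm --force --assemble /dev/md2 /dev/sd[bcd]5    (with the leading '#', the shell treats the whole line as a comment and nothing runs)
mdadm --force --assemble /dev/md2 /dev/sd[bcd]5      # typed as root without the leading '#', the command is actually executed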
 

Please do not click that repair button right now. It won’t be helpful. 


1 minute ago, flyride said:


The pound sign I typed was to represent the operating system prompt and so you knew the command was to be run with elevated privilege. When you entered the command with a preceding pound sign, you made it into a comment and exactly nothing was done. 
 

Please do not click that repair button right now. It won’t be helpful. 

Shame on me. I should have noticed that. Sorry.

 

Update

 

root@DiskStation:~# mdadm --force --assemble /dev/md2 /dev/sd[bcd]5
mdadm: --force does not set the mode, and so cannot be the first option.

 



Should it be --assemble first?

 

mdadm  --assemble --force /dev/md2 /dev/sd[bcd]5


 



OK, I put --assemble first, and it seemed to be OK:

 

root@DiskStation:~# mdadm  --assemble --force /dev/md2 /dev/sd[bcd]5
mdadm: /dev/sdb5 is busy - skipping
mdadm: /dev/sdd5 is busy - skipping
mdadm: Found some drive for an array that is already active: /dev/md2
mdadm: giving up.


And then:

root@DiskStation:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[2] sdd5[4]
      8776594944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/2] [__UU]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2]
      2097088 blocks [12/3] [UUU_________]

md0 : active raid1 sdb1[2] sdd1[3]
      2490176 blocks [12/2] [__UU________]

unused devices: <none>

 


8 minutes ago, flyride said:

Yes, it's hard to do all this remotely and from memory.

 


# mdadm --stop /dev/md2

then


# mdadm --assemble --force /dev/md2 /dev/sd[bcd]5

Sorry for the false start.

 

Good to hear that. At least it proves I have done my homework, LOL.

 

After the mdadm --stop, I get this:

root@DiskStation:~# mdadm --assemble --force /dev/md2 /dev/sd[bcd]5
mdadm: /dev/md2 assembled from 2 drives - not enough to start the array.

Is that bad? Now I will go and do some research about it... Scary.
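The usual next check here would be to re-run the earlier diagnostics and see whether /dev/sdc5 was included this time - the same commands as before:

cat /proc/mdstat
mdadm --examine /dev/sd[bcd]5 | egrep 'Event|/dev/sd'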
 

 

 

 

 

