XPEnology Community

Volume crashed after reboot


costyy


I had a volume crash after my system SSD went bad. I have a RAID 5 volume composed of 10x 4TB drives; one shows as initialized and another as not initialized. I installed Ubuntu on another machine and got the messages in the attached pics. Of the 10 drives, 9 have the Linux partitions on them and the 10th shows only free space. Can I save the data from the drives, or is it all lost?

WhatsApp Image 2022-04-03 at 1.31.55 PM (1).jpeg

WhatsApp Image 2022-04-03 at 1.31.55 PM.jpeg

Edited by costyy

7 hours ago, costyy said:

I had a volume crash after my system SSD went bad. I have a RAID 5 volume composed of 10x 4TB drives

It's not likely that the SSD is the reason, as DSM keeps a RAID1 system partition across all disks; if one fails, the other disks still have a valid system.

Usually, if it's RAID 5 and you still have 9 of 10 disks, it looks OK, but in your last screenshot it says raid0 for md3.

If you are still on the same hardware where the disks failed, you might need to stop and make sure it's not a problem with the hardware (RAM, PSU, ...).

It might help to look for the old system partition and see what's in the logs.

It might be better to know what happened - when the reason is unclear - before doing anything that might not be reversible.
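
If you want to look at those logs from the Ubuntu machine, something like this should work (just a sketch; the device names are examples, and everything is mounted read-only so nothing on the disks gets changed):

mdadm --assemble --run /dev/md127 /dev/sd[b-k]1   # DSM system partition: RAID1 of partition 1 on every data disk
mkdir -p /mnt/dsm
mount -o ro /dev/md127 /mnt/dsm
less /mnt/dsm/var/log/messages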


I had the system installation and volume1 on the SSD and volume2 on the 10x 4TB drives. I got a notification that volume1 had crashed, then tried to repair it, but that failed because of bad sectors on the SSD. I shut down the system because I had to go to work, but when I restarted it I got the message that volume2 had crashed. This is on the original hardware; the bad SSD is not in the list, but the system partition is the only one that still works on it. Please help, I have all of my data on that RAID volume. Thanks.

5w.jpeg

4w.jpeg

3w.jpeg

2w.jpeg

1w.jpeg


1 hour ago, costyy said:

I had the system installation and volume1 on the SSD and volume2 on the 10x 4TB drives.

 

The DSM system is on ALL drives.  What was on Volume 1 that makes you think it's the "system" installation?

 

Now that you have restored your disks to DSM, you could start by posting a current mdstat so we have some idea about what the system thinks about the configuration of your arrays and their statuses.
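
For reference, "mdstat" here is just the kernel's software RAID status file; you can read it over SSH/Telnet as root with:

cat /proc/mdstat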


First I did a clean install of XPEnology on the SSD, then I added the drives with the volume on them from a previous machine. I can't connect to the machine over SSH. The first pics in my first post are the Synology drives attached to a Ubuntu machine; I wanted to recover the volume in Ubuntu and reinstall on a new machine with another SSD, because my SSD is full of bad sectors.


To orient you a bit on what is being reported:

 

/dev/md0 is a RAID1 array and is the DSM "system" spanning all your disks.  This array is working but some members have failed.

/dev/md1 is a RAID1 array and is the Linux swap partition spanning all your disks.  This array is working but some members have failed.

/dev/md3 is a RAID5 array and is probably what you called /volume2 - this is currently crashed.

Since you think you have a /volume1, it would make sense that it was /dev/md2 but at the moment the system isn't recognizing it at all.
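
A quick, read-only way to check which arrays the system has assembled at all (neither command changes anything on disk):

ls /dev/md*
mdadm --detail --scan     # one line per assembled array, with its UUID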

 

What to do next:

 

1. Generally it would be easier if you cut and pasted text from TELNET instead of picture screenshots as you gather information.

 

2. Get more information about the possibility of /dev/md2's existence by querying one of the disks for partitions.  Post the results of this command:

fdisk -l /dev/sda

 

3. Get more information about /dev/md3 by querying the array for its members.  Post the results of this command:

mdadm --detail /dev/md3

 

 


I changed the last pic because one drive was missing.

 

 

root@NAS:~# fdisk -l /dev/sda
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EFC71854-746D-45B0-A19B-52AEB1C62FE7

Device       Start        End    Sectors  Size Type
/dev/sda1     2048    4982527    4980480  2.4G Linux RAID
/dev/sda2  4982528    9176831    4194304    2G Linux RAID
/dev/sda3  9437184 7813766815 7804329632  3.6T Linux RAID
root@NAS:~# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 3902163776 (3721.39 GiB 3995.82 GB)
   Raid Devices : 10
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Mon Apr  4 22:17:29 2022
          State : clean, FAILED
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : NAS:3  (local to host NAS)
           UUID : 9858866f:1e986c69:0e23d03d:34731de4
         Events : 231229

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       35        1      active sync   /dev/sdc3
       2       8       51        2      active sync   /dev/sdd3
       3       8       83        3      active sync   /dev/sdf3
       -       0        0        4      removed
       8       8      147        5      active sync   /dev/sdj3
       6       8      179        6      active sync   /dev/sdl3
      10       8       19        7      active sync   /dev/sdb3
       9       8      163        8      active sync   /dev/sdk3
       -       0        0        9      removed
[At this point the same fdisk and mdadm output was accidentally pasted back into the shell, producing a series of "-ash: ... command not found" errors.]
 

Edited by costyy

Dude.

 

Please don't change your historical posts.  I can't tell what has changed now since the original pic is gone.

 

The only way to get data back is to deliberately work through a validation process until something actionable is found.  If things change midstream, the process will either start over (at best) or destroy your data beyond recovery (at worst).

 

You also seem to be typing in commands without understanding what they are for.  I suggest you stop doing that.  Your data is at risk.


I'm trying to noodle out how your system is configured vs. the mdadm reports.

  • You say you have 10x 4TB disks.
  • Your RAID5 array thinks it has 10 members, but it can only find 8.
  • Later you report an SSD.  Can you tell me for certain which device the SSD is?  I assume it is /dev/sde.
  • /dev/sdh and /dev/sdi are missing, but that could be because you have multiple controllers and didn't resolve a "gap" between them, or it could be that those drives aren't reporting any usable partitions.

We need to figure out more information.  Please post results of this command:

ls /dev/sd*

 

We also can see if we can get any corroborating information from the array members about the construction of the /dev/md3 array:

mdadm --examine /dev/sd[abcdfgjkl]3

 

That last command will report a lot of data.  It looked like you accidentally pasted the previous results back into DSM and it tried to execute the results as commands.  Please use care to make sure the copy/paste is accurate.
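
One way to avoid that kind of paste accident is to write the output to a file first and copy it from there, e.g.:

mdadm --examine /dev/sd[abcdfgjkl]3 > /tmp/examine.txt
cat /tmp/examine.txt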

Edited by flyride

These are now the original drives from the machine; the SSD is not available in Storage Manager on the NAS.

 

root@NAS:~# ls /dev/sd*
/dev/sda   /dev/sdb2  /dev/sdd   /dev/sdf2  /dev/sdh   /dev/sdk1  /dev/sdl3
/dev/sda1  /dev/sdb3  /dev/sdd1  /dev/sdf3  /dev/sdj   /dev/sdk2
/dev/sda2  /dev/sdc   /dev/sdd2  /dev/sdg   /dev/sdj1  /dev/sdk3
/dev/sda3  /dev/sdc1  /dev/sdd3  /dev/sdg1  /dev/sdj2  /dev/sdl
/dev/sdb   /dev/sdc2  /dev/sdf   /dev/sdg2  /dev/sdj3  /dev/sdl1
/dev/sdb1  /dev/sdc3  /dev/sdf1  /dev/sdg3  /dev/sdk   /dev/sdl2
 

 

 

 

mdadm --examine /dev/sd[abcdfgjkl]3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : e6bdb94e:43119dd5:eaf58c3b:2fe8a5ff

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : fc0251ab - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : c1e30c0d:9a95561a:236ce739:19445dc3

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : c120ce01 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : c8079b49:ad09810a:d893fb57:04e781e5

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : 2e1230b3 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : db99484d:ee26b85c:a92fa754:34e98ca2

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : 3dad7e09 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : d7e5fc86:dc5e26ff:a7aa9438:57e35830

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : 8b897715 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : ddc65815:dc638fa5:43af17f2:ae8573b3

    Update Time : Fri Apr  1 14:35:29 2022
       Checksum : fcf0987f - correct
         Events : 230661

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 9
   Array State : AAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdj3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : edf7269d:22d055ef:65375e04:d2573885

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : b28bfbaf - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdk3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : e33b11e6:9a94ec8a:a2528d74:eff14c2a

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : ac50b978 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 8
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : a564d2ce:0f719a0d:a9b3fc0c:6a93e9b8

    Update Time : Tue Apr  5 17:06:29 2022
       Checksum : 3ecbc12e - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
 

 


/dev/sde is missing again.  Is there a reason for this or did it go offline?  Also, is that the SSD you are speaking of?

 

/dev/sdh is probably our missing /dev/md2 array member.  It isn't showing any partitions so it may be completely trashed.  But let's confirm this:

mdadm --examine /dev/sdh

 


 

It's possible that /dev/sde is the SSD that is not in the list.

 

 mdadm --examine /dev/sdh
/dev/sdh:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)

Edited by costyy

7 minutes ago, costyy said:

It's possible that /dev/sde is the SSD that is not in the list.

Do you know why /dev/sde was not online, then online, and now offline again?  If there is an unresolved problem, we need it sorted before we start trying to fix arrays.  Please advise on what is happening here.

 

Now, in regard to your lost /dev/md3 array - the bad news is that it looks like the partitions have somehow been deleted on /dev/sdh, which I think is one of the two missing members.  This isn't simple corruption, as a new partition table has been created in its place.  Maybe the /dev/sdh device got put into a Windows machine, which accidentally overwrote the existing partitions it would not have recognized?  Please advise on what may have happened here.
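
If you want to double-check that before we go further, a read-only look at the disk's current partition table will show whether a fresh one has been written (fdisk -l only reads, it changes nothing):

fdisk -l /dev/sdh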

 

Since /dev/sdh is kaput, that means the only possibility of recovering your /dev/md3 is /dev/sdg3.  It seems intact but is out of date (by about 600 events).  We can theoretically force it back into the array and it should work, but a small number of files affected by the missing writes will be corrupted.


The same drives now show the system partition as failed, and:

 

 ls /dev/sd*
/dev/sda   /dev/sdb2  /dev/sdd   /dev/sdf2  /dev/sdh   /dev/sdk1  /dev/sdl3
/dev/sda1  /dev/sdb3  /dev/sdd1  /dev/sdf3  /dev/sdj   /dev/sdk2
/dev/sda2  /dev/sdc   /dev/sdd2  /dev/sdg   /dev/sdj1  /dev/sdk3
/dev/sda3  /dev/sdc1  /dev/sdd3  /dev/sdg1  /dev/sdj2  /dev/sdl
/dev/sdb   /dev/sdc2  /dev/sdf   /dev/sdg2  /dev/sdj3  /dev/sdl1
/dev/sdb1  /dev/sdc3  /dev/sdf1  /dev/sdg3  /dev/sdk   /dev/sdl2
root@NAS:~# mdadm --examine /dev/sd[abcdfgjkl]3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : e6bdb94e:43119dd5:eaf58c3b:2fe8a5ff

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : fc026569 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : c1e30c0d:9a95561a:236ce739:19445dc3

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : c120e1bf - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : c8079b49:ad09810a:d893fb57:04e781e5

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : 2e124471 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : db99484d:ee26b85c:a92fa754:34e98ca2

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : 3dad91c7 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : d7e5fc86:dc5e26ff:a7aa9438:57e35830

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : 8b898ad3 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : ddc65815:dc638fa5:43af17f2:ae8573b3

    Update Time : Fri Apr  1 14:35:29 2022
       Checksum : fcf0987f - correct
         Events : 230661

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 9
   Array State : AAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdj3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : edf7269d:22d055ef:65375e04:d2573885

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : b28c0f6d - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdk3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : e33b11e6:9a94ec8a:a2528d74:eff14c2a

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : ac50cd36 - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 8
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9858866f:1e986c69:0e23d03d:34731de4
           Name : NAS:3  (local to host NAS)
  Creation Time : Sun Aug  9 11:21:40 2020
     Raid Level : raid5
   Raid Devices : 10

 Avail Dev Size : 7804327584 (3721.39 GiB 3995.82 GB)
     Array Size : 35119473984 (33492.54 GiB 35962.34 GB)
  Used Dev Size : 7804327552 (3721.39 GiB 3995.82 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : a564d2ce:0f719a0d:a9b3fc0c:6a93e9b8

    Update Time : Tue Apr  5 18:30:43 2022
       Checksum : 3ecbd4ec - correct
         Events : 231229

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : AAAA.AAAA. ('A' == active, '.' == missing, 'R' == replacing)
 


 cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/md3 /volume2 btrfs  0 0
root@NAS:~# df -v
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/md0         2385528 1624380    642364  72% /
none            16459916       0  16459916   0% /dev
/tmp            16475452     928  16474524   1% /tmp
/run            16475452    4696  16470756   1% /run
/dev/shm        16475452       4  16475448   1% /dev/shm
none                   4       0         4   0% /sys/fs/cgroup
cgmfs                100       0       100   0% /run/cgmanager/fs
 


Interesting, DSM doesn't think your SSD has anything useful on it at all.  Again, we'll ignore it for now.

 

Current status:

 

You have 8 members of a 10-member array that are functioning.  RAID5 needs 9 to work in "critical" (i.e. non-redundant) mode.  You have one more member that is accessible but is slightly out of date. The next step is to try and force that member back into the array in hopes that your data will be accessible.

 

If it works, there will be a slight amount of corruption due to the stale array member.  This may or may not be important.  If that corruption is manifested in file data, btrfs will report it as a checksum error, but will not be able to correct it due to the lack of the 10th drive for redundancy.
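
If the array does come back and the volume mounts, one way to find any files affected by the stale member is a scrub (a suggestion only; adjust the path if your volume is not mounted at /volume2):

btrfs scrub start -B /volume2     # re-reads everything and counts checksum errors
dmesg | grep -i csum              # kernel log flags the affected inodes/offsets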

 

There are several methods of restarting the array with the stale member.  The first one we will try is a non-destructive method - just resetting the stale flag.

mdadm --stop /dev/md3

 

then

mdadm --assemble --force /dev/md3 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sdf3 /dev/sdg3 /dev/sdj3 /dev/sdk3 /dev/sdl3

 

then

cat /proc/mdstat

 

Post the output, including any error messages, from each of these.

 


mdadm --stop /dev/md3
mdadm: stopped /dev/md3
root@NAS:~# mdadm --assemble --force /dev/md3 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sdf3 /dev/sdg3 /dev/sdj3 /dev/sdk3 /dev/sdl3
mdadm: forcing event count in /dev/sdg3(9) from 230661 upto 231229
mdadm: /dev/md3 assembled from 9 drives - not enough to start the array.
root@NAS:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sdf2[5] sdg2[6] sdj2[7] sdk2[8] sdl2[9]
      2097088 blocks [12/9] [UUUU_UUUUU__]

md0 : active raid1 sda1[0] sdb1[5] sdc1[1] sdd1[2] sdf1[4] sdk1[8] sdl1[9]
      2490176 blocks [12/7] [UUU_UU__UU__]

unused devices: <none>
 

