XPEnology Community

RAID5 - 3-Disk Volume Crashed, DSM7



Three 4 TB disks in RAID 5, ext4 (most probably). The volume crashed: first, Disk 2 was kicked out of the array. I then tried to repair the volume, but the system froze mid-repair. After a forced reboot, it now shows only two drives (1 & 3) and no volume. See the attached screenshot.

 

System stats follow. I'm afraid I can't see the RAID5 md* device in this :(

 

cat /proc/mdstat

:/$  cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid1 sdd3[0]
      971940544 blocks super 1.2 [1/1] [U]

md1 : active raid1 sda2[0] sdd2[3] sdc2[2]
      2097088 blocks [12/3] [U_UU________]

md0 : active raid1 sda1[0] sdd1[3] sdc1[1]
      2490176 blocks [12/3] [UU_U________]

unused devices: <none>
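Reading the mdstat above: only the DSM system mirrors (md0/md1) and a separate single-disk mirror (md3) assembled; the RAID5 data array (normally md2) is missing entirely. A small sketch that parses the pasted output to list which arrays came up and at what level (in practice, feed it the live file, e.g. `awk ... /proc/mdstat`):

```shell
# Print each assembled array's name and RAID level from an mdstat dump.
# Note there is no raid5 line: the data array never assembled.
awk '/^md/ {print $1, $4}' <<'EOF'
md3 : active raid1 sdd3[0]
md1 : active raid1 sda2[0] sdd2[3] sdc2[2]
md0 : active raid1 sda1[0] sdd1[3] sdc1[1]
EOF
```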

 

ls /dev/sd*  /dev/md*  /dev/vg*

:/$ ls /dev/sd*
/dev/sda   /dev/sda2  /dev/sdb   /dev/sdb2  /dev/sdc   /dev/sdc2  /dev/sdd   /dev/sdd2
/dev/sda1  /dev/sda3  /dev/sdb1  /dev/sdb3  /dev/sdc1  /dev/sdc3  /dev/sdd1  /dev/sdd3

 ls /dev/md*
/dev/md0  /dev/md1  /dev/md3

 ls /dev/vg*
/dev/vga_arbiter
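A note in passing: /dev/vga_arbiter matches the `vg*` glob but is the kernel's VGA arbitration device, not an LVM volume group. If LVM were in use, its devices would appear under /dev/mapper and /dev/&lt;vgname&gt;; a quick sketch to check:

```shell
# /dev/vga_arbiter is the VGA arbitration device, not a volume group.
# If LVM were active, mappings would show up here instead:
ls /dev/mapper              # device-mapper nodes, e.g. cachedev_0
sudo vgdisplay              # lists volume groups, if any exist
```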

 

fdisk -l

Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WD40EFRX-68N32N0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4474BB43-A9CA-4DFD-88E8-0BA2B74DAEFA

Device       Start        End    Sectors  Size Type
/dev/sda1     2048    4982527    4980480  2.4G Linux RAID
/dev/sda2  4982528    9176831    4194304    2G Linux RAID
/dev/sda3  9437184 7813832351 7804395168  3.6T Linux RAID


Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WD40EFZX-68AWUN0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F4E61428-D392-4531-BD1D-5627E5B74CE9

Device       Start        End    Sectors  Size Type
/dev/sdb1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304    2G Linux RAID
/dev/sdb3  9437184 7813832351 7804395168  3.6T Linux RAID

Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WD40EFRX-68N32N0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F89224BD-A6CE-4E43-B931-1B27A81851D5

Device       Start        End    Sectors  Size Type
/dev/sdc1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2  4982528    9176831    4194304    2G Linux RAID
/dev/sdc3  9437184 7813832351 7804395168  3.6T Linux RAID

 

 mdadm -E /dev/sd[abc]3

:/$ sudo mdadm -E /dev/sd[abc]3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 11ce188f:371222c5:8a4761cc:b7dadd5e
           Name : *******:2  (local to host *******)
  Creation Time : Sun Oct 10 20:47:44 2021
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7804393120 (3721.42 GiB 3995.85 GB)
     Array Size : 7804393088 (7442.85 GiB 7991.70 GB)
  Used Dev Size : 7804393088 (3721.42 GiB 3995.85 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : active
    Device UUID : b2d1d339:fe245365:d3cbdc43:500b2530

    Update Time : Tue Nov 15 23:54:18 2022
       Checksum : 32868cc9 - correct
         Events : 435242

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 11ce188f:371222c5:8a4761cc:b7dadd5e
           Name : ********:2  (local to host *******)
  Creation Time : Sun Oct 10 20:47:44 2021
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7804393120 (3721.42 GiB 3995.85 GB)
     Array Size : 7804393088 (7442.85 GiB 7991.70 GB)
  Used Dev Size : 7804393088 (3721.42 GiB 3995.85 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
Recovery Offset : 57125640 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : active
    Device UUID : d361e948:90e86d91:74fc12b8:990fe2b9

    Update Time : Tue Nov 15 20:57:46 2022
       Checksum : 6f148faf - correct
         Events : 433354

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 11ce188f:371222c5:8a4761cc:b7dadd5e
           Name : *******:2  (local to host *******)
  Creation Time : Sun Oct 10 20:47:44 2021
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7804393120 (3721.42 GiB 3995.85 GB)
     Array Size : 7804393088 (7442.85 GiB 7991.70 GB)
  Used Dev Size : 7804393088 (3721.42 GiB 3995.85 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 1e7abd48:b47b32d8:bddf08b0:a7f7b166

    Update Time : Wed Nov 16 11:39:24 2022
       Checksum : 570a3827 - correct
         Events : 436965

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : ..A ('A' == active, '.' == missing, 'R' == replacing)
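The key detail in the dumps above is the diverging Events counters: sda3 at 435242, sdb3 at 433354 (stale, and mid-rebuild per its Recovery Offset), and sdc3 at 436965. md refuses to auto-assemble members whose event counters disagree, which is why md2 never came back up. A sketch that pulls the counters out of the --examine output (here fed the pasted values; in practice, pipe `mdadm -E /dev/sd[abc]3` into it):

```shell
# Extract the per-member Events counter from mdadm -E output.
awk '/^\/dev\// {dev=$1} $1 == "Events" {print dev, $3}' <<'EOF'
/dev/sda3:
          Events : 435242
/dev/sdb3:
          Events : 433354
/dev/sdc3:
          Events : 436965
EOF
```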

 

 

 

Screenshot 2022-11-16 115842.jpg


Other stats:

 

cat /etc/fstab

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/mapper/cachedev_0 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime,nodev 0 0

 

lvdisplay -v and vgdisplay -v

   Using logical volume(s) on command line.
    No volume groups found.

 


 

Update:

 

I did a forced assemble using mdadm -Af:

ash-4.4# mdadm -Af /dev/md2 /dev/sd[abc]3
mdadm: forcing event count in /dev/sda3(0) from 435242 upto 436965
mdadm: Marking array /dev/md2 as 'clean'
mdadm: /dev/md2 has been started with 2 drives (out of 3).

 

And restarted the NAS. Now the RAID is loaded in read-only mode, with Disk 2 shown as Crashed (screenshots attached). At least a backup can happen.
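To get back to a healthy 3-disk array, the dropped member has to be re-added so md can rebuild onto it. A sketch, assuming /dev/sdb itself is still healthy (verify SMART first; if the disk is failing, replace it and add the new disk's partition instead). On DSM this is normally done through Storage Manager's Repair action, which should reappear once the volume is back read-write:

```shell
# 1. Check the dropped disk's health before trusting it again.
smartctl -a /dev/sdb | grep -iE 'reallocated|pending|uncorrect'

# 2. Wipe the stale superblock (it still records the aborted rebuild),
#    then re-add the partition so md starts a fresh resync.
mdadm --zero-superblock /dev/sdb3
mdadm --manage /dev/md2 --add /dev/sdb3

# 3. Watch the rebuild progress.
cat /proc/mdstat
```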

 

I still get no output from lvdisplay or vgdisplay.
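The empty lvdisplay/vgdisplay output isn't necessarily wrong: on DSM 7 a plain (non-SHR) volume may sit on a device-mapper "cachedev" target rather than on LVM, so there is no volume group to find. The mapping behind /dev/mapper/cachedev_* can be inspected with dmsetup directly (a sketch):

```shell
# List device-mapper devices and their mapping tables; cachedev_0/1
# should map back onto the md arrays.
dmsetup ls
dmsetup table
```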

 

How do I fully restore the RAID back to its original state?

 

ash-4.4# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda3[4] sdc3[2]
      7804393088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md3 : active raid1 sdd3[0]
      971940544 blocks super 1.2 [1/1] [U]

md1 : active raid1 sda2[0] sdd2[3] sdc2[2]
      2097088 blocks [12/3] [U_UU________]

md0 : active raid1 sda1[0] sdd1[3] sdc1[1]
      2490176 blocks [12/3] [UU_U________]

unused devices: <none>
ash-4.4# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/mapper/cachedev_0 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime,nodev 0 0
/dev/mapper/cachedev_1 /volume1 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev,ro,recovery 0 0

 

ash-4.4# mdadm --examine /dev/md2
mdadm: No md superblock detected on /dev/md2.
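That error is expected: mdadm -E (--examine) reads a *member's* superblock, and an assembled array device has none of its own. For the array itself, use --detail instead (a sketch):

```shell
# --detail queries the assembled array; --examine queries its members.
mdadm --detail /dev/md2        # state, degraded/clean, which slots are missing
mdadm --examine /dev/sd[abc]3  # per-member metadata
```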

 

Screenshot 2022-11-17 135759.jpg

Screenshot 2022-11-17 140502.jpg

