
Crashed Raid5 and volume missing



Hello all, and thanks in advance for any help or assistance.  I think I am pretty much screwed, but I figured I would ask before I make things worse.  I have a system with 12 drives: one RAID 6 that corresponds to volume 2, and one RAID 5 that corresponds to volume 1.  I moved my setup a few days ago, and when I plugged it back in, the RAID 5 had lost 2 of its 4 drives.  One drive was completely hosed, not readable in anything else.  The other drive seemed to be empty and no longer part of the array.  I suspect part of the reason drives keep dropping out of the array is that I use 6 onboard SATA connections plus an 8-port LSI SAS card.  This has actually happened a few times before, but in the past only one drive dropped, so with 3 of the 4 drives still working I could just add the dropped drive back in, repair, and I was good until the next outage.  This time, with 2 bad drives, the array was simply crashed.  Either I could not, or did not know how to, add the still-working drive back into the array properly so it would go from crashed to degraded, letting me replace the bad drive and rebuild.
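For reference, the sequence I used on those earlier single-drive dropouts was along these lines (the device and array names below are examples, not a transcript of exactly what I typed):

:~# mdadm --examine /dev/sdc3                  # confirm the partition still carries the array's superblock
:~# mdadm --manage /dev/md2 --add /dev/sdc3    # re-add the dropped member and let it rebuild
:~# cat /proc/mdstat                           # watch the rebuild progress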

 

Honestly, I think my first mistake was moving drives around to figure out whether it was a bad drive, a bad cable, or a bad SAS card.  While moving drives around, I figured I would just put all the RAID 5 drives on the internal SATA connections and all the RAID 6 drives on the LSI SAS card.  The RAID 6 also had 2 drives remove themselves from the array, but I was able to put those 2 drives back in and repair, and volume 2 is good with no data loss.  I tried a lot of commands (I apologize, but I do not remember them all) to get the RAID 5 back.  In the end I just replaced the dead drive, so at that point I had the 2 original good RAID 5 drives and 2 other drives that did not show in the RAID 5.
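In hindsight, array membership can be checked per disk before shuffling ports, since the md superblock travels with the partition, not the controller.  Something like this (device name is just an example):

:~# mdadm --examine /dev/sda3 | grep -E 'Array UUID|Device Role|Events'   # which array this partition belongs to, and how stale it is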

 

I ended up running:

mdadm --create /dev/md2 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sda3 missing /dev/sdc3 /dev/sdd3

This put the array back into a degraded state, which allowed me to repair using the newly replaced drive.  The repair completed, but volume1, which previously showed up under Volumes as crashed, is now missing under Volumes entirely.  I have tried to follow a few guides to check things out.  All of the lv*/vg* commands show nothing at all.  The closest I am able to get to anything is running:

:~# vgcfgrestore vg1000
  Couldn't find device with uuid h448fL-VaTW-5n9w-W7FY-Gb4O-50Jb-l0ADjn.
  Couldn't find device with uuid Ppyi69-5Osn-gJtL-MTxB-aGAd-cLYJ-7hy199.
  Couldn't find device with uuid 8NeE7P-Bmf5-ErdT-zZKB-jMJ3-LspS-9C3uLg.
  Cannot restore Volume Group vg1000 with 3 PVs marked as missing.
  Restore failed.
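For what it's worth, the textbook recovery for missing PV UUIDs is to recreate each PV label with its old UUID from the backup file and then restore the VG metadata.  I did not go further down this path because (as shown below) my backup is stale and no longer matches the current layout, but roughly it would look like:

:~# pvcreate --uuid h448fL-VaTW-5n9w-W7FY-Gb4O-50Jb-l0ADjn --restorefile /etc/lvm/backup/vg1000 /dev/md2   # rewrite the PV label; only safe if the backup truly matches the disk
:~# vgcfgrestore -f /etc/lvm/backup/vg1000 vg1000   # restore the VG metadata from the backup file
:~# vgchange -ay vg1000                             # activate the volume group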

 

:~# e2fsck -pvf /dev/md2
e2fsck: Bad magic number in super-block while trying to open /dev/md2
/dev/md2:
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
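If volume1 had been ext4 directly on md2 (with no LVM in between), the usual next step would be to try a backup superblock.  mke2fs with -n is a dry run: it writes nothing and only prints where the backup superblocks would live for a given block size, so something like:

:~# mke2fs -n /dev/md2             # dry run, lists backup superblock locations without touching the device
:~# e2fsck -b 32768 /dev/md2       # 32768 is just the typical first backup location for 4k-block filesystems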

 

:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid6 sdd3[6] sdl3[5] sdk3[10] sdh3[7] sdj3[9] sdg3[8]
      7794770176 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md2 : active raid5 sdf3[4] sdb3[3] sde3[2] sda3[1]
      11706589632 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[11] sdd2[2] sde2[3] sdf2[9] sdg2[4] sdh2[5] sdi2[10] sdj2[6] sdk2[7] sdl2[8]
      2097088 blocks [12/12] [UUUUUUUUUUUU]

md0 : active raid1 sda1[1] sdb1[5] sdc1[11] sdd1[3] sde1[4] sdf1[6] sdg1[9] sdh1[7] sdi1[10] sdj1[8] sdk1[2] sdl1[0]
      2490176 blocks [12/12] [UUUUUUUUUUUU]

 

:~# parted -l
Model: WDC WD40EZRX-00SPEB0 (scsi)
Disk /dev/hda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  4001GB  3996GB                        raid


Model: WDC WD40EZRX-00SPEB0 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  4001GB  3996GB                        raid


Model: WDC WD40EZRZ-00GXCB0 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  4001GB  3996GB                        raid


Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  3000GB  2996GB                        raid


Model: ATA ST2000DM001-1CH1 (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2551MB  2550MB  primary               raid
 2      2551MB  4699MB  2147MB  primary               raid
 3      4832MB  2000GB  1995GB  primary               raid


Model: ATA ST4000DM005-2DP1 (scsi)
Disk /dev/sde: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  4001GB  3996GB                        raid


Model: WDC WD40EZRZ-00GXCB0 (scsi)
Disk /dev/sdf: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  4001GB  3996GB                        raid


Model: Linux Software RAID Array (md)
Disk /dev/md0: 2550MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  2550MB  2550MB  ext4


Model: Linux Software RAID Array (md)
Disk /dev/md1: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2147MB  2147MB  linux-swap(v1)


Error: /dev/md2: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md2: 12.0TB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Linux Software RAID Array (md)
Disk /dev/md3: 7982GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  7982GB  7982GB  ext4


Model: WDC WD2003FYYS-02W0B (scsi)
Disk /dev/sdg: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2551MB  2550MB  primary               raid
 2      2551MB  4699MB  2147MB  primary               raid
 3      4832MB  2000GB  1995GB  primary               raid


Model: WDC WD2003FYYS-02W0B (scsi)
Disk /dev/sdh: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2551MB  2550MB  primary               raid
 2      2551MB  4699MB  2147MB  primary               raid
 3      4832MB  2000GB  1995GB  primary               raid


Model: ATA ST3000DM001-1E61 (scsi)
Disk /dev/sdi: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  3001GB  2996GB                        raid


Model: ATA ST2000DM001-1CH1 (scsi)
Disk /dev/sdj: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2551MB  2550MB  primary               raid
 2      2551MB  4699MB  2147MB  primary               raid
 3      4832MB  2000GB  1995GB  primary               raid


Model: WDC WD30EZRX-00MMMB0 (scsi)
Disk /dev/sdk: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 3      4832MB  3001GB  2996GB                        raid


Model: WDC WD2003FYYS-02W0B (scsi)
Disk /dev/sdl: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2551MB  2550MB  primary               raid
 2      2551MB  4699MB  2147MB  primary               raid
 3      4832MB  2000GB  1995GB  primary               raid


Model: Unknown (unknown)
Disk /dev/zram0: 2499MB
Sector size (logical/physical): 4096B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2499MB  2499MB  linux-swap(v1)


Model: Unknown (unknown)
Disk /dev/zram1: 2499MB
Sector size (logical/physical): 4096B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2499MB  2499MB  linux-swap(v1)


Model: Unknown (unknown)
Disk /dev/zram2: 2499MB
Sector size (logical/physical): 4096B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2499MB  2499MB  linux-swap(v1)


Model: Unknown (unknown)
Disk /dev/zram3: 2499MB
Sector size (logical/physical): 4096B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2499MB  2499MB  linux-swap(v1)


Model: SanDisk Cruzer Fit (scsi)
Disk /dev/synoboot: 8003MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name    Flags
 1      1049kB  16.8MB  15.7MB  fat16        boot    boot, esp
 2      16.8MB  48.2MB  31.5MB  fat16        image
 3      48.2MB  52.4MB  4177kB               legacy  bios_grub
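The telling part of the parted output is the md2 entry: md3 still shows its ext4 filesystem, but md2 has no recognizable disk label at all.  A couple of read-only probes confirm whether any signature survives (neither command writes anything):

:~# blkid /dev/md2                                                # no output means no known filesystem or PV signature
:~# dd if=/dev/md2 bs=512 skip=1 count=1 2>/dev/null | strings    # an intact LVM PV would show LABELONE in sector 1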

 

:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Fri Nov 29 14:05:37 2019
     Raid Level : raid5
     Array Size : 11706589632 (11164.27 GiB 11987.55 GB)
  Used Dev Size : 3902196544 (3721.42 GiB 3995.85 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec  2 23:52:06 2019
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : LiquidXPe:2  (local to host LiquidXPe)
           UUID : 2e3bde16:7a255483:e4de0929:70dc3562
         Events : 137

    Number   Major   Minor   RaidDevice State
       4       8       83        0      active sync   /dev/sdf3
       1       8        3        1      active sync   /dev/sda3
       2       8       67        2      active sync   /dev/sde3
       3       8       19        3      active sync   /dev/sdb3
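Note the Creation Time (Nov 29, 2019) and the low event count: both reflect the mdadm --create above, which wrote brand-new superblocks over the old ones.  For anyone hitting this thread later: the gentler first move when members drop out is a forced assembly, which reuses the existing superblocks instead of overwriting them, roughly:

:~# mdadm --stop /dev/md2
:~# mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdc3 /dev/sdd3   # reuse existing superblocks, accepting slightly stale members

A re-create can also silently pick a different device order or data offset than the original array, which scrambles everything above the md layer even when mdstat reports the array as clean.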

 

:~# cat /etc/lvm/backup/vg1000
# Generated by LVM2 version 2.02.38 (2008-06-11): Sun Sep 25 16:25:42 2016

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing '/sbin/lvextend --alloc inherit /dev/vg1000/lv -l100%VG'"

creation_host = "LiquidXPe"     # Linux LiquidXPe 3.10.35 #1 SMP Sat Dec 12 17:01:14 MSK 2015 x86_64
creation_time = 1474838742      # Sun Sep 25 16:25:42 2016

vg1000 {
        id = "dJc33I-psOe-q3Nu-Qdt6-lKUr-KGB3-gHOdGz"
        seqno = 19
        status = ["RESIZEABLE", "READ", "WRITE"]
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "h448fL-VaTW-5n9w-W7FY-Gb4O-50Jb-l0ADjn"
                        device = "/dev/md2"     # Hint only

                        status = ["ALLOCATABLE"]
                        dev_size = 19438624128  # 9.05181 Terabytes
                        pe_start = 1152
                        pe_count = 2372878      # 9.05181 Terabytes
                }

                pv1 {
                        id = "Ppyi69-5Osn-gJtL-MTxB-aGAd-cLYJ-7hy199"
                        device = "/dev/md3"     # Hint only

                        status = ["ALLOCATABLE"]
                        dev_size = 17581371264  # 8.18696 Terabytes
                        pe_start = 1152
                        pe_count = 2146163      # 8.18696 Terabytes
                }

                pv2 {
                        id = "8NeE7P-Bmf5-ErdT-zZKB-jMJ3-LspS-9C3uLg"
                        device = "/dev/md4"     # Hint only

                        status = ["ALLOCATABLE"]
                        dev_size = 1953484672   # 931.494 Gigabytes
                        pe_start = 1152
                        pe_count = 238462       # 931.492 Gigabytes
                }

                pv3 {
                        id = "RM205l-f2bw-BBbm-OYyg-sKK8-VHRv-4Mv9OX"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        dev_size = 9767427968   # 4.54831 Terabytes
                        pe_start = 1152
                        pe_count = 1192312      # 4.54831 Terabytes
                }
        }

        logical_volumes {

                lv {
                        id = "g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5"
                        status = ["READ", "WRITE", "VISIBLE"]
                        segment_count = 8

                        segment1 {
                                start_extent = 0
                                extent_count = 237287   # 926.902 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                        segment2 {
                                start_extent = 237287
                                extent_count = 715387   # 2.72898 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 952674
                                extent_count = 949152   # 3.62073 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 237287
                                ]
                        }
                        segment4 {
                                start_extent = 1901826
                                extent_count = 238463   # 931.496 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 715387
                                ]
                        }
                        segment5 {
                                start_extent = 2140289
                                extent_count = 238462   # 931.492 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                        segment6 {
                                start_extent = 2378751
                                extent_count = 1192312  # 4.54831 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv3", 0
                                ]
                        }
                        segment7 {
                                start_extent = 3571063
                                extent_count = 1192313  # 4.54831 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 953850
                                ]
                        }
                        segment8 {
                                start_extent = 4763376
                                extent_count = 1186439  # 4.52591 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 1186439
                                ]
                        }
                }
        }
}

The LVM backup data all seems to be old, from 2016, and I have rebuilt both volumes since then.  I used to run SHR but moved to a plain RAID setup, so the vg1000 layout above (spanning md2 through md5) no longer matches reality.
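For completeness, one thing that could still be checked: LVM stores its metadata as plain text near the start of each PV, so if a newer copy of the vg1000 metadata survived on disk, a scan of the front of md2 might surface it.  This is read-only, and only stands a chance if the re-created array starts at the same data offset as the old one:

:~# dd if=/dev/md2 bs=1M count=1 2>/dev/null | strings | grep -A8 'vg1000'   # hunt for on-disk LVM metadata text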

 

 

Again, any help would be greatly appreciated.


9 hours ago, bughatti said:

I have a [RAID 5] that corresponds to volume 1.  I moved my setup a few days ago, and when I plugged it back in, the RAID 5 had lost 2 of its 4 drives.  One drive was completely hosed, not readable in anything else.

 

[snip]

 

I tried a lot of commands (I apologize, but I do not remember them all) to get the RAID 5 back.  In the end I just replaced the dead drive, so at that point I had the 2 original good RAID 5 drives and 2 other drives that did not show in the RAID 5.

 

I ended up running mdadm --create /dev/md2 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sda3 missing /dev/sdc3 /dev/sdd3

This put the array back into a degraded state, which allowed me to repair using the newly replaced drive.  The repair completed, but volume1, which previously showed up under Volumes as crashed, is now missing under Volumes entirely.

 

Sorry for the event, and sorry to bring you bad news.  As you know, RAID 5 stripes parity across the array such that all members less one must be present for data integrity.  Your data may have been recoverable at one time, but once the repair operation was initiated with only 2 valid drives, the data on all four drives was irreparably lost.  I've highlighted the critical items above.
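To make that concrete with a toy example: RAID 5 parity is an XOR across the data blocks, so any one missing block can be recomputed, but two missing blocks leave one equation with two unknowns.  A quick shell illustration:

:~# A=$((0x3)); B=$((0x5)); P=$((A ^ B))   # parity block P = A XOR B = 0x6
:~# printf '%x\n' $((P ^ B))               # one block lost: P XOR B recovers A, prints 3

With both A and B gone, P alone cannot determine either of them, which is why a 4-drive RAID 5 with 2 missing members is unrecoverable.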

Edited by flyride
