
So I wanted to post this here, as I have spent three days trying to fix my volume.

I am running XPEnology on a JBOD NAS with 11 drives.

DS3615xs, DSM 5.2-5644

So, back a few months ago, a drive went bad and the volume went into degraded mode. I failed to replace the bad drive at the time because the volume still worked. A few days ago I had a power outage and the NAS came back up as crashed. I searched many Google pages on what to do to fix it and nothing worked; the bad drive was not recoverable at all. I am no Linux guru, but I had similar issues before on this NAS with other drives, so I tried to focus on mdadm commands. The problem was that I could not copy any data over from the old drive.

I found a post here https://forum.synology.com/enu/viewtopic.php?f=39&t=102148#p387357 that talked about finding the last known configs of the md RAIDs, and from that I was able to determine that the bad drive was /dev/sdk. After trying fdisk and gparted, and realizing I could not use gdisk since it is not native to XPEnology and my drive was 4 TB with a GPT, I plugged the replacement drive into a USB hard drive bay on a separate Linux machine. Using another working 4 TB drive as a donor, I copied the partition table almost identically with gdisk. Don't try to do it on Windows - I did not find a tool there that could partition it correctly.

After validating the partition numbers, the start/end sizes, and the filesystem type (FD00, Linux RAID), I put the drive back in my NAS. I was then able to run mdadm --manage /dev/md3 --add /dev/sdk6 (and the same for the other arrays), and as soon as the partitions showed up under cat /proc/mdstat I could see the RAIDs rebuilding. I have 22 TB of space and the bad drive was a member of md2, md3 and md5, so it will take a while. I am hoping my volume comes back up after they are done.
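For anyone in the same spot: I did the cloning interactively in gdisk, but the same thing can be scripted with sgdisk (gdisk's non-interactive companion). A rough sketch - sdX/sdY are placeholders for the donor and replacement drives, so adjust to your setup:

sgdisk --backup=/tmp/gpt-table.bin /dev/sdX       # dump the GPT from the healthy donor drive
sgdisk --load-backup=/tmp/gpt-table.bin /dev/sdY  # write that table to the replacement drive
sgdisk -G /dev/sdY                                # randomize disk/partition GUIDs so they don't collide with the donor's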


Just to check - are you running 11 drives as a JBOD or SHR?

My experience is that JBOD volumes might 'survive' a single disk failure - the volume becomes read-only so you can make a copy - but a second disk failure will crash everything, and I don't think recovery is easy. I've had some success copying Synology partitions using Macrium Reflect, but only with identical disks (same manufacturer and size), and only if the source drive could still be read OK without too many sector errors.


Thanks for the reply - I am running SHR on it. Unfortunately I have not had much luck yet. The furthest I got today: after adding all the partitions from the new drive back into each /dev/mdX and letting every rebuild complete, System Health in the top right of the GUI says Healthy, but the volume still says Crashed. When I reboot, the new drive goes back to being an available spare. Looking at mdadm --detail /dev/md0 and /dev/md1, the new drive's partitions are still in those arrays, but sdk5, sdk6 and sdk7 are no longer in md2, md3 and md5. I can add them back in, but as soon as I reboot they fall back out again.
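For reference, these are the re-add commands I run each time, matching the member partitions shown in /proc/mdstat below:

mdadm --manage /dev/md2 --add /dev/sdk5
mdadm --manage /dev/md3 --add /dev/sdk6
mdadm --manage /dev/md5 --add /dev/sdk7

Then, when I run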

 

fsck.ext4 -pvf /dev/md3

I get:

fsck.ext4: Bad magic number in super-block while trying to open /dev/md3
/dev/md3:
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

 

I did some research on the error, found the backup superblock locations, and tried a few different ones with e2fsck -b, but I get the same error.
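One thing I'm now wondering, given the lvdisplay output below: the md arrays here are LVM physical volumes, and the ext4 filesystem actually lives on /dev/vg1000/lv, so fsck.ext4 against /dev/md3 may report a bad magic number no matter what. If that's right, the superblock check should probably target the LV instead, something like the following (assuming the filesystem was created with default parameters; mke2fs -n is a dry run that writes nothing and only prints where the backup superblocks would be):

mke2fs -n /dev/vg1000/lv          # dry run: list the expected backup superblock locations
e2fsck -b 32768 /dev/vg1000/lv    # try a common backup location for 4k-block filesystems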

 

I have yet to try fixing the issue from a live Ubuntu image.
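If I do go that route, my rough plan is this (untested, and assuming the live session has the mdadm and lvm2 packages installed):

sudo mdadm --assemble --scan           # assemble the arrays from the on-disk superblocks
sudo vgchange -ay vg1000               # activate the volume group
sudo fsck.ext4 -n /dev/vg1000/lv       # read-only check first (-n makes no changes)
sudo mount -o ro /dev/vg1000/lv /mnt   # mount read-only to copy data off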

 

Below are my details

LiquidXPe> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md5 : active raid5 sdb7[0] sdi7[4] sdh7[3] sdf7[2] sde7[1]
      4883714560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
md3 : active raid5 sdk6[10] sdb6[0] sde6[6] sdf6[7] sdh6[8] sdi6[9] sdj6[5] sdg6[4] sdd6[2] sdc6[1]
      8790686208 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/9] [UUUUU_UUUU]
      [>....................]  recovery =  0.0% (646784/976742912) finish=150.9min speed=107797K/sec
md2 : active raid5 sdb5[2] sde5[8] sdf5[9] sdh5[10] sdi5[11] sdg5[7] sdc5[4] sdd5[5] sdj5[6] sdl5[3]
      9719312640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [11/10] [UUUUUU_UUUU]
md4 : active raid1 sde8[0] sdf8[1]
      976742912 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[5] sdh2[6] sdi2[7] sdj2[8] sdk2[10] sdl2[9]
      2097088 blocks [11/11] [UUUUUUUUUUU]
md0 : active raid1 sdb1[1] sdc1[9] sdd1[8] sde1[0] sdf1[7] sdg1[3] sdh1[10] sdi1[6] sdj1[5] sdk1[2] sdl1[4]
      2490176 blocks [11/11] [UUUUUUUUUUU]

unused devices: <none>
~ # lvm vgdisplay
  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               22.70 TB
  PE Size               4.00 MB
  Total PE              5949815
  Alloc PE / Size       5949815 / 22.70 TB
  Free  PE / Size       0 / 0
  VG UUID               dJc33I-psOe-q3Nu-Qdt6-lKUr-KGB3-gHOdGz
~ # lvm lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg1000/lv
  VG Name                vg1000
  LV UUID                g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                22.70 TB
  Current LE             5949815
  Segments               8
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:0
LiquidXPe> mdadm --detail /dev/md*
/dev/md0:
        Version : 0.90
  Creation Time : Fri Dec 31 18:00:05 1999
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 11
  Total Devices : 11
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Thu Jul 27 20:49:01 2017
          State : clean
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0
           UUID : 249cd984:e79b4c51:3017a5a8:c86610be
         Events : 0.15252995
    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       17        1      active sync   /dev/sdb1
       2       8      161        2      active sync   /dev/sdk1
       3       8       97        3      active sync   /dev/sdg1
       4       8      177        4      active sync   /dev/sdl1
       5       8      145        5      active sync   /dev/sdj1
       6       8      129        6      active sync   /dev/sdi1
       7       8       81        7      active sync   /dev/sdf1
       8       8       49        8      active sync   /dev/sdd1
       9       8       33        9      active sync   /dev/sdc1
      10       8      113       10      active sync   /dev/sdh1
/dev/md1:
        Version : 0.90
  Creation Time : Wed Jul 26 20:10:31 2017
     Raid Level : raid1
     Array Size : 2097088 (2048.28 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2048.28 MiB 2147.42 MB)
   Raid Devices : 11
  Total Devices : 11
Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Thu Jul 27 18:02:00 2017
          State : clean
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0
           UUID : 7f14e91e:ed74d57f:9b23d1f3:72b7d250 (local to host LiquidXPe)
         Events : 0.40
    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       3       8       66        3      active sync   /dev/sde2
       4       8       82        4      active sync   /dev/sdf2
       5       8       98        5      active sync   /dev/sdg2
       6       8      114        6      active sync   /dev/sdh2
       7       8      130        7      active sync   /dev/sdi2
       8       8      146        8      active sync   /dev/sdj2
       9       8      178        9      active sync   /dev/sdl2
      10       8      162       10      active sync   /dev/sdk2
/dev/md2:
        Version : 1.2
  Creation Time : Thu Dec 17 10:21:31 2015
     Raid Level : raid5
     Array Size : 9719312640 (9269.06 GiB 9952.58 GB)
  Used Dev Size : 971931264 (926.91 GiB 995.26 GB)
   Raid Devices : 11
  Total Devices : 11
    Persistence : Superblock is persistent
    Update Time : Thu Jul 27 18:07:06 2017
          State : clean, degraded
 Active Devices : 10
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           Name : XPenology:2
           UUID : c88d79ed:5575471a:d6d4e7aa:282ecf4c
         Events : 7316850
    Number   Major   Minor   RaidDevice State
       2       8       21        0      active sync   /dev/sdb5
       3       8      181        1      active sync   /dev/sdl5
       6       8      149        2      active sync   /dev/sdj5
       5       8       53        3      active sync   /dev/sdd5
       4       8       37        4      active sync   /dev/sdc5
       7       8      101        5      active sync   /dev/sdg5
      12       8      165        6      spare rebuilding   /dev/sdk5
      11       8      133        7      active sync   /dev/sdi5
      10       8      117        8      active sync   /dev/sdh5
       9       8       85        9      active sync   /dev/sdf5
       8       8       69       10      active sync   /dev/sde5
/dev/md3:
        Version : 1.2
  Creation Time : Thu Aug 11 17:02:44 2016
     Raid Level : raid5
     Array Size : 8790686208 (8383.45 GiB 9001.66 GB)
  Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
   Raid Devices : 10
  Total Devices : 10
    Persistence : Superblock is persistent
    Update Time : Thu Jul 27 20:46:56 2017
          State : clean, degraded, recovering
 Active Devices : 9
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 75% complete
           Name : XPenology:3
           UUID : 3cef14a9:214bd5de:c71c244c:e59eb342
         Events : 2504149
    Number   Major   Minor   RaidDevice State
       0       8       22        0      active sync   /dev/sdb6
       1       8       38        1      active sync   /dev/sdc6
       2       8       54        2      active sync   /dev/sdd6
       4       8      102        3      active sync   /dev/sdg6
       5       8      150        4      active sync   /dev/sdj6
      10       8      166        5      spare rebuilding   /dev/sdk6
       9       8      134        6      active sync   /dev/sdi6
       8       8      118        7      active sync   /dev/sdh6
       7       8       86        8      active sync   /dev/sdf6
       6       8       70        9      active sync   /dev/sde6
/dev/md4:
        Version : 1.2
  Creation Time : Sat Sep 24 22:30:44 2016
     Raid Level : raid1
     Array Size : 976742912 (931.49 GiB 1000.18 GB)
  Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Wed Jul 26 22:32:04 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : LiquidXPe:4  (local to host LiquidXPe)
           UUID : d3a426d3:fafd9c0a:e0393702:79750b47
         Events : 2
    Number   Major   Minor   RaidDevice State
       0       8       72        0      active sync   /dev/sde8
       1       8       88        1      active sync   /dev/sdf8
/dev/md5:
        Version : 1.2
  Creation Time : Sat Sep 24 22:30:45 2016
     Raid Level : raid5
     Array Size : 4883714560 (4657.47 GiB 5000.92 GB)
  Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Thu Jul 27 18:07:23 2017
          State : clean, degraded
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           Name : LiquidXPe:5  (local to host LiquidXPe)
           UUID : cec93e77:b4134947:fea5cfba:eee99979
         Events : 2402456
    Number   Major   Minor   RaidDevice State
       0       8       23        0      active sync   /dev/sdb7
       1       8       71        1      active sync   /dev/sde7
       2       8       87        2      active sync   /dev/sdf7
       3       8      119        3      active sync   /dev/sdh7
       4       8      135        4      active sync   /dev/sdi7
       6       8      167        5      spare rebuilding   /dev/sdk7

 

LiquidXPe> sfdisk -l /dev/sd*
/dev/sdb1                  2048         4982527         4980480  fd
/dev/sdb2               4982528         9176831         4194304  fd
/dev/sdb5               9453280      1953318239      1943864960  fd
/dev/sdb6            1953334336      3906822239      1953487904  fd
/dev/sdb7            3906838336      5860326239      1953487904  fd

[/dev/sdb1] is a partition
[/dev/sdb2] is a partition
[/dev/sdb5] is a partition
[/dev/sdb6] is a partition
[/dev/sdb7] is a partition
/dev/sdc1                  2048         4982527         4980480  fd
/dev/sdc2               4982528         9176831         4194304  fd
/dev/sdc3               9437184      3907015007      3897577824   f
/dev/sdc5               9453280      1953318239      1943864960  fd
/dev/sdc6            1953334336      3906822239      1953487904  fd

[/dev/sdc1] is a partition
[/dev/sdc2] is a partition
[/dev/sdc3] is a partition
[/dev/sdc5] is a partition
[/dev/sdc6] is a partition
/dev/sdd1                  2048         4982527         4980480  fd
/dev/sdd2               4982528         9176831         4194304  fd
/dev/sdd3               9437184      3907015007      3897577824   f
/dev/sdd5               9453280      1953318239      1943864960  fd
/dev/sdd6            1953334336      3906822239      1953487904  fd

[/dev/sdd1] is a partition
[/dev/sdd2] is a partition
[/dev/sdd3] is a partition
[/dev/sdd5] is a partition
[/dev/sdd6] is a partition
/dev/sde1                  2048         4982527         4980480  fd
/dev/sde2               4982528         9176831         4194304  fd
/dev/sde5               9453280      1953318239      1943864960  fd
/dev/sde6            1953334336      3906822239      1953487904  fd
/dev/sde7            3906838336      5860326239      1953487904  fd
/dev/sde8            5860342336      7813830239      1953487904  fd

[/dev/sde1] is a partition
[/dev/sde2] is a partition
[/dev/sde5] is a partition
[/dev/sde6] is a partition
[/dev/sde7] is a partition
[/dev/sde8] is a partition
/dev/sdf1                  2048         4982527         4980480  fd
/dev/sdf2               4982528         9176831         4194304  fd
/dev/sdf5               9453280      1953318239      1943864960  fd
/dev/sdf6            1953334336      3906822239      1953487904  fd
/dev/sdf7            3906838336      5860326239      1953487904  fd
/dev/sdf8            5860342336      7813830239      1953487904  fd

[/dev/sdf1] is a partition
[/dev/sdf2] is a partition
[/dev/sdf5] is a partition
[/dev/sdf6] is a partition
[/dev/sdf7] is a partition
[/dev/sdf8] is a partition
/dev/sdg1                  2048         4982527         4980480  fd
/dev/sdg2               4982528         9176831         4194304  fd
/dev/sdg3               9437184      3907015007      3897577824   f
/dev/sdg5               9453280      1953318239      1943864960  fd
/dev/sdg6            1953334336      3906822239      1953487904  fd

[/dev/sdg1] is a partition
[/dev/sdg2] is a partition
[/dev/sdg3] is a partition
[/dev/sdg5] is a partition
[/dev/sdg6] is a partition
/dev/sdh1                  2048         4982527         4980480  fd
/dev/sdh2               4982528         9176831         4194304  fd
/dev/sdh5               9453280      1953318239      1943864960  fd
/dev/sdh6            1953334336      3906822239      1953487904  fd
/dev/sdh7            3906838336      5860326239      1953487904  fd

[/dev/sdh1] is a partition
[/dev/sdh2] is a partition
[/dev/sdh5] is a partition
[/dev/sdh6] is a partition
[/dev/sdh7] is a partition
/dev/sdi1                  2048         4982527         4980480  fd
/dev/sdi2               4982528         9176831         4194304  fd
/dev/sdi5               9453280      1953318239      1943864960  fd
/dev/sdi6            1953334336      3906822239      1953487904  fd
/dev/sdi7            3906838336      5860326239      1953487904  fd

[/dev/sdi1] is a partition
[/dev/sdi2] is a partition
[/dev/sdi5] is a partition
[/dev/sdi6] is a partition
[/dev/sdi7] is a partition
/dev/sdj1                  2048         4982527         4980480  fd
/dev/sdj2               4982528         9176831         4194304  fd
/dev/sdj3               9437184      3907015007      3897577824   f
/dev/sdj5               9453280      1953318239      1943864960  fd
/dev/sdj6            1953334336      3906822239      1953487904  fd

[/dev/sdj1] is a partition
[/dev/sdj2] is a partition
[/dev/sdj3] is a partition
[/dev/sdj5] is a partition
[/dev/sdj6] is a partition
/dev/sdk1                  2048         4982783         4980736  fd
/dev/sdk2               4982784         9177087         4194304  fd
/dev/sdk5               9451520      1953318239      1943866720  fd
/dev/sdk6            1953333248      3906822239      1953488992  fd
/dev/sdk7            3906836480      5860326239      1953489760  fd
/dev/sdk8            5860341760      7813830239      1953488480  fd

[/dev/sdk1] is a partition
[/dev/sdk2] is a partition
[/dev/sdk5] is a partition
[/dev/sdk6] is a partition
[/dev/sdk7] is a partition
[/dev/sdk8] is a partition
/dev/sdl1                  2048         4982527         4980480  fd
/dev/sdl2               4982528         9176831         4194304  fd
/dev/sdl3               9437184      1953511007      1944073824   f
/dev/sdl5               9453280      1953318239      1943864960  fd

[/dev/sdl1] is a partition
[/dev/sdl2] is a partition
[/dev/sdl3] is a partition
[/dev/sdl5] is a partition
/dev/sdu1                    63           49151           49089   e

[/dev/sdu1] is a partition

 

I partitioned the new drive as /dev/sdk because, looking at the last known good config under /etc/space/, sdk was the only drive missing.

LiquidXPe> cat /etc/space/space_history_20170311_233820.xml
<?xml version="1.0" encoding="UTF-8"?>
<spaces>
        <space path="/dev/vg1000/lv" reference="/volume1" uuid="g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5" device_type="1" container_type="2">
                <device>
                        <lvm path="/dev/vg1000" uuid="dJc33I-psOe-q3Nu-Qdt6-lKUr-KGB3-gHOdGz" designed_pv_counts="4" status="normal" total_size="24955332853760" free_size="0" pe_size="4194304" expansible="0" max_size="24370456320">
                                <raids>
                                        <raid path="/dev/md5" uuid="cec93e77:b4134947:fea5cfba:eee99979" level="raid5" version="1.2">
                                                <disks>
                                                        <disk status="normal" dev_path="/dev/sdb7" model="WD30EZRX-00MMMB0        " serial="WD-WCAWZ1343231" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="0">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sde7" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="1">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdf7" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="2">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdh7" model="ST3000DM001-1E6166      " serial="" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="3">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdi7" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="4">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdk7" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="5">
                                                        </disk>
                                                </disks>
                                        </raid>
                                        <raid path="/dev/md2" uuid="c88d79ed:5575471a:d6d4e7aa:282ecf4c" level="raid5" version="1.2">
                                                <disks>
                                                        <disk status="normal" dev_path="/dev/sdb5" model="WD30EZRX-00MMMB0        " serial="WD-WCAWZ1343231" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="0">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdc5" model="WD2003FYYS-02W0B1       " serial="WD-WMAY04632022" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="4">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdd5" model="WD2003FYYS-02W0B0       " serial="WD-WMAY02585893" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="3">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sde5" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="10">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdf5" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="9">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdg5" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="5">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdh5" model="ST3000DM001-1E6166      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="8">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdi5" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="7">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdj5" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="2">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdk5" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="6">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdl5" model="ST1000DM003-1CH162      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="1">
                                                        </disk>
                                                </disks>
                                        </raid>
                                        <raid path="/dev/md3" uuid="3cef14a9:214bd5de:c71c244c:e59eb342" level="raid5" version="1.2">
                                                <disks>
                                                        <disk status="normal" dev_path="/dev/sdb6" model="WD30EZRX-00MMMB0        " serial="WD-WCAWZ1343231" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="0">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdc6" model="WD2003FYYS-02W0B1       " serial="WD-WMAY04632022" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="1">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdd6" model="WD2003FYYS-02W0B0       " serial="WD-WMAY02585893" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="2">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sde6" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="9">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdf6" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="8">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdg6" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="3">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdh6" model="ST3000DM001-1E6166      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="7">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdi6" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="6">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdj6" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="4">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdk6" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="5">
                                                        </disk>
                                                </disks>
                                        </raid>
                                        <raid path="/dev/md4" uuid="d3a426d3:fafd9c0a:e0393702:79750b47" level="raid1" version="1.2">
                                                <disks>
                                                        <disk status="normal" dev_path="/dev/sde8" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="5860342336" partition_size="1953487904" slot="0">
                                                        </disk>
                                                        <disk status="normal" dev_path="/dev/sdf8" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="5860342336" partition_size="1953487904" slot="1">
                                                        </disk>
                                                </disks>
                                        </raid>
                                </raids>
                        </lvm>
                </device>
                <reference>
                        <volumes>
                                <volume path="/volume1" dev_path="/dev/vg1000/lv" uuid="g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5">
                                </volume>
                        </volumes>
                </reference>
        </space>
</spaces>
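For anyone who just wants the disk list out of that XML without reading the whole thing, something like this pulls it out (assuming your grep build supports -o):

grep -o 'dev_path="[^"]*"' /etc/space/space_history_20170311_233820.xml | sort -u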

 

I know it's a lot of data, but to anyone willing to take a look at my issue - I'd much appreciate it.


Here is some more info. It seems something is up with the superblocks - any idea how to fix it?

 

LiquidXPe> mdadm --assemble --scan --verbose
mdadm: looking for devices for further assembly
mdadm: cannot open device /dev/sdu1: Device or resource busy
mdadm: cannot open device /dev/sdu: Device or resource busy
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: cannot open device /dev/md5: Device or resource busy
mdadm: cannot open device /dev/md3: Device or resource busy
mdadm: cannot open device /dev/md2: Device or resource busy
mdadm: cannot open device /dev/md4: Device or resource busy
mdadm: cannot open device /dev/zram3: Device or resource busy
mdadm: cannot open device /dev/zram2: Device or resource busy
mdadm: cannot open device /dev/zram1: Device or resource busy
mdadm: cannot open device /dev/zram0: Device or resource busy
mdadm: cannot open device /dev/md1: Device or resource busy
mdadm: cannot open device /dev/md0: Device or resource busy
mdadm: cannot open device /dev/sdh7: Device or resource busy
mdadm: cannot open device /dev/sdh6: Device or resource busy
mdadm: cannot open device /dev/sdh5: Device or resource busy
mdadm: cannot open device /dev/sdh2: Device or resource busy
mdadm: cannot open device /dev/sdh1: Device or resource busy
mdadm: cannot open device /dev/sdh: Device or resource busy
mdadm: cannot open device /dev/sdi7: Device or resource busy
mdadm: cannot open device /dev/sdi6: Device or resource busy
mdadm: cannot open device /dev/sdi5: Device or resource busy
mdadm: cannot open device /dev/sdi2: Device or resource busy
mdadm: cannot open device /dev/sdi1: Device or resource busy
mdadm: cannot open device /dev/sdi: Device or resource busy
mdadm: cannot open device /dev/sdl5: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdl3
mdadm: cannot open device /dev/sdl2: Device or resource busy
mdadm: cannot open device /dev/sdl1: Device or resource busy
mdadm: cannot open device /dev/sdl: Device or resource busy
mdadm: cannot open device /dev/sdj6: Device or resource busy
mdadm: cannot open device /dev/sdj5: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdj3
mdadm: cannot open device /dev/sdj2: Device or resource busy
mdadm: cannot open device /dev/sdj1: Device or resource busy
mdadm: cannot open device /dev/sdj: Device or resource busy
mdadm: cannot open device /dev/sdg6: Device or resource busy
mdadm: cannot open device /dev/sdg5: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdg3
mdadm: cannot open device /dev/sdg2: Device or resource busy
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: cannot open device /dev/sdg: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdk8
mdadm: cannot open device /dev/sdk7: Device or resource busy
mdadm: cannot open device /dev/sdk6: Device or resource busy
mdadm: cannot open device /dev/sdk5: Device or resource busy
mdadm: cannot open device /dev/sdk2: Device or resource busy
mdadm: cannot open device /dev/sdk1: Device or resource busy
mdadm: cannot open device /dev/sdk: Device or resource busy
mdadm: cannot open device /dev/sdf8: Device or resource busy
mdadm: cannot open device /dev/sdf7: Device or resource busy
mdadm: cannot open device /dev/sdf6: Device or resource busy
mdadm: cannot open device /dev/sdf5: Device or resource busy
mdadm: cannot open device /dev/sdf2: Device or resource busy
mdadm: cannot open device /dev/sdf1: Device or resource busy
mdadm: cannot open device /dev/sdf: Device or resource busy
mdadm: cannot open device /dev/sde8: Device or resource busy
mdadm: cannot open device /dev/sde7: Device or resource busy
mdadm: cannot open device /dev/sde6: Device or resource busy
mdadm: cannot open device /dev/sde5: Device or resource busy
mdadm: cannot open device /dev/sde2: Device or resource busy
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: cannot open device /dev/sde: Device or resource busy
mdadm: cannot open device /dev/sdd6: Device or resource busy
mdadm: cannot open device /dev/sdd5: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdd3
mdadm: cannot open device /dev/sdd2: Device or resource busy
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: cannot open device /dev/sdd: Device or resource busy
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdc3
mdadm: cannot open device /dev/sdc2: Device or resource busy
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: cannot open device /dev/sdb7: Device or resource busy
mdadm: cannot open device /dev/sdb6: Device or resource busy
mdadm: cannot open device /dev/sdb5: Device or resource busy
mdadm: cannot open device /dev/sdb2: Device or resource busy
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: No arrays found in config file or automatically
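From what I've read, the 'Device or resource busy' lines above are expected - those members already belong to running arrays, so --assemble --scan has nothing new to build. If sdk keeps falling out of md2/md3/md5 on every reboot, one thing I may try next (only after copying off anything I can, since forcing assembly with a wrong member list can make things worse) is stopping an array and force-assembling it with the full member list from /proc/mdstat, e.g. for md3:

mdadm --stop /dev/md3
mdadm --assemble --force /dev/md3 /dev/sd[b-j]6 /dev/sdk6   # all ten md3 members per /proc/mdstat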

 

