XPEnology Community

Failed migration to DSM7 - lost SHR


Chrunch

Recommended Posts

Initially, when I got my DSM 7 installation started on ESXi, Synology only found 3 of the 5 disks in my SHR volume. That of course caused some panic.

My disks are all passed through a SATA controller.

 

I managed to fix DSM not seeing the remaining disks via SataPortMap, so the mapping is now identical to my setup from DSM 6.2; however, I'm unable to repair the volume.
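(For anyone hitting the same thing: the mapping comes from the SataPortMap/DiskIdxMap arguments the loader passes to the kernel; depending on the loader they live in grub.cfg or in the loader's cmdline/user config. A rough sketch with made-up values only, yours will differ per controller:)

# Each digit of SataPortMap is the port count of one SATA controller, in order.
# Each hex pair of DiskIdxMap is the starting drive slot for that controller.
# Hypothetical example: a 1-port virtual SATA controller pushed to slot 13 (0x0C),
# and an 8-port passthrough controller starting at slot 1 (0x00).
DiskIdxMap=0C00 SataPortMap=18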

 

My SHR consists of 5 disks, 4x 8TB, and 1x 3TB.

It now seems to refuse the repair unless I replace my 3 TB disk with another 8 TB one?

 

Error now:

[screenshot: UFVqKja.png]

 

Error when trying to repair:

[screenshot: igzPGMg.png]

 

How the disks looked under DSM 6.2:

[screenshot: Zq2pJpc.png]

 

Hoping for some suggestions


I'm contemplating creating a new VM with DSM 7, a new .vmdk file, etc., and letting DSM try to "migrate" the installation?

Not sure if that would make it worse, however. As I understand it, "system partition failed" means DSM isn't installed across all disks, which makes sense, as not all the disks were visible to DSM when DSM 7 was installed, due to my mistake.


Repairing the system partition should not require another disk. It almost looks as if you somehow added another disk to your array, but that is not something that happens as part of the upgrade.

 

It's always better to post an mdstat; you might want to search for other data recovery threads so you can see some of the commands involved.

 

cat /proc/mdstat from the command line is the place to start.
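For anyone reading along, the quickest health check in that output is the bracketed status at the end of each md line; this is standard mdraid notation, not anything DSM-specific:

# [expected/active] followed by one letter per member slot: U = present, _ = missing
#   [5/5] [UUUUU]   healthy, all members present
#   [4/3] [UUU_]    degraded, the fourth member is missing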


Hi,

 

All the data is visible and accessible; at least I couldn't find anything missing :)

Just the "system partition failed" error and warnings from DSM.
 

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid1 sdg3[0]
      1948692544 blocks super 1.2 [1/1] [U]
md2 : active raid5 sdd5[7] sdc5[5] sdb5[6] sdf5[2] sde5[8]
      11701741824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md3 : active raid5 sdb6[5] sdd6[3] sdc6[1]
      14651252736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
md1 : active raid1 sdg2[3] sdb2[0] sdd2[2] sdc2[1]
      2097088 blocks [16/4] [UUUU____________]
md0 : active raid1 sdg1[2] sdb1[0] sdc1[4] sdd1[1]
      2490176 blocks [12/4] [UUU_U_______]
unused devices: <none>

This is out of my league by now, so I'm glad for your help :)

 


 

sudo mdadm -D /dev/md3
Password:
/dev/md3:
        Version : 1.2
  Creation Time : Mon Aug 12 05:25:30 2019
     Raid Level : raid5
     Array Size : 14651252736 (13972.52 GiB 15002.88 GB)
  Used Dev Size : 4883750912 (4657.51 GiB 5000.96 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Mon Aug  8 22:58:54 2022
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : Xpenology:3  (local to host Xpenology)
           UUID : 934a52c6:b8d5e8f1:5653d81b:340aed0c
         Events : 219081
    Number   Major   Minor   RaidDevice State
       5       8       22        0      active sync   /dev/sdb6
       1       8       38        1      active sync   /dev/sdc6
       3       8       54        2      active sync   /dev/sdd6
       -       0        0        3      removed

Not sure what changed since last time, besides a reboot. Files are still accessible.

[screenshot: Z4oOfvX.png]

 

sudo mdadm -D /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Mon Aug 12 05:25:30 2019
     Raid Level : raid5
     Array Size : 14651252736 (13972.52 GiB 15002.88 GB)
  Used Dev Size : 4883750912 (4657.51 GiB 5000.96 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Mon Aug  8 23:53:10 2022
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : Xpenology:3  (local to host Xpenology)
           UUID : 934a52c6:b8d5e8f1:5653d81b:340aed0c
         Events : 219089
    Number   Major   Minor   RaidDevice State
       5       8       22        0      active sync   /dev/sdb6
       1       8       38        1      active sync   /dev/sdc6
       3       8       54        2      active sync   /dev/sdd6
       -       0        0        3      removed

 

sudo fdisk -l /dev/sdb
Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9E87F26C-B630-11E9-8F59-0CC47AC3A20D
Device          Start         End    Sectors  Size Type
/dev/sdb1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdb2     4982528     9176831    4194304    2G Linux RAID
/dev/sdb5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdb6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sdc
Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B9B51C1C-CA99-43CC-8DC1-80EA2EDF294A
Device          Start         End    Sectors  Size Type
/dev/sdc1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdc2     4982528     9176831    4194304    2G Linux RAID
/dev/sdc5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdc6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sdd
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9F406465-1A66-436C-BB6E-8A558D642FC9
Device          Start         End    Sectors  Size Type
/dev/sdd1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdd2     4982528     9176831    4194304    2G Linux RAID
/dev/sdd5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdd6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sde
Disk /dev/sde: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 70959473-593B-4676-BE10-42738C834B2C
Device          Start         End    Sectors  Size Type
/dev/sde1        2048     4982527    4980480  2.4G Linux RAID
/dev/sde2     4982528     9176831    4194304    2G Linux RAID
/dev/sde5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sde6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sdf
Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WD30EZRX-00DC0B0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9E87F269-B630-11E9-8F59-0CC47AC3A20D
Device       Start        End    Sectors  Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf5  9453280 5860326239 5850872960  2.7T Linux RAID

Well we haven't done anything yet.

 

The first thing to do is to fix the broken /dev/md3:

 

sudo mdadm --manage /dev/md3 -a /dev/sde6

 

You can monitor its progress from Storage Manager or by repeatedly running cat /proc/mdstat.

 

Post the final

 

cat /proc/mdstat

 

when it is finished. At that point you probably will be able to use the link to fix the System Partition, but we should review the state first.
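If you would rather watch it from the shell than from Storage Manager, something along these lines works; this is the generic md sysfs interface, nothing DSM-specific:

watch -n 10 cat /proc/mdstat            # refresh the overview every 10 seconds (skip if watch isn't available)
cat /sys/block/md3/md/sync_action       # reads "recover" while rebuilding, "idle" when done
cat /sys/block/md3/md/sync_completed    # sectors rebuilt / total sectors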


Hi Flyride,

 

It took some time, so I left it overnight to complete.

It seems the volume got repaired and is now in "warning" status.

 

Should I just try the repair link from the overview page, or should I do something else first?

 

Overview status:

[screenshot: Qzw7FUA.png]

 

Volume status:

[screenshot: B63mSG7.png]

 

 

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdd5[7] sdc5[5] sdb5[6] sdf5[2] sde5[8]
      11701741824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md3 : active raid5 sde6[4] sdb6[5] sdd6[3] sdc6[1]
      14651252736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md4 : active raid1 sdg3[0]
      1948692544 blocks super 1.2 [1/1] [U]
md1 : active raid1 sdb2[0]
      2097088 blocks [16/1] [U_______________]
md0 : active raid1 sdb1[0]
      2490176 blocks [12/1] [U___________]
unused devices: <none>

 

sudo fdisk -l /dev/sdb
Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9E87F26C-B630-11E9-8F59-0CC47AC3A20D
Device          Start         End    Sectors  Size Type
/dev/sdb1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdb2     4982528     9176831    4194304    2G Linux RAID
/dev/sdb5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdb6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sdc
Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B9B51C1C-CA99-43CC-8DC1-80EA2EDF294A
Device          Start         End    Sectors  Size Type
/dev/sdc1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdc2     4982528     9176831    4194304    2G Linux RAID
/dev/sdc5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdc6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sdd
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9F406465-1A66-436C-BB6E-8A558D642FC9
Device          Start         End    Sectors  Size Type
/dev/sdd1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdd2     4982528     9176831    4194304    2G Linux RAID
/dev/sdd5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdd6  5860342336 15627846239 9767503904  4.6T Linux RAID

 

sudo fdisk -l /dev/sde
Disk /dev/sde: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EZAZ-11TDBA0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 70959473-593B-4676-BE10-42738C834B2C
Device          Start         End    Sectors  Size Type
/dev/sde1        2048     4982527    4980480  2.4G Linux RAID
/dev/sde2     4982528     9176831    4194304    2G Linux RAID
/dev/sde5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sde6  5860342336 15627846239 9767503904  4.6T Linux RAID


 

sudo fdisk -l /dev/sdf
Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WD30EZRX-00DC0B0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9E87F269-B630-11E9-8F59-0CC47AC3A20D
Device       Start        End    Sectors  Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf5  9453280 5860326239 5850872960  2.7T Linux RAID


 


  • 1 month later...

Hello, 

Sorry to hijack this topic, but I have EXACTLY the same issue :)

I have migrated my NAS from 6.2 to 7.1.

 

I have a VM (ESXi) and a SAS controller with plenty of HDDs.

The migration went OK, except for the error you describe, @Crunch.

 

I have typed the different commands listed here, except

sudo mdadm --manage /dev/md3 -a /dev/sde6

 

because I don't think I have the same HDD in error.

 

Please find below the results of the previous commands:

 

admin@Vidz:~$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid5 sdag5[1] sdak5[5] sdaj5[4] sdai5[3] sdah5[2]
      39045977280 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
md4 : active raid5 sdal6[0] sdan6[2] sdam6[1]
      7813997824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md2 : active raid5 sdal5[0] sdap5[4] sdao5[3] sdan5[2] sdam5[1]
      15608749824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md5 : active raid5 sdb5[0] sdd5[2] sdc5[1]
      45654656 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md6 : active raid1 sdc6[0] sdd6[1]
      71295424 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[0] sdc2[12] sdd2[11] sdap2[10] sdao2[9] sdan2[8] sdam2[7] sdal2[6] sdak2[5] sdaj2[4] sdai2[3] sdah2[2] sdag2[1]
      2097088 blocks [16/13] [UUUUUUUUUUUUU___]
md0 : active raid1 sdb1[0] sdc1[12] sdag1[11] sdah1[10] sdai1[9] sdaj1[8] sdak1[7] sdal1[6] sdam1[5] sdan1[4] sdao1[3] sdap1[2] sdd1[1]
      2490176 blocks [16/13] [UUUUUUUUUUUUU___]

 

So I can see md3 is in error: 5 HDDs OK and 1 HDD in error...

 

admin@Vidz:~$ sudo mdadm -D /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Sun Nov 15 18:27:08 2020
     Raid Level : raid5
     Array Size : 39045977280 (37237.15 GiB 39983.08 GB)
  Used Dev Size : 7809195456 (7447.43 GiB 7996.62 GB)
   Raid Devices : 6
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Sun Oct  2 14:23:45 2022
          State : active, degraded
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : Vidz2:3
           UUID : fddca54d:8b2c0bba:cde1b911:8a91e14d
         Events : 6928
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1      66        5        1      active sync   /dev/sdag5
       2      66       21        2      active sync   /dev/sdah5
       3      66       37        3      active sync   /dev/sdai5
       4      66       53        4      active sync   /dev/sdaj5
       5      66       69        5      active sync   /dev/sdak5


but here I cannot tell which HDD (sdxxx) I have to put in the command line below (instead of sde6):

sudo mdadm --manage /dev/md3 -a /dev/sde6

 

I would say 

sudo mdadm --manage /dev/md3 -a /dev/sdaf5

but as I'm not sure, I don't want to break something.

I still have access to my files; only this message is displayed (Volume Degraded).

 

Huge thanks in advance for your help @flyride


On 10/2/2022 at 7:35 PM, lokiki said:

I still have access to my files; only this message is displayed (Volume Degraded).

Before you do anything else, be smart and ensure all your files are backed up elsewhere.

 

Does the system not offer the ability to repair the array in the GUI?  If it does, that is preferred over manual intervention.

If it does not, this command should help identify where a partition is not being serviced.  Post the results.

sudo fdisk -l /dev/sd? | grep "^/dev/"
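As a sanity check before adding anything back (using /dev/sdaf5 from your proposed command purely as an example), mdadm --examine reads the RAID superblock straight off a partition, so you can confirm it carries the same Array UUID as the degraded /dev/md3:

sudo mdadm -D /dev/md3 | grep UUID            # UUID of the degraded array
sudo mdadm --examine /dev/sdaf5 | grep UUID   # the Array UUID here should match before you -a it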

Thanks for your prompt reply!

 

I know I need to have a backup; unfortunately, I don't have enough space to do it (24 TB to back up...).

 

Does the system not offer the ability to repair the array in the GUI?  If it does, that is preferred over manual intervention.

-> No, it is the same message as above: it needs one more disk (7.4 TB) to repair. And the main issue is that I don't have such a disk.

Everything before the migration was good; all HDDs were OK.

 

Here is the result of your command:

 

admin@Vidz:~$ sudo fdisk -l /dev/sd? | grep "^/dev/"
/dev/sdb1           8192 16785407 16777216    8G fd Linux raid autodetect
/dev/sdb2       16785408 20979711  4194304    2G fd Linux raid autodetect
/dev/sdb3       21241856 67107423 45865568 21.9G  f W95 Ext'd (LBA)
/dev/sdb5       21257952 66914655 45656704 21.8G fd Linux raid autodetect
/dev/sdc1           8192  16785407  16777216    8G fd Linux raid autodetect
/dev/sdc2       16785408  20979711   4194304    2G fd Linux raid autodetect
/dev/sdc3       21241856 209700351 188458496 89.9G  f W95 Ext'd (LBA)
/dev/sdc5       21257952  66914655  45656704 21.8G fd Linux raid autodetect
/dev/sdc6       66930752 209523679 142592928   68G fd Linux raid autodetect
/dev/sdd1           8192  16785407  16777216    8G fd Linux raid autodetect
/dev/sdd2       16785408  20979711   4194304    2G fd Linux raid autodetect
/dev/sdd3       21241856 209700351 188458496 89.9G  f W95 Ext'd (LBA)
/dev/sdd5       21257952  66914655  45656704 21.8G fd Linux raid autodetect
/dev/sdd6       66930752 209523679 142592928   68G fd Linux raid autodetect

I used Automated Redpill Loader v0.4-alpha9 for building my loader.

 

For the value of SataPortMap, I used SataPortMap=1.

 

And here is the result of the command line:

 

/dev/sdaf1     256     622815     622560  2.4G Linux RAID
/dev/sdaf2  622816    1147103     524288    2G Linux RAID
/dev/sdaf5 1181660 1953480779 1952299120  7.3T Linux RAID
/dev/sdag1     256     622815     622560  2.4G Linux RAID
/dev/sdag2  622816    1147103     524288    2G Linux RAID
/dev/sdag5 1181660 1953480779 1952299120  7.3T Linux RAID
/dev/sdah1     256     622815     622560  2.4G Linux RAID
/dev/sdah2  622816    1147103     524288    2G Linux RAID
/dev/sdah5 1181660 1953480779 1952299120  7.3T Linux RAID
/dev/sdai1     256     622815     622560  2.4G Linux RAID
/dev/sdai2  622816    1147103     524288    2G Linux RAID
/dev/sdai5 1181660 1953480779 1952299120  7.3T Linux RAID
/dev/sdaj1     256     622815     622560  2.4G Linux RAID
/dev/sdaj2  622816    1147103     524288    2G Linux RAID
/dev/sdaj5 1181660 1953480779 1952299120  7.3T Linux RAID
/dev/sdak1     256     622815     622560  2.4G Linux RAID
/dev/sdak2  622816    1147103     524288    2G Linux RAID
/dev/sdak5 1181660 1953480779 1952299120  7.3T Linux RAID
/dev/sdal1       2048     4982527    4980480  2.4G Linux RAID
/dev/sdal2    4982528     9176831    4194304    2G Linux RAID
/dev/sdal5    9453280  7813830239 7804376960  3.6T Linux RAID
/dev/sdal6 7813846336 15627846239 7813999904  3.7T Linux RAID
/dev/sdam1       2048     4982527    4980480  2.4G Linux RAID
/dev/sdam2    4982528     9176831    4194304    2G Linux RAID
/dev/sdam5    9453280  7813830239 7804376960  3.6T Linux RAID
/dev/sdam6 7813846336 15627846239 7813999904  3.7T Linux RAID
/dev/sdan1       2048     4982527    4980480  2.4G Linux RAID
/dev/sdan2    4982528     9176831    4194304    2G Linux RAID
/dev/sdan5    9453280  7813830239 7804376960  3.6T Linux RAID
/dev/sdan6 7813846336 15627846239 7813999904  3.7T Linux RAID
/dev/sdao1    2048    4982527    4980480  2.4G Linux RAID
/dev/sdao2 4982528    9176831    4194304    2G Linux RAID
/dev/sdao5 9453280 7813830239 7804376960  3.6T Linux RAID
/dev/sdap1    2048    4982527    4980480  2.4G Linux RAID
/dev/sdap2 4982528    9176831    4194304    2G Linux RAID
/dev/sdap5 9453280 7813830239 7804376960  3.6T Linux RAID
/dev/sdb1           8192 16785407 16777216    8G fd Linux raid autodetect
/dev/sdb2       16785408 20979711  4194304    2G fd Linux raid autodetect
/dev/sdb3       21241856 67107423 45865568 21.9G  f W95 Ext'd (LBA)
/dev/sdb5       21257952 66914655 45656704 21.8G fd Linux raid autodetect
/dev/sdb3p1      16096 45672799 45656704 21.8G fd Linux raid autodetect
/dev/sdc1           8192  16785407  16777216    8G fd Linux raid autodetect
/dev/sdc2       16785408  20979711   4194304    2G fd Linux raid autodetect
/dev/sdc3       21241856 209700351 188458496 89.9G  f W95 Ext'd (LBA)
/dev/sdc5       21257952  66914655  45656704 21.8G fd Linux raid autodetect
/dev/sdc6       66930752 209523679 142592928   68G fd Linux raid autodetect
Failed to read extended partition table (offset=45684934): Invalid argument
/dev/sdc3p1         16096  45672799  45656704 21.8G fd Linux raid autodetect
/dev/sdc3p2      45684934 188281823 142596890   68G  5 Extended
/dev/sdd1           8192  16785407  16777216    8G fd Linux raid autodetect
/dev/sdd2       16785408  20979711   4194304    2G fd Linux raid autodetect
/dev/sdd3       21241856 209700351 188458496 89.9G  f W95 Ext'd (LBA)
/dev/sdd5       21257952  66914655  45656704 21.8G fd Linux raid autodetect
/dev/sdd6       66930752 209523679 142592928   68G fd Linux raid autodetect
Failed to read extended partition table (offset=45684934): Invalid argument
/dev/sdd3p1         16096  45672799  45656704 21.8G fd Linux raid autodetect
/dev/sdd3p2      45684934 188281823 142596890   68G  5 Extended

 

Again, many thanks for your help. I understand what you are doing, but I'm afraid of typing a wrong command with disastrous consequences.


On the surface, I would agree with your proposed command to restore the array. However, I have two comments and will strongly reiterate my prior advice:

 

1. The sdaf disk is missing altogether from mdadm, and the sdaf1 and sdaf2 partitions should be present in the md0 and md1 arrays. This is an abnormal state. If the disk had simply dropped out, there should still be evidence of the missing md0 and md1 array members.
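(If you did decide to repair in place rather than rebuild, and only once the data array is healthy again, the small DSM system and swap mirrors get re-added the same way as the data partition. A sketch only, assuming the sdaf partitions really are intact and not in use elsewhere:)

sudo mdadm --manage /dev/md0 -a /dev/sdaf1    # DSM system partition mirror
sudo mdadm --manage /dev/md1 -a /dev/sdaf2    # swap mirror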

 

2. The SHR is really, really fragmented and is a liability even if it were healthy. You would be better served by rebuilding the SHR from scratch with all of the physical disks present from the beginning. I would not include the virtual disks; build a second Disk Group for them if you need them.

 

Given that your data is intact, you really MUST offload the data and get a backup.  You have time to procure or borrow adequate storage.  Go get it.  Then delete and rebuild the SHR which resolves both problems.  It would also be a safe opportunity to correct the incomplete DiskIdxMap/SataPortMap.


Thanks again for all your advice.

I have tried the command and it seems to work (the volume is rebuilding the RAID).

I know it is super risky, but I could not back up all this data; the size is really too big (and it is not critical data, so an HDD crash will not ruin my life).

I will defragment all the volumes afterwards.

 

Thanks again, @flyride, you have done a super job!

