XPEnology Community

DSM 6.2 on ESXi 6.7, storage pool crashed (RAID 5, ext4)


Rihc0


Hello everyone,

 

So I have a Dell R720 rack server running ESXi 6.7 U3.

The server runs several virtual machines, including an XPEnology VM, but my DSM crashed somehow (I really don't know what happened).

I have 5x 4 TB SAS drives, each configured on the RAID controller as its own virtual disk (5 virtual disks in total), so that each whole disk can be passed through to the virtual machine via raw device mapping.

My XPEnology VM has 5 RDMs (the 5x 4 TB SAS drives). DSM handles the RAID; the disks were set up as a RAID 5 storage pool with an ext4 file system. But now, when I booted the XPEnology VM, I got this (picture 1), and when I try to repair the storage pool (pictures 3, 4, 5) you can see the message :(.

Can anyone help me with this :D

 

PS: Now I remember that I deleted the virtual disks in the RAID controller, made a RAID 5 virtual disk for Windows, booted a Windows VM with that virtual disk (for some data recovery), then made a separate virtual disk for each drive again and added them back to XPEnology.

Screenshot 2020-11-11 at 03.26.56.png

Screenshot 2020-11-11 at 03.27.32.png

Screenshot 2020-11-11 at 03.27.55.png

Screenshot 2020-11-11 at 03.28.06.png

Screenshot 2020-11-11 at 03.29.08.png

Edited by Rihc0

On 11/11/2020 at 3:30 AM, Rihc0 said:

Now I remember that I deleted the virtual disks in the RAID controller, made a RAID 5 virtual disk for Windows, booted a Windows VM with that virtual disk (for some data recovery), then made a separate virtual disk for each drive again and added them back to XPEnology.

So there was/is no data on that volume?

The simple way to clean up the mess is to delete the storage pool (including the volume) and create a new one with the now-unused disks.

If there are any messages about the system partition not working, let DSM repair it (the system partition is a RAID 1 across all disks; as long as one copy is still intact, it will be replicated to the other disks).
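
As a read-only sanity check of that RAID 1 system partition, something like the following can be run over SSH (a sketch; on DSM the system partition is normally md0 and swap is md1, but device names can differ):

cat /proc/mdstat                  # overview of all md arrays and which members are missing
sudo mdadm --detail /dev/md0      # details of the DSM system partition mirror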

 


24 minutes ago, Rihc0 said:

There is data on it that I want to keep :(. I can't repair it because DSM says I need more disks for some reason.

If you had 5 RAID 0 disks on the physical controller, then destroyed 3 of them, moved those 3 disks into a RAID 5 (on the hardware controller), installed Windows on it, and after that destroyed that RAID 5 and created new RAID 0 disks again, how would there be any usable data left that still fits into a RAID?

If you remove 3 disks from a 5-disk RAID 5 set and destroy the data on those 3 disks, there is no way to fix it; RAID 5 has only one disk of redundancy. There are cases where more disks drop out of a RAID set and you can force them back in, as long as the difference is not too big, but with 3 disks reconfigured and written to in that new configuration (a RAID 5 of 3 disks), I'd guess there is nothing left of your old data that you can use.

 

That's what I took from your comment; maybe I got it wrong?
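
For reference, the "force them back in" case mentioned above would normally be attempted with something like this (a sketch with example device names; it only works while the member superblocks still exist, which is not the case here):

sudo mdadm --stop /dev/md3                               # stop the degraded array first
sudo mdadm --assemble --force /dev/md3 /dev/sd[defgh]3   # force-assemble from the surviving members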


I think he is saying ->

My DSM crashed. How to recover data?

I know! I will mount the disks under Windows and pull them off.

Hmmm. It's RAID 5. How do I do that?

I know! I'll define the disks as RAID 5 using the storage controller and mount the entire RAID to a running Windows machine and pull the data off.

Whoops! That didn't work. I'll redefine the disks using the storage controller, mount them individually again under DSM, and let DSM go back to using them as a RAID.

Ah! Nothing works!


Okay, so the physical controller had the 5 disks as one RAID 0 each, and everything was working fine and smoothly in DSM. I then made a RAID 5 virtual disk and attached it to Windows because I wanted to try data recovery on it: I had used those disks as a RAID 5 on a virtual machine before I used them for DSM. When I had scanned the disk and recovered the data, I configured the 5 disks as one RAID 0 each again and connected them to DSM, and now I have the error.

 

Do you guys get it? It might be a bit confusing xD


26 minutes ago, Rihc0 said:

I made a RAID 5 disk and put it in Windows

From what drives? You had 5 RAID 0 disks in your system, used for DSM.

Did you add 3-5 disks from another system to create the RAID 5 for Windows?

 

Having a closer look at your pictures, it looks really strange: Pool 1 is RAID 5, but Volume 1 says RAID 1, and it is 2.28 gigabytes, not terabytes.

In the second picture of the storage pool, right below Storage Pool 1 it says RAID 5, and on the right, next to the error, it says RAID 1.

Also, it is a small GB number, and that size of 2.37 GB looks like the size of a DSM system partition (those are RAID 1).

So for some reason the pool thinks these RAID 1 partitions are part of the pool.

That looks very strange.

 

Maybe you should list some basics from the command line, like the list of disks (/dev/sda, /dev/sdb, ...) and the partitions on every disk. This should do it:

sudo fdisk -l /dev/sd*

 

Also the output of:

sudo cat /proc/mdstat

 


Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa43eb840

Device     Boot   Start     End Sectors Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 33349631 23912448 11.4G fd Linux raid autodetect
Disk /dev/sdb1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb3: 11.4 GiB, 12243173376 bytes, 23912448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7a551fe7

Device     Boot   Start     End Sectors Size Id Type
/dev/sdc1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdc2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdc3       9437184 33349631 23912448 11.4G fd Linux raid autodetect
Disk /dev/sdc1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc3: 11.4 GiB, 12243173376 bytes, 23912448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AB3D10CC-2A35-4075-AF8B-135C47B30870

Device       Start       End   Sectors Size Type
/dev/sdd1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2  4982528    9176831    4194304    2G Linux RAID
/dev/sdd3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdd1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sde: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E3D1C69D-D406-4A97-BF00-168262F1025C

Device       Start       End   Sectors Size Type
/dev/sde1     2048    4982527    4980480  2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304    2G Linux RAID
/dev/sde3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdf: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EB6537B4-CC88-4A1B-99A4-C235A327CFB6

Device       Start       End   Sectors Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdf1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F681C773-F2AE-452F-8E71-55511FA5AEE0

Device       Start       End   Sectors Size Type
/dev/sdh1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdh2  4982528    9176831    4194304    2G Linux RAID
/dev/sdh3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdh1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdm3: 4 MiB, 4177408 bytes, 8159 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

The 5 drives in my Synology right now are the drives I had in the first place.


2 hours ago, Rihc0 said:

Okay, so the physical controller had the 5 disks as one RAID 0 each, and everything was working fine and smoothly in DSM. I then made a RAID 5 virtual disk and attached it to Windows because I wanted to try data recovery on it: I had used those disks as a RAID 5 on a virtual machine before I used them for DSM. When I had scanned the disk and recovered the data, I configured the 5 disks as one RAID 0 each again and connected them to DSM, and now I have the error.

 

Do you guys get it? It might be a bit confusing xD

 

RAID 5 on a hardware controller uses the full capacity of each disk for the array. There is probably no metadata on the drives at all; it's usually kept in NVRAM on the controller. With a hardware RAID controller, the array member numbering corresponds to the physical ports on the controller.

 

RAID 5 in DSM is built over a data partition (the third partition) on each member drive. Each member carries metadata (an md superblock) that describes how it fits into the array, and there is no guarantee the members are in port order.
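
For illustration, that per-member metadata can be inspected without writing anything; the device name below is just an example:

sudo mdadm --examine /dev/sde3    # prints Array UUID, Device Role, Events counter, etc. for one member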

 

There is no way to interchange between hardware RAID and mdadm, and it's very likely that critical data structures have been corrupted by trying to do so. I think this is a basket case.

Edited by flyride

19 minutes ago, Rihc0 said:

you think I won't be able to repair this?

If you were a data recovery specialist working for Kroll Ontrack, you could ;-)

You could ask a professional recovery company and describe your case in detail, but my guess is that the recovery would cost you at least a few thousand bucks.

Besides this, your fdisk listing shows two 16 GB disks and four 4 TB disks.

You stated it's 5x 4 TB; somehow even that does not match.

Maybe you should think about doing backups, and maybe about an original Synology NAS.

Edited by IG-88

Screenshot 2020-11-13 at 01.02.55.png

I created the 2x 16 GB disks so I could boot the Synology NAS. I had a DS918+, but the hardware was slow :(.

I asked whether you think I won't be able to repair this because, if not, I know it is pointless to keep working on this problem. I should learn more about RAID and how it works; I am too unfamiliar with it :(


3 minutes ago, Rihc0 said:

I asked whether you think I won't be able to repair this because, if not, I know it is pointless to keep working on this problem. I should learn more about RAID and how it works; I am too unfamiliar with it

 

1 hour ago, flyride said:

think this is a basket case.

If he says so, I'm pretty sure you won't be able to do it yourself just by reading some articles about mdadm.

He is the one helping people with such problems here and might be the one with the most experience.

 

You would need deep knowledge of the data structures on those mdadm disks and would have to reconstruct some data manually, and even then there would be some losses, depending on how much was written to the disks. That was the meaning of my comment about working for a world-famous recovery company.

If you just initialized the hardware RAID and the controller did not write the whole disk, then most sectors still hold their original content, and if the controller only wrote to the first few MB of each disk, your data might be untouched, since it sits on the 3rd partition (2.4 GB DSM system, 2 GB swap, then your data partition from the RAID 5).
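
One way to get a feeling for this without writing anything is a read-only spot check of a data partition (a sketch; the device name is an example, and the 1 MiB skip is only meant to jump past the md metadata area at the start of the partition):

sudo dd if=/dev/sdf3 bs=1M skip=1 count=1 2>/dev/null | hexdump -C | head -n 20
# a fully zeroed region shows up as one line of 00s followed by '*'; use od -c if hexdump is unavailable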

You would need to know what the controller did when it made the disks into a RAID 5 set, and what happened after that (what you did with Windows).

I guess a recovery professional would do some kind of interview to reconstruct what happened and get a rough estimate of whether it is even worth starting. If the controller did a full pass zeroing the disks in the RAID (which would take at least 1-2 hours, I guess), it would be pointless to even start, but the fact that we still see the old partitions indicates that the zeroing did not happen.


Okay, I had to do this on another virtual machine, because I deleted the other one when I thought it was hopeless :P.

 

output of sudo fdisk -l /dev/sd*

ash-4.3# sudo fdisk -l /dev/sd* 
Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x22d5f435

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 33349631 23912448 11.4G fd Linux raid autodetect
Disk /dev/sdb1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb3: 11.4 GiB, 12243173376 bytes, 23912448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8504927a

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdc1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdc2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdc3       9437184 33349631 23912448 11.4G fd Linux raid autodetect
Disk /dev/sdc1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc3: 11.4 GiB, 12243173376 bytes, 23912448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 3.7 TiB, 3999688294400 bytes, 7811891200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 88AB940C-A74C-425C-B303-3DD15285C607

Device     Start        End    Sectors  Size Type
/dev/sdd1   2048 7811889152 7811887105  3.7T unknown
Disk /dev/sdd1: 3.7 TiB, 3999686197760 bytes, 7811887105 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EB6537B4-CC88-4A1B-99A4-C235A327CFB6

Device       Start        End    Sectors  Size Type
/dev/sde1     2048    4982527    4980480  2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304    2G Linux RAID
/dev/sde3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sde1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AB3D10CC-2A35-4075-AF8B-135C47B30870

Device       Start        End    Sectors  Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdf1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F681C773-F2AE-452F-8E71-55511FA5AEE0

Device       Start        End    Sectors  Size Type
/dev/sdg1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdg2  4982528    9176831    4194304    2G Linux RAID
/dev/sdg3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdg1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdh: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E3D1C69D-D406-4A97-BF00-168262F1025C

Device       Start        End    Sectors  Size Type
/dev/sdh1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdh2  4982528    9176831    4194304    2G Linux RAID
/dev/sdh3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdi: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdm3: 4 MiB, 4177408 bytes, 8159 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
ash-4.3# 

 

output of cat /proc/mdstat

ash-4.3# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1] 
md3 : active raid5 sdf3[0] sde3[1]
      15606591488 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/2] [UU___]
      
md2 : active raid1 sdb3[0] sdc3[1]
      11955200 blocks super 1.2 [2/2] [UU]
      
md127 : active raid1 sde1[1] sdf1[0]
      2490176 blocks [12/2] [UU__________]
      
md1 : active raid1 sdb2[0] sdc2[1] sde2[2] sdf2[3] sdg2[4]
      2097088 blocks [12/5] [UUUUU_______]
      
md0 : active raid1 sdb1[0] sdc1[1]
      2490176 blocks [12/2] [UU__________]
      
unused devices: <none>
ash-4.3# 

 

 

output of: mdadm --detail /dev/md3

ash-4.3# mdadm --detail /dev/md3 
/dev/md3:
        Version : 1.2
  Creation Time : Sat Jun 20 00:46:08 2020
     Raid Level : raid5
     Array Size : 15606591488 (14883.61 GiB 15981.15 GB)
  Used Dev Size : 3901647872 (3720.90 GiB 3995.29 GB)
   Raid Devices : 5
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Nov 13 02:22:44 2020
          State : clean, FAILED 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : Dabadoo:2
           UUID : ff64862b:9edfe233:c498ea84:9d4b9ffd
         Events : 39091

    Number   Major   Minor   RaidDevice State
       0       8       83        0      active sync   /dev/sdf3
       1       8       67        1      active sync   /dev/sde3
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed
ash-4.3# 

 

output of: mdadm --examine /dev/sd[defgh]3 | egrep 'Event|/dev/sd'

ash-4.3# mdadm --examine /dev/sd[defgh]3 | egrep 'Event|/dev/sd' 
mdadm: No md superblock detected on /dev/sdg3.
/dev/sde3:
         Events : 39091
/dev/sdf3:
         Events : 39091
ash-4.3# 

 


Okay, sorry for the late reply; the Lifecycle Controller on my server was not doing great and I had to troubleshoot it.

 

I'll redo all the commands because there might have been some changes.

 

output of sudo fdisk -l /dev/sd*

ash-4.3# sudo fdisk -l /dev/sd*
Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x22d5f435

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 33349631 23912448 11.4G fd Linux raid autodetect
Disk /dev/sdb1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb3: 11.4 GiB, 12243173376 bytes, 23912448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8504927a

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdc1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdc2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdc3       9437184 33349631 23912448 11.4G fd Linux raid autodetect
Disk /dev/sdc1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc3: 11.4 GiB, 12243173376 bytes, 23912448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EB6537B4-CC88-4A1B-99A4-C235A327CFB6

Device       Start        End    Sectors  Size Type
/dev/sde1     2048    4982527    4980480  2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304    2G Linux RAID
/dev/sde3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sde1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdf: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E3D1C69D-D406-4A97-BF00-168262F1025C

Device       Start        End    Sectors  Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdg: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AB3D10CC-2A35-4075-AF8B-135C47B30870

Device       Start        End    Sectors  Size Type
/dev/sdg1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdg2  4982528    9176831    4194304    2G Linux RAID
/dev/sdg3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdg1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh: 3.7 TiB, 4000225165312 bytes, 7812939776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F681C773-F2AE-452F-8E71-55511FA5AEE0

Device       Start        End    Sectors  Size Type
/dev/sdh1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdh2  4982528    9176831    4194304    2G Linux RAID
/dev/sdh3  9437184 7812734975 7803297792  3.6T Linux RAID
Disk /dev/sdh1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh3: 3.6 TiB, 3995288469504 bytes, 7803297792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdm3: 4 MiB, 4177408 bytes, 8159 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

 

output of cat /proc/mdstat

ash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1] 
md3 : active raid5 sdg3[0] sde3[1]
      15606591488 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/2] [UU___]
      
md2 : active raid1 sdb3[0] sdc3[1]
      11955200 blocks super 1.2 [2/2] [UU]
      
md127 : active raid1 sde1[1] sdg1[0]
      2490176 blocks [12/2] [UU__________]
      
md1 : active raid1 sdb2[0] sdc2[1] sde2[2] sdg2[3] sdh2[4]
      2097088 blocks [12/5] [UUUUU_______]
      
md0 : active raid1 sdb1[0] sdc1[1]
      2490176 blocks [12/2] [UU__________]
      
unused devices: <none>

 

 

output of: mdadm --detail /dev/md3

ash-4.3# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Sat Jun 20 00:46:08 2020
     Raid Level : raid5
     Array Size : 15606591488 (14883.61 GiB 15981.15 GB)
  Used Dev Size : 3901647872 (3720.90 GiB 3995.29 GB)
   Raid Devices : 5
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Nov 17 03:41:41 2020
          State : clean, FAILED 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : Dabadoo:2
           UUID : ff64862b:9edfe233:c498ea84:9d4b9ffd
         Events : 39101

    Number   Major   Minor   RaidDevice State
       0       8       99        0      active sync   /dev/sdg3
       1       8       67        1      active sync   /dev/sde3
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed

 

 

output of: mdadm --examine /dev/sd[defgh]3 | egrep 'Event|/dev/sd'

ash-4.3# mdadm --examine /dev/sd[defgh]3 | egrep 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sdh3.
/dev/sde3:
         Events : 39101
/dev/sdg3:
         Events : 39101

 

 

 

mdadm --examine /dev/sdg3

ash-4.3# mdadm --examine /dev/sdg3
/dev/sdg3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ff64862b:9edfe233:c498ea84:9d4b9ffd
           Name : Dabadoo:2
  Creation Time : Sat Jun 20 00:46:08 2020
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 7803295744 (3720.90 GiB 3995.29 GB)
     Array Size : 15606591488 (14883.61 GiB 15981.15 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : e0c37824:42d56226:4bb0cdcc:d29cca2f

    Update Time : Tue Nov 17 03:41:41 2020
       Checksum : cf00f943 - correct
         Events : 39101

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AA... ('A' == active, '.' == missing, 'R' == replacing)

 

 

mdadm --examine /dev/sdh3

ash-4.3# mdadm --examine /dev/sdh3
mdadm: No md superblock detected on /dev/sdh3.

 

 


As you suggest, it seems like disk assignments have moved around since your first post.

Again, a hardware raid will always use the port position to determine array member order, but mdraid does not because it maintains an on-drive superblock.

 

In this second run, md knows that the drive assigned to /dev/sdg3 is array member 0, and /dev/sde3 is #1.  We need four disks out of five to recover data from the array.

 

There are two really big problems with the array at the moment.

 

/dev/sdd doesn't seem to have a recognizable partition table at all, and /dev/sdf and /dev/sdh have RAID partition structures, but their superblocks are no longer intact. As mentioned before, this very likely happened when the disks were introduced to a foreign controller and array structure.

 

It is possible that usable data still exists on the inaccessible disks. We could still try to start the array from sde/f/g/h, but the available data doesn't tell us how to order sdf and sdh. Do you have any fdisk -l dumps from before the crash (or any other saved array information) so we can match the drive UUIDs to the array member positions?
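
If any old notes or console logs are still around, these are the read-only commands whose current output the saved information would be compared against (device letters as they appear in this thread; the grep patterns are just examples):

sudo fdisk -l /dev/sd[defgh] | grep -i identifier                     # GPT disk identifiers
sudo mdadm --examine /dev/sd[defgh]3 2>/dev/null | egrep 'UUID|Role'  # Device UUID / Device Role where a superblock survives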


This is the step in the process where you must understand that your best chances for recovery are to send your disks to a forensic lab.  It's expensive and there is no guarantee they can recover data, but once we start to try brute-force methods to overcome the corruption that has occurred on your disks, some of the information that a lab uses to figure out how to work will be overwritten and their job will become harder or impossible.

 

Again, the issue is that three of your five disks are now missing array administrative information, and /dev/sdd doesn't even have a partition table.  Because the information defining the order of those three disks is lost, we have to work through the six possible permutations and see if any of them result in a recoverable filesystem.  Logically, I recommend we omit /dev/sdd, because it is the most likely to have corrupted data inside the array partition. If we include it in the array and its data is corrupted, the array will be inaccessible.  If we leave it out and the remaining disks are okay, the array will still function thanks to parity redundancy.

 

Here's a table of the different array member permutations that will need to be attempted for recovery:

 

[Image: table of the six array member permutations]

 

The process will be to manually override the array with each of these configurations, then attempt to access it in a read-only mode, and if successful, STOP.  Don't reboot, don't click on any repair options in the DSM UI.  Your best result will be to copy data off immediately.  Once you have recovered as much data as possible, only then consider whether it makes sense to repair the array in place, or to delete and rebuild it.
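
Before touching anything, the six candidate orderings can simply be printed first (a dry-run sketch; nothing is created or written here, and the mdadm parameters are the same ones used for the real attempt further down in this thread):

# print, do NOT execute: the six possible member orders for array slots 2-4
for tail in "/dev/sdf3 /dev/sdh3 missing" "/dev/sdf3 missing /dev/sdh3" \
            "/dev/sdh3 /dev/sdf3 missing" "/dev/sdh3 missing /dev/sdf3" \
            "missing /dev/sdf3 /dev/sdh3" "missing /dev/sdh3 /dev/sdf3"; do
  echo mdadm -v --create --assume-clean -e1.2 -n5 -l5 /dev/md3 /dev/sdg3 /dev/sde3 $tail
done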

 

If you wish to proceed, please confirm whether you chose RAID5 or SHR when you created the array in DSM.  Also is the filesystem ext4 or btrfs?

 

EDIT: I now realize that you posted that information at the beginning.  RAID 5 rather than SHR makes this job easier; that's good.  Unfortunately, a data recovery like this would benefit from btrfs rather than ext4, since btrfs can detect corruption if its checksumming features are properly enabled on the filesystem.  In any case, it is irrelevant for this recovery.

Edited by flyride

I see. I thought I used RAID 5, but I'm not sure; the filesystem is ext4 for sure. Let's try.

 

By the way, how did you configure your NAS? I want to do it the best way, but at the moment I learn by making mistakes, and I don't know what the best way to set it up is.


A work emergency took my idle time away; yay for the weekend.

 

On 11/17/2020 at 7:50 AM, flyride said:

Here's a table of the different array member permutations that will need to be attempted for recovery:

 

[Image: table of the six array member permutations]

 

The process will be to manually override the array with each of these configurations, then attempt to access it in a read-only mode, and if successful, STOP.  Don't reboot, don't click on any repair options in the DSM UI.  Your best result will be to copy data off immediately.  Once you have recovered as much data as possible, only then consider whether it makes sense to repair the array in place, or to delete and rebuild it.

 

We have to model the array in sequence according to the table above.  Only one of the six permutations will be successful, so it is important that we do not write to the array until we are sure that it is good.  The following will create the array in the configuration of the first line of the table (note that we are root):

 

# cat /etc/fstab


# mdadm --stop /dev/md3
# mdadm -v --create --assume-clean -e1.2 -n5 -l5 /dev/md3 /dev/sdg3 /dev/sde3 /dev/sdf3 /dev/sdh3 missing -uff64862b:9edfe233:c498ea84:9d4b9ffd
# cat /proc/mdstat
# mount -o ro,noload /dev/md3 /volume1

 

I expect that all the commands will execute without error, except possibly the mount command.  If the mount succeeds, we may have found the correct sequence of drives in the array. If it fails, we need to investigate further before proceeding to the next sequence.
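
A rough sketch of the follow-up, assuming the mount succeeds (the copy target is only an example, e.g. an attached USB disk; any destination with enough space works):

# ls /volume1
# rsync -a /volume1/ /volumeUSB1/usbshare/recovered/

And if the mount fails, the test array should be stopped again (the umount is only needed if a previous attempt left something mounted) before investigating or moving on to the next ordering from the table:

# umount /volume1
# mdadm --stop /dev/md3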

 

Remember NOT to use any DSM UI features to edit the array, fix the System Partition, or make any other changes.

