XPEnology Community

Help! - Two disks randomly dropped from SHR Volume


humancaviar


Hello, I was wondering if anyone has experience recovering from a volume crash after healthy disks are randomly dropped from an SHR volume. After system partition issues and a reboot, two healthy disks were dropped from my volume, rendering it unusable. I have confirmed that all of the RAID partitions for the volume are still present on those disks; they now show up as initialized disks in Storage Manager. I have backups, but there is interim data I would love to recover.

 

Manually adding the disks back into the RAID with mdadm is not an option because...

mdadm --examine /dev/sdg
mdadm: No md superblock detected on /dev/sdg.
mdadm --examine /dev/sdh
mdadm: No md superblock detected on /dev/sdh.
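
For completeness: the md superblocks normally live on the partitions rather than on the whole disk, so the bare devices can report "no superblock" even when the array metadata is still intact. Examining the partitions directly may tell a different story (the sd?5/sd?1/sd?2 names below are assumptions based on the fdisk output further down):

# data-array members are usually the fifth partition on each disk
mdadm --examine /dev/sdg5
mdadm --examine /dev/sdh5
# system and swap array members, for comparison
mdadm --examine /dev/sdg1 /dev/sdg2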

 

Should I boot Ubuntu and attempt recovery from there (and if so, how)? Or attempt a fresh install/update? Any ideas would be greatly appreciated.
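
If it comes to the Ubuntu route, here is the rough sequence I have in mind; it is only a sketch, and /dev/md2, /dev/sd[gh]5, and vg1000/lv are assumptions (adjust to whatever --examine and /proc/mdstat actually report):

# from an Ubuntu live session; mdadm/lvm2 may need installing first
sudo apt-get install -y mdadm lvm2
# see what array metadata mdadm can still find, without starting anything
sudo mdadm --examine --scan
# try a read-only assemble of the data array from its member partitions
sudo mdadm --assemble --readonly /dev/md2 /dev/sd[gh]5
# SHR data volumes normally sit on LVM, so activate the volume group first
sudo vgchange -ay
# then mount read-only to copy data off
sudo mount -o ro /dev/vg1000/lv /mnt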

 

Current version: DSM 5.2-5644 Update 3

 

###fdisk output for disks###

fdisk /dev/sdg -l

Disk /dev/sdg: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start      End      Blocks  Id  System
/dev/sdg1            1      311     2490240  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdg2          311      572     2097152  fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdg3          588   243201  1948788912   f  Win95 Ext'd (LBA)
/dev/sdg5          589   243189  1948684480  fd  Linux raid autodetect

 

fdisk /dev/sdh -l

Disk /dev/sdh: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start      End      Blocks  Id  System
/dev/sdh1            1      311     2490240  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdh2          311      572     2097152  fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdh3          588   243201  1948788912   f  Win95 Ext'd (LBA)
/dev/sdh5          589   243189  1948684480  fd  Linux raid autodetect
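
For what it's worth, that layout matches the usual DSM scheme (an assumption on my part, but the sizes line up): sd?1 is the ~2.4 GB DSM system partition, sd?2 is the 2 GB swap, and sd?5 inside the extended partition is the data-array member. A quick way to see which of those members the kernel still has assembled:

cat /proc/mdstat
# md2 is typically the first data array on DSM; adjust if /proc/mdstat says otherwise
mdadm --detail /dev/md2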


I ended up saying F-it, zeroed the disks, and went to restore from backup. After doing some research, it seems that stale RAID metadata on previously used disks can cause issues like this; fully zeroing them should prevent it from happening again.
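
For anyone who lands here later, "fully zeroing" in my case just meant making sure no stale md metadata survives on re-used disks. A rough sketch of the destructive cleanup (replace /dev/sdX with the disk being recycled, and triple-check the device name first):

# clear leftover md superblocks from each old member partition
mdadm --zero-superblock /dev/sdX1 /dev/sdX2 /dev/sdX5
# wipe remaining RAID/filesystem signatures from the whole disk
wipefs -a /dev/sdX
# or overwrite the entire disk (slow, but removes everything)
dd if=/dev/zero of=/dev/sdX bs=1M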

