XPEnology Community

Volume Crashed - Need Help!


commie83

Question

Hi all,

 

I had been getting S.M.A.R.T. errors on one of my drives for a few months. It finally kicked the bucket the other day, so I got another hard drive to replace it.

 

My system was a 4x3TB SHR (Synology Hybrid RAID) running DSM 5.2 Update 5. When I took the failed drive out there were system partition errors and the volume had crashed, but I recovered it. It has crashed a couple more times since then. I believe it was in a "degraded" state when I turned it off to swap out the drive. When I turned it back on with the new drive in place, the volume had crashed again; the new drive shows as "Not Initialized" and I can't add it because the "Manage" button is greyed out.

 

Is there any way to recover the partition myself? I don't know a whole lot about mounting and fixing partitions in Linux, but I'd like to learn, and more than anything I'd like to recover my files :sad:

 

As of right now I don't see anything when I check /volume1, and I get errors when I try the commands suggested in similar threads on this forum. Is there a first command I should run that might tell me whether this is recoverable?
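
(For anyone searching later: from what I've pieced together in other threads, the usual first checks seem to be the ones below, assuming you can SSH in as root. I'm not sure they're right for my setup, so treat them as a starting point rather than gospel.)

cat /proc/mdstat          # shows which md arrays exist and whether they're degraded
mdadm --detail /dev/md2   # /dev/md2 is usually the data array on DSM; yours may differ
parted -l                 # lists the physical disks and their partitions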

 

Any help would be really appreciated!

 

Thanks!

 

Update:

I ran testdisk, but I've never used it before and I can't seem to find a tutorial that covers my particular situation. Any pointers?


7 answers to this question

Recommended Posts


I added some drives to my system, rebooted, and now the system won't start. Synology Assistant says it's ready, and I can SSH into the system, but all my data is missing. I removed the drives I added and rebooted again; same thing. Any suggestions?



You might be 'lucky' and be able to recover using the Ubuntu mount method suggested by @brantje; that generally works if you can reassemble the array. However, from your description it's possible that you had a second drive failure, and that is what crashed your volume.
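
For reference, the Ubuntu route usually goes roughly like this (a sketch, not a tested recipe: /dev/md2 and /dev/vg1000/lv are the usual names on a Synology box but may differ on yours, and everything should stay read-only until your data is copied off):

apt-get install mdadm lvm2           # tools for Linux software RAID and LVM
mdadm --assemble --scan              # try to reassemble the arrays from their superblocks
cat /proc/mdstat                     # confirm the data array (usually md2) came up
vgchange -ay                         # activate LVM volume groups, if the SHR volume uses LVM
mkdir -p /mnt/recovery
mount -o ro /dev/md2 /mnt/recovery   # plain volume: mount the md device read-only
# if the volume sits inside LVM instead: mount -o ro /dev/vg1000/lv /mnt/recovery

The read-only mount matters: if a second disk is on its way out, you want zero writes until the data is off.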

 

I've been 'lucky' myself in this situation and found that putting the original drives back in their original order lets DSM reassemble the RAID. If this works for you, you may get warnings to run a file system check - don't do that, as it takes ages and the intense disk activity might crash the bad drive again. Instead, take a data backup and a config backup, replace your failed drive(s), and create a new volume/restore.
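
Once the volume is readable again, something like this gets the data off (just a sketch: the destination path is an example, so point it at your own USB disk or another NAS):

# copy everything off before touching the failed drive; destination path is an example
rsync -avh --progress /volume1/ /volumeUSB1/usbshare/volume1-backup/

The config backup lives in DSM's Control Panel (under Update & Restore in 5.2, if I remember right).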



I SSH'd into my system and found the hard drives with

parted -l

and I also found /dev/md2, which parted reports like this:

Error: /dev/md2: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md2: 2986GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Is there any way I can put this array back together? I followed the suggested instructions and got this:

mdadm: Found some drive for an array that is already active: /dev/md/0_0
mdadm: giving up.
mdadm: No arrays found in config file or automatically

which doesn't resolve anything.
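
From what I can tell from other threads, that "already active" error means one of the member drives already belongs to a running array, so the usual suggestion seems to be stopping the stale array before retrying assembly. Roughly (I can't vouch for this on SHR, and nothing should be mounted when you stop the array):

cat /proc/mdstat          # see which arrays are currently active
mdadm --stop /dev/md2     # stop the half-assembled data array
mdadm --assemble --scan   # let mdadm retry assembly from the superblocks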

 

UPDATE: I just ran cat /proc/mdstat and it shows md2 undergoing recovery. I'm going to wait for it to finish and see what's what.
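
(If anyone else is watching a rebuild, these seem to be the standard ways to check on it:)

watch -n 60 cat /proc/mdstat   # refresh the rebuild progress every minute
mdadm --detail /dev/md2        # per-disk state plus the rebuild percentage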

Edited by mckaycr
