XPEnology Community

help recovering crashed volume on ProLiant N54L


DragonsDream


I have a ProLiant N54L with 6x WD Red 4TB drives running DSM 5.0 in a single SHR volume.

 

I have been upgrading by replacing the 4TB drives with 6TB ones, one at a time. I would replace one and rebuild the RAID, but not expand, as I figured I'd do that just once at the end of the process. This went fine for the first 4 discs. However, I now have a serious problem after swapping out disk 5 (the one in the eSATA slot) and deciding it was time to expand the volume (I wasn't sure the final disk would be replaceable - it's in the optical slot and I haven't seen confirmation yet that 6TB drives are supported there).

 

DSM refused to expand; the process just didn't do anything. I tried the Ubuntu method described elsewhere to force an expansion, but that didn't work either (I suspect it didn't like the mixed drive sizes). The process would run, but no progress bar showed and nothing was actually being done. After 2 days I killed the process and went back to DSM. Now, however, my volume is crashed: DSM only sees 4 drives and lists drives 2-4 as not having a system partition.
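For reference, the Ubuntu method I attempted boils down to something like the sketch below. This is only an outline under assumptions: md2 is usually the data array on a DSM box, but the device name has to be checked against /proc/mdstat, and the underlying partitions must already have been enlarged before the grow step can do anything.

sudo mdadm --assemble --scan             # assemble the Synology md arrays
cat /proc/mdstat                         # identify which md device holds the data volume
sudo mdadm --grow /dev/md2 --size=max    # grow the array into the larger replacement disks
watch cat /proc/mdstat                   # reshape/resync progress should show up here

In my case that last step is exactly where nothing ever showed any progress.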

 

Can anyone help me get my volume back? I need the surefire settings for getting DSM to see drives in the 5th and 6th slots (I had this working for a moment after upgrading drive 5, before expanding) but seem to have lost it now (yes, I am using the modded BIOS).

 

Do I need to reinstall DSM to get the system partitions back? Can I reinstall with a crashed volume without risking losing the volume completely?

 

Thanks in advance for any help!

 

PS: Please don't tell me what I should have done, such as "back up your data" - I already know that. If I had 19TB of free space to back up to, I wouldn't be here asking for help, and pointing it out now doesn't help me fix the current problem.


I went through a similar situation a couple of years ago, except my box crashed during the expansion and wiped out my partition tables. After I fixed the partition tables (it took lots of research, using another Linux system with testdisk), I could not get DSM to recognize the array - it would show as crashed in DSM, but via the console my volume1 was there, along with all my data. I eventually just backed up all my data to another drive and started over.
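If anyone wants to check what state their partition tables are in before reaching for testdisk, a quick look from any live Linux system goes something like this (sda is just an example device):

sudo parted -l            # list the partition table of every disk
sudo sgdisk -p /dev/sda   # print the GPT of a single disk
# a healthy DSM member disk normally carries a small system partition,
# a swap partition, and one large data partition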

 

I know Synology has a way of fixing crashed arrays so they are recognized in DSM again, but I'm not sure exactly what they do.

 

This probably doesn't help you much, but I would check via the console/shell and see if your volume is there, even though DSM marks it as crashed. If that doesn't work, mount the array in Linux and see if you can get your data that way. If you're determined to fix DSM and your array and you figure out a way, I'd be curious how you did it.
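Concretely, the console check I mean is roughly the following. md2 and the vg1000 path are the usual names on a stock DSM setup, so treat them as assumptions and adjust to whatever /proc/mdstat actually shows:

cat /proc/mdstat                          # are the md arrays assembled at all?
sudo mdadm --detail /dev/md2              # array state: clean, degraded, inactive...
sudo mkdir -p /mnt/volume1
sudo mount -o ro /dev/md2 /mnt/volume1    # read-only, so nothing gets written to the array
# on SHR volumes the data often sits inside LVM instead, e.g.:
# sudo vgchange -ay && sudo mount -o ro /dev/vg1000/lv /mnt/volume1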


It looks like the same problem I had: I couldn't expand my SHR RAID volume. I tried another boot image. When I wrote the new boot image to USB, the expansion ran and everything now works perfectly.

This is the topic; further in, there is a link to a fixed boot image. Try that one.

viewtopic.php?f=2&t=5233

I know just enough to be dangerous when it comes to RAIDs and servers. Do I just put that image on a USB stick and boot with it, or do I need to install something? Will this fix the partition problem? Thanks again for your help; I'm studying that thread now to figure this all out.


That test image is a fix for the initial XPEnology DSM 5.1 release. As far as I know, 5.0 with NanoBoot had no such issue. If you use that image you'd be upgrading your system from 5.0 to 5.1, and you will be offered the choice to migrate your data (if it can find it) or format your drives to build a new array...


Well, the problem I am having is that DSM (and Ubuntu running in an old PC case) doesn't see discs 5 and 6 at all. They show up in the BIOS, and I have copied the settings from my second server, which also has 6 discs, so I know those are correct. Before I take any further steps I want to be sure I can get those discs to show up. I am assuming all I really need is to get to a point where I can see 5 discs in a degraded array, so I can at least start copying off data (guess I'm going to be buying more hard drives, yay :-/ ). Would that be a correct assumption?
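From what I've been reading, once 5 of the 6 members are visible, starting the array degraded from a live Linux session should look roughly like this. The member partitions below are only examples - on DSM boxes the data partition is usually the 3rd or 5th on each disk, so it needs checking with parted -l first:

sudo mdadm --assemble --run /dev/md2 /dev/sd[abcde]5   # --run starts it with one member missing
# if mdadm refuses because of mismatched event counts, --force can be added,
# but only as a last resort since it papers over inconsistencies
cat /proc/mdstat                                       # md2 should now be active but degraded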


I was in the same situation...

I managed to get my data back using a Win7 live CD and UFS Explorer (a portable version).

I could "rebuild" the RAID and back up my data to another disc. Now I have to reinstall DSM.

Maybe there is a way to fix it properly, but my Linux "level" doesn't allow me to do it :oops: :lol:


Before I take the next step here, I just want confirmation that I will be able to see at least 5 (if not all 6) drives when I use the test image to boot. Do I need to change any settings? Will this damage anything further if I can't get to drives 5 & 6? Rule #1 here is to do no further damage.

 

Edit:

Here is what I am getting right now, before doing anything:

https://www.dropbox.com/s/snpbpx40xb0oybc/Crashed%20Hawk.png?dl=0


W00T!!!!

Success!

Well... enough success that I am doing a happy dance. I messed around with the settings and BIOS and managed to get 1 of the 2 missing drives detected (the ODD one), so now I'm up to "degraded." I never thought I'd be excited to see my box boot up "degraded," but at least now we can copy everything off. That means scrounging for every spare 1 and 2TB drive we have lying around, but at least now I can work on figuring out why the eSATA drive isn't showing up, without the fear of losing everything.
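For anyone following along, the copy-off plan is just a read-only mount plus rsync, something like this (the paths are examples):

sudo mount -o ro /dev/md2 /mnt/volume1                 # keep the degraded volume read-only
sudo rsync -avh --progress /mnt/volume1/ /mnt/spare/   # -a preserves attributes; rerunning resumes the copy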

