XPEnology Community

Everything posted by DragonsDream

  1. I have a crashed RAID 5 made up of 6x6TB discs on an HP Proliant MicroServer running XPEnology 5.1. A few days ago we had a street-wide power outage. When my server box rebooted, something happened that I didn't notice until later: disc 1 failed and the RAID went into degraded mode. I started a repair, which I calculated was going to take 16 hours for the first step (no idea how long the second part takes). We were gone for the weekend, so I just let the repair run while we were away. When we got back, disc 1 was found but not initialized, so I initialized it, but as soon as that finished, disc 3 crashed. Now disc 3 is showing as crashed and disc 1 as initialized but no longer part of the volume, so the whole array is crashed. Is there a way to get disc 1 back into the volume so I can copy off data before redoing the whole setup (testing the drives, et al.)? With 2 discs out of the volume it's crashed; with only 1 missing it should be degraded and I could still access it. I am not being given the option to "manage" the volume in Storage Manager. I'm also wondering if it's possible to clone disc 3 and use the cloned disc in the server to get the RAID array back (a rough command-line sketch of both ideas is included after these posts). Is this possible? Thanks for any help.
  2. W00T!!!! Success! Well... enough success that I am doing a happy dance. I messed around with the settings and BIOS and managed to get 1 of the 2 missing drives found (the ODD one), so now I'm up to "degraded." I never thought I'd be excited about seeing my box boot up "degraded," but at least now we can copy everything off. That means scrounging for every spare 1 and 2 TB drive we have lying around, but at least now I can work on figuring out why the eSATA drive isn't showing up without the fear of losing everything.
  3. Unfortunately not. As each disk got replaced and the RAID was successfully rebuilt, those 4TB drives went into our other Proliant to upgrade that one's 2TB drives. I'm wishing now we hadn't been impatient, since if I still had the original disks, I'm sure this would be a non-issue.
  4. Before I take the next step here, I just want confirmation that I will be able to see at least 5 (if not all 6) drives when I boot from the test image. Do I need to change any settings? Will this damage anything further if I can't get to drives 5 & 6? Rule #1 here is to not do any further damage. Edit: here is what I am getting right now, before doing anything: https://www.dropbox.com/s/snpbpx40xb0oybc/Crashed%20Hawk.png?dl=0
  5. Well, the problem I am having is that DSM (and Ubuntu running in an old PC case) doesn't see discs 5 and 6 at all. They are showing up in the BIOS, and I have copied the settings from my second server that also has 6 discs, so I know those are correct. Before I take any further steps I want to be sure that I can get those discs to show up. I am assuming all I really need is to get to a point where I can see 5 discs in a degraded array so I can at least start copying off data (guess I'm going to be buying more hard drives, yay :-/ ). Would that be a correct assumption? (The read-only checks and degraded-assembly commands I have in mind are sketched after these posts.)
  6. I know just enough to be dangerous when it comes to RAIDs and servers. Do I just put that image on a USB stick and boot with it (as sketched after these posts), or do I need to install something? Will this fix that partition problem? Thanks again for your help. I'm studying that thread now to figure this all out.
  7. I have a Proliant N54L with 6xWD Red 4TB drives running DSM 5.0 in a single SHR. I have been upgrading by replacing the 4TB drives with 6TB ones, one at a time. I would replace one and rebuild the RAID, but not expand, as I figured I'd do that just once at the end of the process. The process went fine for the first 4 discs. However, I have a serious problem now after swapping out disk 5 (the one in the eSATA slot) and deciding it was time to expand the volume (since I wasn't sure the final disk would be replaceable - it's in the optical slot and I haven't seen confirmation yet that 6TB drives are supported there). DSM refused to expand; the process just didn't do anything. I tried using the Ubuntu method described elsewhere to force an expansion (roughly the sequence sketched after these posts), but that also didn't work (I suspect it didn't like the different drive sizes). The process would run, but no progress bar would show and nothing was actually being done. After 2 days I killed the process and went back to DSM. However, now my volume is crashed, DSM only sees 4 drives, and it lists drives 2-4 as not having a system partition. Can anyone help me get my volume back? I need the surefire settings for getting DSM to see drives in the 5th and 6th slots (I had this for a moment after upgrading drive 5 but before expanding) but seem to have lost it now (yes, I am using the modded BIOS). Do I need to reinstall DSM to get the system partitions back? Can I reinstall with a crashed volume and not risk losing the volume completely? Thanks in advance for any help! PS: please don't tell me what I should have done, such as "back up your data" - I already know that. If I had 19TB of free space to back up to, I wouldn't be here asking for help, and pointing it out now doesn't help me fix the current problem.
  8. Thank you for the reply and help. I did in fact look at those tutorials. Those are for installing; there is very little on upgrading, or at least nothing specifically on what to do differently (if anything - in which case that should also be specified). Am I to understand that migrating and installing basically follow the same steps?
  9. So no one in the XPEnology forum knows how to upgrade XPEnology?
  10. Hi, I have 2 HP Proliant MicroServers, both with XPEnology 4.3 and both with 6x4TB drives in an SHR. It's probably time to update them to the latest DSM. I am a little nervous about how to proceed, however, as I don't want to lose what is on my drives. In Synology Assistant, one is listed as ready and the other as not configured, which is weird since it has been installed for years now and works just fine. I have tried to update from within the OS using DSM 5.0-4528 (DSM_DS3612xs_4528.pat) but get an error message: "There is a temporary directory access error (error #2) during update". Is there some halfway step I need to do? Install without drives in (just the USB stick)? Any help would be appreciated.
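
Command-line sketch for post 1 (forcing a member back into the array, and cloning a failing disc). This is only a rough outline under assumptions: the data array is the usual DSM /dev/md2, the data partitions are the third partition on each disc (sda3, sdb3, ...), and everything is run from a live Linux environment rather than from DSM. Verify every device name with lsblk and mdadm --examine before touching anything.

    mdadm --examine /dev/sd[a-f]3       # compare event counts and array state on each member
    mdadm --stop /dev/md2               # stop any half-assembled array first
    # --force lets mdadm accept a member whose event count has fallen slightly
    # behind (e.g. the "initialized" disc 1), which is often enough to bring back
    # a degraded but readable array:
    mdadm --assemble --force /dev/md2 /dev/sd[abdef]3   # leave out the crashed disc 3 (here assumed to be sdc)
    # Cloning the crashed disc 3 onto a spare of equal or larger size is usually
    # done with ddrescue (package gddrescue on Ubuntu); /dev/sdX is the failing
    # disc, /dev/sdY the spare, and the map file lets the copy resume:
    ddrescue -f -n /dev/sdX /dev/sdY /root/disc3.map

After a forced assembly the goal is only to copy data off, not to keep running on the array.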
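
For posts 4 and 5: a minimal, read-only way to confirm from a live Ubuntu session which discs the machine can actually see before doing anything else. None of these commands write to the drives; device names are assumptions to be checked on the spot.

    lsblk -o NAME,SIZE,MODEL            # every block device the kernel detected
    fdisk -l                            # partition tables (DSM discs normally carry three partitions)
    cat /proc/mdstat                    # any md arrays the kernel has already found
    mdadm --examine /dev/sd[a-f]3       # RAID metadata on each data partition

If five of the six members show up, the data array can usually be started degraded and mounted read-only so files can be copied off. /dev/md2 and the LVM names below are typical Synology defaults, not certainties; check with pvs and lvs after assembling.

    mdadm --assemble --run /dev/md2 /dev/sd[a-e]3   # --run starts the array despite the missing member
    vgchange -ay                                    # activate the LVM volume group sitting on md2, if any
    mount -o ro /dev/vg1000/lv /mnt                 # mount the volume read-only and start copying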
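
For post 6: the test image is written raw to a USB stick and the box is booted from it; nothing gets installed on the data drives. A sketch assuming the image file is named boot.img and the stick appears as /dev/sdg - double-check the device with lsblk first, because dd will happily overwrite the wrong disk.

    lsblk                                            # identify the USB stick before writing
    sudo dd if=boot.img of=/dev/sdg bs=4M status=progress conv=fsync
    sync                                             # flush everything before pulling the stick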
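
For post 7: the "Ubuntu method" of expanding after members have been replaced with larger discs amounts to growing each storage layer in turn. This is only a sketch of the usual sequence, assuming a single data array /dev/md2 whose member partitions already span the larger discs, with LVM on top and an ext4 volume (the vg1000/lv names are common Synology defaults, not certainties). It should not be attempted while the array is degraded or crashed.

    mdadm --grow /dev/md2 --size=max      # let the md array use the full size of its members
    pvresize /dev/md2                     # grow the LVM physical volume to match
    lvextend -l +100%FREE /dev/vg1000/lv  # grow the logical volume into the new space
    resize2fs /dev/vg1000/lv              # finally grow the ext4 filesystem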