About shaggy0928 (Transition Members)
  1. Any data recovery advice?

    Would I need a single external drive that is larger than the RAID array? It seems like the data could be copied to an external drive (all I have handy) and then possibly re-mounted in an array via Ubuntu for copying.

    A quick update: I rebooted into Synology and it showed ALL drives as being OK, and it starts parity checking, and then of course fails. This gives me a little hope that Disk 1 might somehow still be mountable. For now I am just running a straight copy onto some hard drives in order to save as much data as I can, and if no more possible solutions pop up I will just RMA the bad drive (still in warranty), remake the array with the other 3, and add the 4th when WD sends me a new one.
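On the drive-size question: SHR with single-disk redundancy across four equal 2 TB disks leaves roughly three disks' worth of usable space, so the external target needs to hold at most about 6 TB (less if the volume isn't full). A back-of-the-envelope sketch, assuming equal-size disks and single redundancy:

```shell
# SHR with one-disk redundancy over N equal disks ~ (N - 1) x disk size usable.
disks=4          # number of drives in the array
size_tb=2        # capacity of each drive in TB
usable=$(( (disks - 1) * size_tb ))
echo "External target needs at most ${usable} TB"
```

The actual amount to copy is the used space on the volume, which may be far less; `df -h` on the mounted volume gives the real number.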
  2. Any data recovery advice?

    Alright, strap yourselves in, because this might get long...

    Hardware setup:
    - 4x WD 2TB Red in SHR
    - ASRock H81M-HDS mobo
    - Intel Celeron processor
    - 8GB Crucial Ballistix RAM

    First, some background: a few days ago I noticed the network drives I have on my system were not showing up in Windows, so I navigated to the system via my browser, and it told me I needed to install an update and that my drives were from an old system and would need migration. I wrote a different post about that here: The version it wanted to install was the same version (or slightly higher) of 5.2, so I thought nothing of it and agreed to let the system update. It went through the install smoothly, but never rebooted. Eventually I was able to navigate back to the web interface, and it told me I now had 5.2 firmware, but 6.1-15152 DSM. I am still unclear how this install happened, but I assume it downloaded automatically from the internet even though I had enabled the "only install security patches" option. As I posted in the Tutorial forum a few posts after the linked one, I was able to get Jun's loader installed and boot into 6.1-15152, and I thought all was well.

    However, when I booted into the DSM, I was in a world of hurt. I clearly have one bad disk in the array that lists bad sectors, but that's the point of the SHR array, right? Well, I let the RAID start to repair itself, and always around 1.5% into the repair it crashes and tells me the System Volume has crashed. However, you'll notice in the Disk Info section there are only 3 disks. Looking into the logs shows that Disk 5 (the bad one) failed while trying to correct bad sectors. When this happens, Disk 1 (AFAIK a perfectly fine drive) switches to "Initialized, Normal" but drops out of the RAID array, and then the volume goes into crash mode. I don't understand the relationship between Disk 5 crashing out during the repair and Disk 1 disappearing.

    It stands to reason that if Disk 1 is fine, which it seems to be, the array would just stay in degraded mode until I can swap in a new drive. I have tried starting the system with Disk 5 unplugged, but that does no good. I have also begun playing around with attempts at data recovery in a LiveUSB of Ubuntu, using some of Synology's guides as well as just googling around.

    So I suppose I have a few questions:
    1. Does anyone know of a relationship between installing the new system and the bad disk causing the good disk to crash?
    2. How likely is it that Disk 1 (the AFAIK good disk) is also toast?
    3. Do you have any tips for recovering data from a situation like this?

    I would greatly appreciate any help or advice you can provide. I have been banging my head against a wall for 3 nights working on this. I have all the really important stuff backed up to the cloud, so it is not a matter of life and death (5 years, ~10000 photos), but there is a lot of other media that I would do a lot to avoid replacing, or to only replace some of.
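On question 3, the usual read-only approach from an Ubuntu LiveUSB is to reassemble the md array, activate the LVM layer SHR places on top of it, and mount read-only so nothing further is written to the disks. A rough sketch; `md2` as the data array and `/dev/vg1000/lv` as the volume path are common DSM defaults, not guaranteed on every box:

```shell
# Read-only recovery plan under an Ubuntu LiveUSB (commands shown as
# comments because they need the real hardware attached):
#
#   sudo apt-get install -y mdadm lvm2
#   sudo mdadm --assemble --scan          # detect the Synology md arrays
#   cat /proc/mdstat                      # the data array is usually md2+
#   sudo vgchange -ay                     # SHR layers LVM on top of md
#   sudo mount -o ro /dev/vg1000/lv /mnt  # read-only: no further writes
#
# An assembled data array's /proc/mdstat line looks roughly like this
# (device names and partition numbers are illustrative):
mdstat="md2 : active raid5 sda5[0] sdb5[1] sdc5[2] sdd5[3]"
echo "$mdstat" | grep -o "raid[0-9]"      # confirm the RAID level
```

If `mdadm --assemble --scan` refuses because of the dropped disk, `--force` can sometimes bring a degraded array up, at the cost of some risk; at that point imaging the disks first with `ddrescue` is the conservative move.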
  3. I was able to install Jun's loader successfully. Looks like a bad drive caused the failure; unclear why it booted me out of the system entirely. I need to get a new 4th drive (looks like I can still do an RMA on this WD one), so I am copying files off now and then will shut down completely until I can do that. Thanks for the help!
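For the RMA, WD will generally want evidence of the failure, and smartctl's reallocated-sector count is the usual proof. A sketch that parses a sample `smartctl -A` attribute row; the count of 128 is made up for illustration, and on the real disk you would run `sudo smartctl -A /dev/sdX`:

```shell
# Field 10 of a "smartctl -A" attribute row is the raw value.
# Sample line in smartctl's output format; the 128 count is illustrative.
line="  5 Reallocated_Sector_Ct   0x0033   100   100   140    Pre-fail  Always       -       128"
raw=$(echo "$line" | awk '{print $10}')
[ "$raw" -gt 0 ] && echo "Drive reports $raw reallocated sectors - RMA candidate"
```

A nonzero raw value on attribute 5 (or on 197/198, pending and uncorrectable sectors) is what warranty claims and drive-health tools typically key on.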
  4. ASRock H81M-HDS. According to the ASRock website, that means Realtek RTL8111G LAN (built in), and I'm just using the built-in SATA ports.
  5. This is a bit of troubleshooting, but it's also related to this tutorial, so I am hoping this is the best place to post my question.

    Hardware is as follows:
    - Fractal Node 804
    - 4x WD Red 2TB
    - ASRock mobo (will edit with the exact model when I am home)
    - Intel Celeron processor

    I tried to access my XPEnology box last night and saw the same screen as jeannotmer; however, I have not updated anything for a long while. I always let it download security updates, but not automatically install them. The only difference between what I saw and what jeannotmer had was that my firmware was 5.2-5644 and it wanted me to install a newer version of 5.2, so I clicked install (this small version upgrade seemed like a reasonable thing at the time). It seemed to install fine, but never rebooted back to the DSM login screen. So I attached an Ethernet cable straight to the box, went back to the Synology installer webpage, and now saw that my firmware was 5.2-5644 and my DSM version is now 6.1-15152. I am extremely unclear exactly how it made this jump, or what I did to cause this to happen. When I attach a monitor, I launch into the old XPEnoboot, and when it tries to load the firmware I see that tons of the files are missing and the system never comes up.

    So now my question is: should I just attempt to update the boot drive to 6.1 in order to bring everything up to date? And if I do, does anyone have any idea whether anything on the drives will still be there? (I haven't made a full backup in ages because I did not plan on updating. The really important stuff is backed up to Google Drive through Cloud Sync.) Or might it be better to attempt to downgrade the DSM back to a version XPEnoboot can boot and see what is salvageable that way? Thank you for any help!