Tutorial: How to downgrade from 6.2 to 6.1 - Recovering a bricked system

Hello! This is a short guide on how to downgrade from DSM 6.2 to DSM 6.1 after a failed upgrade. I made this mistake myself, so I'm sharing how to fix it!

 

  • You're going to need a spare HDD or SSD. Make sure there is nothing on it.
  • You've installed DSM before, so you should still have your synoboot.img from Jun's Loader.
  • Your current bootloader on the USB stick is now NON-FUNCTIONING; you will need to reflash the USB with Jun's Loader (a sketch of this follows below).
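For the reflash itself, most people use Win32 Disk Imager or dd. Just as an illustration, here is a minimal Python sketch of the same raw copy; the /dev/sdX device path is a placeholder, so triple-check it (e.g. with lsblk) before running, because this overwrites the target device.

# Minimal sketch: raw-copy a loader image onto a USB stick (Linux, run as root).
# WARNING: this overwrites the target device. /dev/sdX is a PLACEHOLDER --
# verify the device path (e.g. with `lsblk`) before running.
import shutil

IMAGE = "synoboot.img"   # the Jun's Loader image you already have
DEVICE = "/dev/sdX"      # PLACEHOLDER: your USB stick, NOT a hard drive

with open(IMAGE, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4 MiB chunks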

 

  1. Once you have your new, reflashed USB stick, unplug ALL drives from the motherboard. Make sure there are NO SATA interfaces connected.
  2. Plug your new, empty drive into the motherboard and boot from the USB stick (you will need a keyboard for this step). The first option in the GRUB menu is the one you want.
  3. Use http://find.synology.com or Synology Web Assistant to find your DSM install (if it doesn't show up, see the sketch after this list).
  4. Follow the steps and install DSM 6.1 from a PAT file. DO NOT INSTALL 6.2.
    • IF YOU LEFT A DRIVE IN, IT WILL SAY RECOVER. GO BACK TO STEP 2.
  5. Let Synology do its thing and eventually you will have a working 6.1.
  6. While the PC is ON, connect the drives (with your data on them) to the motherboard. Under Storage Manager they will pop up marked Unused.
  7. You might need to reboot at this point. It took me some fiddling around, but eventually Synology will show a Recover Crashed Partition prompt. Click Yes.
  8. All of your data will reappear, and Synology will automatically perform RAID scrubbing.
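If find.synology.com or Synology Assistant can't see the box (a problem that comes up later in this thread), a crude fallback is to sweep your subnet for DSM's default web port, 5000. A minimal sketch; the 192.168.1.x subnet is an assumption, adjust it to your LAN.

# Sketch: probe a /24 subnet for hosts answering on TCP 5000, DSM's
# default web UI port. The subnet below is an assumption -- adjust it.
import socket

SUBNET = "192.168.1."   # PLACEHOLDER: your LAN's /24 prefix

for host in range(1, 255):
    addr = SUBNET + str(host)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)
        if s.connect_ex((addr, 5000)) == 0:   # 0 means the port accepted
            print("Possible DSM box at http://%s:5000" % addr)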

Thanks, this worked for me!

In my case my update to 6.1.7 didn't work.

I wasn't able to find my NAS.

So I followed the steps, and on step 4 I chose 6.1.7 again; on step 6 I removed my spare SSD, reconnected my other drives, and restarted from the menu.

Once I got back to the main screen it was as if nothing had ever happened; everything was back to normal.


Hello, I've tried your guide.

I took my 4 disks (with data) out, and I booted my HP ProLiant Gen8 with Jun's loader on a USB key and one blank disk.

I can access the NAS with DSM 6.1.

BUT when I swap out the blank disk to put in my 4 disks, the NAS is invisible again on my network.

Can I leave the blank disk in and add 3 of my 4 disks? In that case, will I get my data back?

Thanks



This is a variation of the 'Clean Drive' downgrade process I've described before in several threads:

 

1) Disconnect your raid drives

2) Connect spare HDD to SATA port 1

3) Install DSM 6.x from a fresh 1.02b loader, create an admin user and server name matching the live system, DON'T create a volume/RAID. Update DSM to match your pre-beta version

4) Shutdown, reconnect raid drives to SATA ports 2-n

5) Boot, login, repair system partitions on the raid (see the sketch below), shutdown

6) Remove spare HDD and reconnect raid drives to SATA ports 1-n

7) Boot, repair packages etc

 

 

My additional steps would be to power down the server between drive swaps, and to update DSM on the clean drive to the version matching the one running before the crash.
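For anyone who wants to sanity-check step 5 before and after reconnecting the RAID drives, here is a small read-only sketch that summarises software-RAID health from /proc/mdstat (run on the box itself, e.g. over SSH). That the DSM system partition lives on the md0 RAID1 array is the usual Synology layout, but treat it as an assumption for your system; the script only reads, never writes.

# Read-only sketch: summarise software-RAID health from /proc/mdstat.
# On Synology systems the DSM system partition is normally the md0
# RAID1 array (an assumption -- check your own layout). A status mask
# like [UU] means every member is healthy; an underscore, e.g. [U_],
# marks a degraded array that Storage Manager should offer to repair.
import re

current = None
with open("/proc/mdstat") as f:
    for line in f:
        m = re.match(r"(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        mask = re.search(r"\[([U_]+)\]", line)
        if current and mask:
            state = "healthy" if "_" not in mask.group(1) else "DEGRADED"
            print("%s: [%s] -> %s" % (current, mask.group(1), state))
            current = None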


  • 3 weeks later...
On 7/4/2018 at 1:56 AM, sbv3000 said:

This is a variation of the 'Clean Drive' downgrade process I've described before in several threads:

 

1) Disconnect your raid drives - Did this

2) Connect spare HDD to SATA port 1 - Did this

3) Install DSM 6.x from a fresh 1.02b loader, create an admin user and server name matching the live system, DON'T create a volume/RAID. Update DSM to match your pre-beta version - Did this

I actually tried to install a PAT version earlier than the last version I was running. Syno Assistant told me which version I had to install, and so I did.

4) Shutdown, reconnect raid drives to SATA ports 2-n - Did this, but I shut the system down at about 10% of the rebuild process. It occurred to me that leaving the spare HDD in to be part of the rebuild is redundant.

 

5) Boot, login, repair system partitions on the raid, shutdown - Did this

6) Remove spare HDD and reconnect raid drives to SATA ports 1-n - Did this

7) Boot, repair packages etc

 

On 7/4/2018 at 1:56 AM, sbv3000 said:

My additional steps would be to power down the server between drive swaps, and to update DSM on the clean drive to the version matching the one running before the crash.

 

The above was posted by SBV3000, and many thanks. I did make a couple of changes to shorten the process. It worked for me, but no warranty implied.

Everything marked '- Did this' (and the follow-up notes) is mine; all else belongs to SBV3000 and others.


  • 2 weeks later...
I followed the steps above and managed to reinstall DSM 6.1 with a new USB stick and a blank SSD. However, when I insert my original RAID drives (3 out of 4) into the remaining bays, there is no option for me to repair the system partition. Can anyone offer some advice? Thank you so much.

 

HP MicroServer Gen8

RAID: SHR

 


5 hours ago, Kelvin616 said:

I followed the steps above and managed to reinstall DSM 6.1 with a new USB stick and a blank SSD. However, when I insert my original RAID drives (3 out of 4) into the remaining bays, there is no option for me to repair the system partition. Can anyone offer some advice? Thank you so much.

 

HP MicroServer Gen8

RAID: SHR

 

I followed the steps from other posters based on a 6.2 upgrade failure. My hardware is Intel-based (J3455M) vs your AMD (I don't know if this makes a difference). Perhaps try another recovery process from earlier in this thread?


On 7/31/2018 at 8:00 AM, Kelvin616 said:

I followed the steps above and managed to reinstall DSM 6.1 with a new USB stick and a blank SSD. However, when I insert my original RAID drives (3 out of 4) into the remaining bays, there is no option for me to repair the system partition. Can anyone offer some advice? Thank you so much.

 

HP MicroServer Gen8

RAID: SHR

 

Glad it worked. You are correct about the spare drive not being needed for the RAID rebuild; that's why I say don't create a volume on it, as that avoids volume/RAID naming issues. What this process does is make a clean 'master' copy of DSM on the system partition of the clean drive, using the standard install process (i.e. no need to mount under Ubuntu, run mdadm, etc.). DSM then finds 'corrupted' system partitions on the other drives (because of the version mismatch), and in Storage Manager there should be the option to 'Repair System Partition', which when clicked warns to keep DRIVE 1 connected, etc. On boot, the system looks for a 'good' version of DSM on the drive on the lowest SATA channel, hence use SATA 1, etc.
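To illustrate that version comparison: DSM records its version in /etc.defaults/VERSION (lines like buildnumber="15284"), one copy per drive's system partition. A hedged sketch of the check, where the second mount point is a made-up placeholder for another drive's system partition mounted for inspection:

# Sketch of the comparison DSM effectively performs between the clean
# 'master' install and the system partitions on the other drives.
# The /mnt/disk2 mount point is a PLACEHOLDER for illustration.
def build_number(version_file):
    """Parse buildnumber="NNNNN" from a DSM VERSION file."""
    with open(version_file) as f:
        for line in f:
            if line.startswith("buildnumber="):
                return int(line.split('"')[1])
    return None

master = build_number("/etc.defaults/VERSION")            # clean drive (SATA 1)
other = build_number("/mnt/disk2/etc.defaults/VERSION")   # PLACEHOLDER mount

if other is not None and other != master:
    print("Build %d vs %d: mismatch -- Storage Manager should offer"
          " 'Repair System Partition' for this drive." % (other, master))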


15 hours ago, GBKev said:

So how crucial is using your original synoboot.img?

 

Mine was backed up on said NAS drive that I can't access yet.

 

Is it OK to just create another, which will likely have a different generated serial?

 

 

~ Kev

It shouldn't make any difference.


  • 2 months later...

I bricked my NAS because, you know, stupidity, and I'm desperately trying to get it back by downgrading.

I'm able to install DSM on the clean drive, reboot, and re-plug one of my original drives, but all I get is the Recover button.

It never boots back into the clean install of DSM.

Not sure what I'm doing wrong.



Hi,

just as preparation for my upgrade from DSM 6.1.x bare metal to DSM 6.2.x using Jun's 1.03b loader, I have some queries.

 

1. I'm going to build a new USB from scratch for 1.03b and keep my DSM 6.1 USB. If my upgrade goes pear-shaped, can I just reinsert my 6.1 USB and use that to get running again?

2. The loader is just to fool DSM into thinking it's a legit Synology NAS... so with the new 1.03b USB in place, should the system just boot and be ready to update DSM from 6.1 to 6.2 online (or using the downloaded .pat file)?


  • 3 weeks later...
On 1/4/2019 at 2:17 PM, mgrobins said:

1. I'm going to build a new USB from scratch for 1.03b and keep my DSM 6.1 USB. If my upgrade goes pear-shaped, can I just reinsert my 6.1 USB and use that to get running again?

 

It depends how far through the upgrade you went. If you've done the migration steps, i.e. applied the DSM 6.2 .pat and it has done the upgrade, then I think you'll need to follow the steps as per the OP. If it just fails to show up in Synology Assistant or something like that, then yes: power off and swap the USB.

 

On 1/4/2019 at 2:17 PM, mgrobins said:

2. The loader is just to fool DSM into thinking it's a legit Synology NAS... so with the new 1.03b USB in place, should the system just boot and be ready to update DSM from 6.1 to 6.2 online (or using the downloaded .pat file)?

 

Yes, it should just boot and then be visible in Synology Assistant with a status of Migratable; you can then apply the .pat file for the upgrade (it's worth verifying the download first, as sketched below).
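One extra precaution before applying a downloaded .pat: Synology's Download Center publishes an MD5 checksum next to each file, so you can confirm the download isn't corrupt before installing. A minimal sketch; the filename and checksum below are placeholders.

# Sketch: verify a downloaded .pat against the MD5 published on
# Synology's Download Center. Filename and checksum are PLACEHOLDERS.
import hashlib

PAT_FILE = "DSM_DS3615xs_23739.pat"             # PLACEHOLDER filename
EXPECTED = "0123456789abcdef0123456789abcdef"   # PLACEHOLDER published MD5

h = hashlib.md5()
with open(PAT_FILE, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
        h.update(chunk)

print("OK" if h.hexdigest() == EXPECTED else "MISMATCH -- do not install")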


  • 3 weeks later...

This is a great tip. I was fiddling around with an old laptop, just exploring the possibilities of XPEnology before I go ahead and build a new box. In the process I bricked it ;-). Nothing important on there, but I managed to recover the data using the procedure above. Now I feel confident that if I mess up, the data is still safe.

I can proceed with my build!

Thanks to Jun and Porkchop and all who have worked to make XPEnology possible!

 

Bluesman


Hello, everybody,

I already posted this in the German forum, but maybe it helps here too.

Unfortunately I also made the mistake of updating from 6.2 to 6.2.1, and the system was no longer available. But after some trial and error I found a solution:

I removed every one of the hard disks of the RAID system and connected them to a PC with a USB adapter. There I used the free partitioning tool MiniTool Partition Wizard to run a wipe on the 2.37 GB partition (overwriting the partition with zeros). This is the DSM system partition.

Caution: you must not touch anything on the other partitions, and the installation order of the hard disks must also be preserved.

Then I created a new 1.03b bootloader USB stick and started the system. Synology Assistant found the DSM box and showed no existing installation. I then manually installed the DSM_DS3617xs_23739.pat file.

After the first start the old volume was detected error-free. All data was still there. Only the packages had to be reinstalled.

I think the solution will work with all versions.
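For reference, the wipe itself is just a zero-fill of the small system partition. Here is an extremely hedged Python sketch of that one step; /dev/sdX1 is a placeholder, pointing it at the wrong partition will destroy data, and the MiniTool GUI route described above is the safer option.

# DANGEROUS sketch: overwrite a DSM system partition (~2.37 GB) with
# zeros, as done above with MiniTool Partition Wizard. /dev/sdX1 is a
# PLACEHOLDER -- it must be the small system partition, never a data
# partition. Run as root; there is no undo.
import os

DEVICE = "/dev/sdX1"                   # PLACEHOLDER: system partition only
CHUNK = b"\x00" * (4 * 1024 * 1024)    # 4 MiB of zeros per write

fd = os.open(DEVICE, os.O_WRONLY)
try:
    while True:
        os.write(fd, CHUNK)            # raises OSError (ENOSPC) at the end
except OSError:
    pass                               # reached the end of the partition
finally:
    os.close(fd)
print("System partition zero-filled.")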

 


  • 3 weeks later...

Another failed upgrade attempt and another fall back to 6.1.7... I followed SBV3000's 'Clean Drive' input, steps 1-7 exactly as posted above.

 

However, one additional note: I have two volumes, one Hybrid RAID (SHR) with 5 HDDs and one RAID1 BTRFS volume with 2 HDDs.

I unplugged all drives, rebuilt only the BTRFS volume, and shut down,

then plugged in the remaining 5 HDDs and allowed the rebuild.


  • 4 weeks later...
On 7/3/2018 at 12:11 PM, wimpi13 said:

Hello, I've tried your guide.

I took my 4 disks (with data) out, and I booted my HP ProLiant Gen8 with Jun's loader on a USB key and one blank disk.

I can access the NAS with DSM 6.1.

BUT when I swap out the blank disk to put in my 4 disks, the NAS is invisible again on my network.

Can I leave the blank disk in and add 3 of my 4 disks? In that case, will I get my data back?

Thanks

I have the same problem as you.

 

On 7/4/2018 at 2:47 PM, wimpi13 said:

Thanks a lot, my friend, for your explanations.

I've recovered a new Synology and, by the way, all my precious data!!!
It's wonderful!

 

How did you manage to get all your data back?

 

Here is my case:

I had two HDDs with data. One contained both the OS and some data; the other contained personal data.

I'm using a blank HDD, as suggested.

When I connect the blank disk and the HDD with personal data, I can see the contents of the data HDD.

When I connect the HDD with the old OS back, I get the recover process, and after that DSM is not working and is not accessible from the LAN.
