How to downgrade from 6.2.3 to 6.1.7?



My NAS is an ASRock J3710-ITX with 3 HDDs and 1 Realtek 8111GR NIC. I was happily running DSM 6.1.7 (DS3615) with Jun's 1.02 loader. Last weekend, I upgraded from 6.1.7 to 6.2.3 via the DSM UI. Unluckily, I lost the connection to the NAS, as there was no Realtek NIC driver and the loader was too old.

 

Is there any way to either fix the issue on 6.2.3 or downgrade to 6.1.7 without a total reinstallation? I know I can do a fresh 6.1.7 reinstallation, but that would be quite painful, as all my DSM configuration would be lost.

 

Thanks a lot for the help in advance. 

 

Edited by fdppi
2 hours ago, fdppi said:

Unluckily, I lost the connection to the NAS, as there was no Realtek NIC driver and the loader was too old.

 

Luck has nothing to do with it; this is a lack of preparation and understanding. There is a Realtek NIC driver in 6.2.3. The 1.02b loader is not compatible.

 

2 hours ago, fdppi said:

Is there any way to either fix the issue on 6.2.3 or downgrade to 6.1.7 without a total reinstallation? I know I can do a fresh 6.1.7 reinstallation, but that would be quite painful, as all my DSM configuration would be lost.

 

Properly prepare a 1.04b loader and boot with that.  Install DS918+ platform DSM 6.2.3 with a Migration Install, and your settings should be retained.
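Properly preparing the loader generally means editing the grub.cfg on the 1.04b USB stick so its values match your hardware before first boot. A minimal sketch, with placeholder values only (your stick's actual USB VID/PID, the serial, and the MAC are assumptions here and must be replaced with your own):

```shell
# grub.cfg fragment on the 1.04b USB stick -- every value is a placeholder
set vid=0x058F         # USB vendor ID of YOUR boot stick
set pid=0x6387         # USB product ID of YOUR boot stick
set sn=1780PDN123456   # a DS918+-style serial number
set mac1=0011322CA785  # MAC address of the NIC (or a generated one)
```

If vid/pid do not match the stick, DSM cannot find its boot device and the install fails partway through.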

 

You likely will not be able to continue using the DS3615xs DSM platform, because your J3710-ITX cannot select CSM/Legacy boot mode for the USB stick, and that boot mode is required for loader 1.03b.

Edited by flyride

Thanks a lot! @flyride

 

I will give it a try.

 

You are right, I lacked preparation for this upgrade. My original installation was DSM 5.2, and the upgrade to 6.1.7 went smoothly. I never expected such a tough upgrade between minor DSM versions.


I burned the 1.04b image and selected the "migrate" installation. It installed successfully. I SSHed into the host, deleted "/.xpenoboot", and issued "reboot".

 

But it seems my luck ended there. After the reboot, I couldn't connect to the NAS any more, and Synology Assistant couldn't find it either.


6.1 to 6.2 is a major upgrade, not a minor one.

 

If you were able to connect via SSH, your network is functional. You ought to be able to reboot and immediately reconnect via SSH. Perhaps you did not delete the .xpenoboot folder completely.
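The deletion has to remove the folder and everything inside it. A minimal sketch of the remove-and-verify step, using a scratch directory as a stand-in for the NAS root (on the real box the path is /.xpenoboot and root privileges are needed):

```shell
root=$(mktemp -d)                  # stand-in for / on the NAS
mkdir -p "$root/.xpenoboot/etc"    # simulate the leftover loader folder
rm -rf "$root/.xpenoboot"          # remove the folder and ALL its contents
# verify before rebooting: the directory must no longer exist
if [ -d "$root/.xpenoboot" ]; then
  echo "still present"
else
  echo "clean - safe to reboot"
fi
```

Running the `ls -a /` equivalent afterwards and seeing no `.xpenoboot` entry is the signal that it is safe to reboot.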


@flyride "Normally", if there is a "normally", the symptom of the .xpenoboot problem is that you can reach the login page and log in to the desktop. Nothing works from there, but you get that far. You can test this on your archive system by just creating the directory (no! don't do that) and rebooting. Is there anything different about the 1.04b loader (as opposed to the 1.03b) that could be relevant?


@billat29 I am quite sure I only deleted the .xpenoboot directory. Now I have partially found the cause.

 

As the migration installation didn't give me SSH access, I tried a vanilla 6.2.3 installation. The steps were:

  1. Unplug all 3 HDDs (2 new 8TB HDDs and 1 seven-year-old 3TB HDD).
  2. Install a new SSD.
  3. Install 6.2.3 with extra.lzma + extra2.lzma on the SSD.
  4. After installation, plug the HDDs back in one by one; in DSM, repair the disk and reboot each time.
  5. The first 2 HDDs (the new 8TB ones) worked smoothly.
  6. Unplug the new SSD.
  7. But when the third (3TB) drive was plugged in, DSM couldn't log in. Synology Assistant showed it in a "Checking Quota" status (sorry, not the exact words, as the message was shown in Chinese). After 8 hours, Synology Assistant couldn't find the NAS any more.
  8. I unplugged the third (3TB) drive, and DSM worked again.

So I guess the issue is with my third, 3TB drive. I guess the loader selected the 3TB drive for boot, and on that drive there was an old 6.2.3 installation (the one with no network). I am not sure whether that is the cause or not.

 

It seems I have to abandon the 3TB drive. 

Edited by fdppi

DSM tries to use the first enumerated drive for boot (the first drive on the first controller). If that doesn't match the loader, it crashes. I assume you are unplugging the SSD in position #1 and plugging the 3TB disk in there. That is the issue.

 

Why don't you plug in the third drive while the NAS is running instead of while it is offline? Or move one of the first two 8TB HDDs to the first position before plugging in the 3TB drive?

Edited by flyride

@flyride Some questions related to the HDD order:

  1. What happens if I reorder the HDDs by shuffling the SATA connectors, i.e. making the new HDD the first in order? Will it impact DSM, e.g. the directory structure or something else?
  2. Where does DSM store its configuration? On each HDD, or only on the boot one? I am a little worried about having the old HDD as the first boot drive if the configuration is stored on only one HDD; that would mean a high risk of data & configuration loss.
3 hours ago, fdppi said:

@flyride Some questions related to the HDD order:

  1. What happens if I reorder the HDDs by shuffling the SATA connectors, i.e. making the new HDD the first in order? Will it impact DSM, e.g. the directory structure or something else?
  2. Where does DSM store its configuration? On each HDD, or only on the boot one? I am a little worried about having the old HDD as the first boot drive if the configuration is stored on only one HDD; that would mean a high risk of data & configuration loss.

 

Frankly, these are things you ought to know before deliberately swapping out drives as you already have. In any case:

 

1. What happens if I reorder the HDDs? You haven't really described your Storage Pool, but if your data array is in a healthy state, the order may change with no negative effect.

 

2. Where does DSM store its configuration? DSM is stored on the first partition of each disk you install in the system (every disk carries at least three partitions: OS, swap, and data for the storage pool). The OS and swap partitions are RAID1 arrays spanning however many disks you have. You only need one healthy member to boot DSM.

 

A disk whose OS partitions are out of date is labeled "System Partition Failed", and clicking the Fix button simply resyncs the RAID1 arrays onto that drive.
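To illustrate that layout, the system arrays are visible in /proc/mdstat on the NAS. A sketch using sample output (the device names and array numbering are assumptions for a 3-disk box, not taken from this thread; md0 would be the OS mirror and md1 the swap mirror):

```shell
# Sample /proc/mdstat content (assumed; on a real DSM box: cat /proc/mdstat)
mdstat='md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2]'
# Both system arrays mirror the same partition slot on every disk,
# which is why any single healthy member is enough to boot DSM.
echo "$mdstat" | grep -c raid1
```

A "System Partition Failed" disk is just one whose members have dropped out of these mirrors; the Fix button re-adds them.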

 

3 hours ago, fdppi said:

I have never performed a hot plug before. I am not sure whether my J3710-ITX and HDDs support hot-plugging.

 

You can physically plug in a SATA drive at any time with no negative effect. I have a J4105-ITX connected to a hotswap backplane, and it works fine.

Edited by flyride
