Migration from Synology to XPenology?



Have yet to order hardware for my XPenology build. Hoping to do that soon. But I have yet another question for everyone.

How does data migration work between units?

Current unit: DS418 with 2x 6TB drives. Synology's site indicates this does not support migration, so I'm guessing taking the drives out and sticking them in the XPenology build is a no-go.

I also have a 14TB external drive I was intending to use for backup/transfer if needed, then shuck it and add it to the array.

Is there a 'seamless' way of doing this? Or would it involve backing up all files/folders to the external drive, swapping the existing drives into the XPenology build, then putting the data back on?

I'm not too worried about the data, but getting user accounts migrated is a sticking point for me. I have a couple of friends I let use my stuff, and if I could avoid redoing permissions, creating new accounts, etc., that would be great.

1 hour ago, SnowDrifter said:

How does data migration work between units?

yes

 

1 hour ago, SnowDrifter said:

Synology's site indicates this does not support migration

???

https://www.synology.com/en-ph/knowledgebase/DSM/tutorial/Backup/How_to_migrate_between_Synology_NAS_DSM_6_0_and_later#FgL7hWweWr

 

"...

HDD migration

...

You may choose to either migrate between different Synology models or identical models.

..."

 

For the running DSM system there is no difference whether it's original hardware or XPenology on custom hardware, so you can do it exactly as Synology suggests.


Curious. I found a page last night that seemed to indicate the DS418 didn't do it. Carrying on.

So how does the process work with XPenology? Let's assume I have a bare-metal system built and have done nothing to it yet. What steps are taken?


The usual ones. Choose your loader (1.03b for DS3615 or DS3617, 1.04b for DS918+), create your boot stick, set up your bare-metal system with USB devices as the primary boot option, plug your original Synology's HDDs into the new system, start it up, and check which IP it received (DHCP needed in the LAN). An alternative would be the Synology Assistant. Then connect to your XPenology using your browser. It will offer you a migration install (keep all your data, users, and apps) or a clean install. Choose the migration and it should usually work after the first reboot.

13 hours ago, jensmander said:

The usual ones. Choose your loader (1.03b for DS3615 or DS3617, 1.04b for DS918+), create your boot stick, set up your bare-metal system with USB devices as the primary boot option, plug your original Synology's HDDs into the new system, start it up, and check which IP it received (DHCP needed in the LAN). An alternative would be the Synology Assistant. Then connect to your XPenology using your browser. It will offer you a migration install (keep all your data, users, and apps) or a clean install. Choose the migration and it should usually work after the first reboot.

Nice that was the answer I was looking for. Thanks!

It was the move to a non-native DSM build that had me confused. Synology machines have it stored internally, but a build like this, well... wouldn't. I just wasn't sure of the appropriate process.

I suppose the only other question I have is: would it be appropriate to move both disks at once, or one at a time and just tell it to rebuild? Trying to balance between failsafe and migration time. My gut says move both, since I'd have a backup on an external drive anyway, but I figured I'd check on that just in case.

21 hours ago, SnowDrifter said:

Curious. I found a page last night that seemed to indicate the ds418 didn't do it. Carrying on

@SnowDrifter you are correct.  The issue is that the ARM versions identified with "X" in that table have a system partition that is too small to accommodate the Intel code.  So a migration install is not possible for those units.

 

4 minutes ago, SnowDrifter said:

I suppose the only other question I have is: would it be appropriate to move both disks at once, or one at a time and just tell it to rebuild? Trying to balance between failsafe and migration time. My gut says move both, since I'd have a backup on an external drive anyway, but I figured I'd check on that just in case.

Whatever you do, do NOT subject your only copy of your data to an upgrade.  This is a good way to increase your stress level at minimum, and data lockout or even loss if you mess up badly enough at worst.

26 minutes ago, flyride said:

@SnowDrifter you are correct.  The issue is that the ARM versions identified with "X" in that table have a system partition that is too small to accommodate the Intel code.  So a migration install is not possible for those units.

 

Whatever you do, do NOT subject your only copy of your data to an upgrade.  This is a good way to increase your stress level at minimum, and data lockout or even loss if you mess up badly enough at worst.

Curious.

So what's the most appropriate way to get everything moved over, then?

To clarify: I have 2 internal drives I want to move over, and one external drive that I would be backing everything up to before the transfer. Once the transfer is complete and validated, I'd shuck said external drive and add it to the array.


Once you are on the Intel platform, you have many options - migration install, btrfs replication, etc.  But coming from ARM, those features are not available.

 

You can keep your settings by saving the config (a .dss file) from the DSM UI and restoring it to your new system once it is installed. It's not perfect, but it does restore user accounts etc. You may have to redo permissions, so take good notes.

 

Side note: the 14TB drive is too different from the 6TB disks to get extra space from SHR.  You will only be able to use 6TB of the 14TB available, for 12TB usable, because of the space disparity.  That said, you MUST initiate your new array with a 6TB disk if you want it to interoperate with a 14TB drive at all.

 

Assuming you have a 2-disk RAID 1, you already have two copies of your data.  So just pull one of the drives out and use that to build the new system.  Your Synology will complain that the array is "critical" but it's fully functional and your data is intact on the remaining drive. No need to break out the 14TB drive unless you wanted to use it to make another copy of all your data.  If you wanted to do THAT you could pull one of the drives and let the system rebuild the array using the 14TB drive (be sure it makes a RAID 1 and not a SHR - i.e. only 6TB usable storage).

 

Here are a few data migration ideas, somewhat dependent upon your level of technical ability:

  1. Build up a new XPe DSM with the removed 6TB drive, and just copy folders from your old Synology using your PC client
  2. Copy all your data off to the external disk, build up a new XPe DSM with the removed 6TB drive, then attach the external and copy folders using File Station
  3. Build up a new XPe DSM with the removed 6TB drive, and rsync folders from your old Synology (via UI or command line)
  4. Build up a new XPe DSM with the removed 6TB drive, then connect your Synology 6TB drive (or the 14TB copy if you did that), manually mount the filesystem from the command line, and move the folders from the Linux command line

Wouldn't a backup with Hyper Backup (incl. configuration) restore users and shares?

 

18 hours ago, SnowDrifter said:

Once the transfer is complete and validated, I'd shuck said external drive and add it to the array.

 

The simplified calculation for the resulting size is to sum up the capacity of all drives and subtract the size of the biggest drive; a huge portion would be wasted if you don't add a 2nd or 3rd big HDD.
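That simplified rule is easy to sanity-check in a few lines of Python (a sketch of the rule of thumb only, not Synology's actual allocator — it ignores DSM's system/swap partitions and filesystem overhead):

```python
def shr_usable_tb(drives):
    """Simplified SHR capacity estimate: total capacity minus the
    largest drive (one drive's worth of redundancy). Sizes in TB."""
    return sum(drives) - max(drives)

print(shr_usable_tb([6, 6, 14]))      # 12 TB usable: only 6TB of the 14TB gets used
print(shr_usable_tb([6, 6, 14, 14]))  # 26 TB usable: a second big drive reclaims the waste
```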

Synology has a lot of material online that you can use:

https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/What_is_Synology_Hybrid_RAID_SHR

 


OK, so: I verified with Synology support.

If I built the SHR pool starting with the 14TB drive, I wouldn't be able to add the 6TB drives.

So, the plan is this:

  1. Hyper Backup to the USB drive
  2. Put one 6TB drive into the new unit, restore from USB
  3. Validate data, users, programs, etc. If all is good, move the second 6TB drive to the new unit and build the SHR array. Once that's done, shuck the 14TB drive and add that.

And theoretically, that should do it? I'm aware that the drive config will waste a LOT of space. The goal wasn't to get a single drive and add that; the 14TB was just the best price/GB for expansion. I'll add more down the road when needed. I just didn't want to end up with a bunch of 6TB disks because that's what I started with. If I were going that route, there'd be no point to SHR; just get whatever the best value is at the time and roll with it.

