Migrate HDD array from one 918+ to another 918+ with SSD array; keep the OS only on the SSD volume/array/pool for performance


Recommended Posts

I have 2 918+ units.

One 918+ is an all-SSD RAID 0 array for movies (need performance/speed).

One 918+ is an all-HDD array holding software and all the family photos.

 

Can I migrate the 918+ HDD array to the 918+ SSD NAS, keeping the OS only on the SSD volume?

I do not want the OS on the HDDs, only the SSDs.

 

Is this possible? 

 


You are asking about two different things:

 

1.  How to migrate a fully intact array from one DSM instance to another?

 

The easiest way is probably to reinstall DSM with both array sets connected.

 

There is also a way to import into a running system, but it's complex. It's a lot easier if the array to be imported is plain RAID and not SHR. You need to be able to hot-swap the disks into the running system, as Bad Things can happen if a conflicting array is suddenly present on boot. This procedure is from my notes on importing RAID arrays (not SHR). If the array to be migrated is SHR, don't attempt this procedure as written; there are additional steps, including extracting a volume group reserved area, that I don't quite understand.

 

It is nondestructive to your data, but it might leave your array inaccessible and require a more in-depth recovery operation if something goes wrong. I don't know all the things that can go wrong with it. You have to decide whether that risk justifies avoiding a manual copy of your files from one system to the other.

 

Spoiler

1. Install the foreign DSM's RAID (not SHR) array disks into a booted, running system. Never install them into the first/lowest-numbered slots. Don't have any other system operation running (resync, volume expansion, etc.).

2. Elevate to root access over SSH. Verify that all the foreign array disks are recognized, using ls /dev/sd* and hdparm -I /dev/sdx as appropriate

3. Start the foreign arrays with mdadm --assemble --scan

4. Use cat /proc/mdstat to identify the three foreign arrays that should have started (DSM, swap and data). They should be numbered in the 100s (e.g. md126, md127, md128)

5. Identify the foreign array that contains the volume data (usually the largest mdxxx, also identifiable by its sdx3 members, where x is the set of disks you identified in step #2)

6. Pick a free volume name (volume2, for example) and create the empty directory: mkdir /volume2

7. Now mount the data array: mount /dev/mdxxx /volume2 (device first, then mountpoint)

8. Verify you can see the expected shares on the volume: ls /volume2

9. Shut down the remaining foreign arrays (the DSM OS and swap ones): mdadm --stop /dev/mdxxx

10. Run the following commands in sequence:

# synocheckiscsitrg

(this makes sure that the iSCSI module has current information; it must return "pass", otherwise repeat the command)

# synocheckshare

(this makes sure the share information in DSM is current and refreshed from tables; it must return "pass", otherwise repeat the command)

# spacetool --synoblock-enum

(this tells DSM to go out and reassess what storage structures exist, and update the Syno superblocks of those structures)

# synospace --map-file -d

(this tells DSM to update its internal Storage Manager tables from all the known Syno superblocks)

# synocheckshare

(this makes sure the share information in DSM is current and refreshed from tables; it must return "pass", otherwise repeat the command)

# synocheckiscsitrg

(this makes sure that the iSCSI module has current information; it must return "pass", otherwise repeat the command)

 

If all went well, you should now see a new Storage Pool and a Volume entry in Storage Manager.

It will probably show the System Partition crash error and offer to fix it; go ahead and do that.

Then reboot and make sure everything restarts correctly.
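For reference, the steps above can be condensed into one sketch. It is written as a dry-run that only echoes each command (drop the echo in run() to execute for real); /dev/md125, /dev/md126, /dev/md127 and /volume2 are example names only, so substitute whatever cat /proc/mdstat and your free volume slots actually show:

```shell
# Dry-run sketch of the RAID (not SHR) import procedure above.
# Every command is echoed, not executed; remove the 'echo' to run for real.
run() { echo "+ $*"; }

DATA_MD=/dev/md127     # example: the large data array found in /proc/mdstat
MOUNTPOINT=/volume2    # example: a volume name not already in use

run mdadm --assemble --scan           # step 3: start the foreign arrays
run cat /proc/mdstat                  # step 4: find the three foreign arrays
run mkdir "$MOUNTPOINT"               # step 6: create the empty mount directory
run mount "$DATA_MD" "$MOUNTPOINT"    # step 7: device first, mountpoint second
run ls "$MOUNTPOINT"                  # step 8: verify the shares are visible
run mdadm --stop /dev/md125           # step 9: stop foreign swap array (example name)
run mdadm --stop /dev/md126           # step 9: stop foreign DSM array (example name)
run synocheckiscsitrg                 # step 10: each check must return "pass"
run synocheckshare
run spacetool --synoblock-enum
run synospace --map-file -d
run synocheckshare
run synocheckiscsitrg
```

Run as-is it just prints the command sequence, which is a safe way to sanity-check the order before touching real disks.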

 

2. Can the DSM OS be installed only on some drives and not others (in your case, SSD and not have it on HDD)?

 

Sort of. You can disable the disk I/O to the DSM and swap partitions on the HDDs via the procedure detailed in the link below. It cannot reclaim the space; the DSM and swap partitions must continue to exist on the HDDs. They become hot spares that are activated if an SSD goes offline. Please note this is not supported in any way by Synology and they have no intention for this to work - it's just something I figured out a long time ago.

 

https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report
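Roughly speaking, the core of that technique amounts to failing the HDDs' replicas out of the /dev/md0 (DSM system) and /dev/md1 (swap) arrays; the linked post has the actual, complete procedure. A hypothetical dry-run sketch, where sdd stands in for one of the HDDs and partitions 1 and 2 are the standard DSM and swap partitions:

```shell
# Dry-run sketch only: commands are echoed, not executed. See the linked
# post for the real procedure. /dev/sdd is a placeholder for an HDD whose
# md0/md1 replicas should be taken offline.
run() { echo "+ $*"; }

run mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1   # drop DSM system replica
run mdadm /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2   # drop swap replica
run cat /proc/mdstat                                     # confirm the degraded state
```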

Edited by flyride
2 hours ago, flyride said:

2. Can the DSM OS be installed only on some drives and not others (in your case, SSD and not have it on HDD)?

Sort of. You can disable the disk I/O to the DSM and swap partitions on the HDD drives via the procedure detailed in the link below. [...]

 

So I am definitely referring to this, flyride. And thanks for the quick reply!

 

I am in the process of copying/backing up all the data from the HDDs to a large 8TB drive on Windows; then I'll format the HDDs and insert them into the 918+ with the SSDs.

 

1. I see the link you provided mentions NVMe and ESXi. Will this work even though I am not on ESXi (I'm on bare metal) and not using NVMe?

 

2. And this question is simply out of curiosity: why the disclaimer about not being supported by Synology? I assume most of what we do here does not exactly fill Synology with joy...

 

 

 

Edited by Captainfingerbang
3 minutes ago, Captainfingerbang said:

I see the link you provided mentions NVMe and ESXi. Will this work even though I am not on ESXi (I'm on bare metal) and not using NVMe?

 

The array I/O modification part has nothing to do with ESXi or NVMe.

 

3 minutes ago, Captainfingerbang said:

Why the disclaimer about not being supported by Synology? I assume much of what we do isn't exactly the best news to their ears...

 

Jun's loader is also a hack, but it leaves the DSM system in a state it expects to see itself in. Shutting down the /dev/md0 and /dev/md1 replicas leaves DSM in a state it never expected to be in, so there is no guarantee there won't be a problem in the future. However, the worst thing I've encountered in several years of running partial /dev/md0 and /dev/md1 arrays is unexpected resumption of replicas on drives I was attempting to omit.

2 minutes ago, flyride said:

The array I/O modification part has nothing to do with ESXi or NVMe. [...] Shutting down /md0 and /md1 replicas leaves DSM in a state it never expected to be in, so there is no guarantee there won't be a problem in the future.

 

Ok, this makes sense. Sheesh, looks a little complicated for my level of skill.

 

Here's my issue: I have two HUGE PC cases, both Z390/i3-8100, both running 918+, and they're taking up a lot of space; the wiring is annoying, I guess.

On top of that I have another big Windows tower I use for downloading/editing/managing the DS918+s.

I can't afford to go and buy 7-10TB of SSD storage again... The reason for the all-SSD 918+ array is that on regular spinning HDDs my Serviio bogs down sending out 50-100GB HEVC Blu-ray rips over my network. It's like the HDDs are too slow or something. And trust me, I've ruled out network issues.

 

 

I'm almost curious whether I could buy a rack and install all of these in that. I don't know if server racks allow multiple motherboards/systems in one; I've never tried it.

 

Just trying to think of the best solution for space without compromising speed on DS918+ #1 (the SSD NAS). I absolutely LOVE the speed of 10TB of SSDs in RAID 0.

 

 

6 minutes ago, Captainfingerbang said:

Ok, this makes sense. Sheesh, looks a little complicated for my level of skill.

 

Having a test platform makes things less intimidating.  Virtually everything I do on XPEnology happens on a test system first. If all you have is your production data, I get your trepidation.

 

Maybe you just need to migrate to more compact cases.  I use the U-NAS 810 and 410 cases with my own motherboard and power supplies.  These are frequently out of stock and a bit of a pain to work with, but extremely small and quiet.  But there are a lot of NAS-tuned case options out there that might be an ideal fit for your motherboard and drive counts.

 

I question running 10TB of SSDs in RAID 0, however. They can and do fail. You might consider switching to DS3617xs on that system and running RAID F1. But maybe you have everything backed up somewhere.


 

 

Great advice, flyride!

 

Haha, I knew that talking about RAID 0 here might get me in trouble 😆

Literally all the RAID 0 contains is downloaded movies and TV shows. If it failed I wouldn't care too much.

If it were my other 918+ with the family photos and videos collection, yes, I would be in tears if it failed. I back that one up via USB, btw.

But I guess I didn't think about the fact that if I add my important HDDs to my RAID 0 array, that could really screw me up.

 

I will check out the U-NAS 810.

 

 

 

One last question:

 

I'm pretty sure I've read it might be frowned upon here (I don't remember), but I always get nervous when someone suggests a reinstall of the OS, because A. I searched long and hard for my legit 918+ serial/MAC combos and don't want to lose them, and B. more importantly, I often go years between creating the USB boot drives, and because I'm not up to date I end up screwing them up.

 

Do you know of an easy, foolproof way to clone the flash drives we use to boot these XPEnology systems?

 

 


If you need to reinstall for whatever reason, you can just initiate that from Synology Assistant and it will reuse the existing loader USB.  You don't have to reburn it.

 

At any time, you can burn a new loader USB and install it to a running system, and it will update the loader automatically.  So just save the .IMG file that you originally built with your serial and other loader configs in a safe place (helps to identify the platform and loader version so that if you experiment with others it won't get crossed up).  If you ever need to install from scratch (assuming that you do not change platforms), just reburn that file.

 

There is no reason to save or clone a running loader outside of the above.
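That said, if you just want a raw offline copy of the stick itself, a plain dd image works. A hypothetical sketch, written as a dry-run that echoes the commands: /dev/sdX is a placeholder for the USB stick's device node, so verify it with lsblk first, since a real dd overwrites its target without asking.

```shell
# Dry-run sketch: image a loader USB with dd. Commands are echoed only;
# remove the 'echo' to execute. /dev/sdX is a placeholder device node.
run() { echo "+ $*"; }

run dd if=/dev/sdX of=loader-backup.img bs=1M   # back up: stick -> image file
run dd if=loader-backup.img of=/dev/sdX bs=1M   # restore: image file -> stick
run sync                                        # flush writes before unplugging
```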

