XPEnology Community

Is consolidation possible without data loss?



Hello everyone, thanks for taking the time to read this.

I have a question that I hope someone can assist with (and hopefully has achieved themselves).


My situation: I have five 4/5-disk HP MicroServers (HP N54L G7s) and recently picked up a 24-bay storage case.

I have a spare mobo etc. to bring this to life, but I'm wondering if I can bring across the drives without data loss.

I don't care whether the disks stay in the same 'groups' they are currently in.


My aim is to have one large box in place of the five small ones.


How would DSM manage importing all these disks?

I appreciate I could start by moving one group over, but what would happen when I bring another batch of disks into the "large" box?

They are all currently running the same version of DSM, if that helps.


Failing that, what would be the best approach short of purchasing new drives? (I don't have enough spare space to shuffle all the data around, which is a real problem :( )


Is there any way I can set up a test case for this?


Any ideas, anyone?



4 answers to this question


You know about DSM's inner workings, like it keeping the system partition as a RAID 1 across all disks in the system?

So yes, what you want to do is possible, but you can only keep the system/settings from ONE of the old systems. The first set of disks in the new hardware will be "migrated" to whatever you use as the USB boot device (if it's a new USB drive; you can also reuse the USB device from the system the first set of disks came from), and this first set of disks will bring the system settings you will use in the new system.


The second point is about DSM's disk limits, which are stored either in the loader's patch or in Synology's default config (it depends: with 918+ the limit is patched on boot to 16, while 3615/17 uses Synology's plain default of 12 disks). In both cases you can end up "losing" disks after an update, breaking RAID sets on the first boot after the update if not treated accordingly.

So you get new headaches when running a system with more than 12 disks and doing bigger updates like 6.2.1 -> 6.2.2; updates that come with a full ~250MB *.pat file will usually bring a new default config to your system.

The easiest way would be to use the 918+ loader/image and mod the patch inside the extra/extra2 lzma to allow 24 instead of 16 disks; that way it will be corrected on every boot if needed (like after a bigger update).
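To make the above concrete, here is a minimal, hypothetical sketch of the kind of substitution such a patch performs, assuming the `maxdisks` and `internalportcfg` keys that community loaders commonly adjust. On a real DSM box the files would be /etc.defaults/synoinfo.conf and /etc/synoinfo.conf; this sketch edits a mock copy just to show the intended change.

```shell
#!/bin/sh
# Hypothetical sketch: patch a mock copy of synoinfo.conf.
# On a real DSM install the targets are /etc.defaults/synoinfo.conf
# and /etc/synoinfo.conf (do not run this against a live system blindly).
cat > synoinfo.conf.sample <<'EOF'
maxdisks="16"
internalportcfg="0xffff"
EOF

# Raise the internal disk limit from 16 to 24 bays.
# internalportcfg is a bitmask with one bit per internal port:
# 0xffff has 16 set bits, 0xffffff has 24 set bits.
sed -i -e 's/^maxdisks=.*/maxdisks="24"/' \
       -e 's/^internalportcfg=.*/internalportcfg="0xffffff"/' \
       synoinfo.conf.sample

cat synoinfo.conf.sample
```

Doing this substitution from inside the extra/extra2 lzma, as suggested above, means it is re-applied on every boot, so a big update that restores Synology's defaults gets corrected automatically.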

Also note that the 918+ image comes with a limit of 2 NICs (instead of 8 in 3615/17); that can be corrected manually, and I documented this here.


I was planning to do both (disk and NIC patch mods) later, once I have "finished" the drivers (extra.lzma), but that could take until the end of the year, or never happen.

but "breaking" the number of nics is not that critical you still have access over network on 2 nics and can mod the conf manually after the update, reboot and you are back to 8 (or more depending of what you need)

The disk limit is usually the thing that brings trouble, as it might result in RAIDs being unavailable or broken and wanting to rebuild disks (for example, a RAID 6 set loses two disks on first boot; fixing the config brings back the two disks, and then it starts rebuilding them, which can have a real impact on system performance and leaves a time window where the RAID gives you no protection).





Thanks for this. It looks like I'm going to have to digest your answer before starting on this.

Things I should have pointed out earlier:


1.  All disks are currently in SHR and a general mix of sizes, and I'm happy to have the same config across all disks.

2.  Happy to have just 1 NIC working. (I generally only use one, so no drama.)

3.  This is used for storage, not anything I need to run all the time. If it has low performance while rebuilding, then so be it.


So if I have a 5-disk set in SHR and it's running fine, then add what was another set of 5, how will it react? Will the additional disks just be picked up, with all files and folders added to the first set and everything managed through the first config? Also, what happens to the extra disk that was used for redundancy?





@Acidmank Just a few Q's off the top of my head:

What DSM version are you using, and are you planning to keep the same version?

In your current setup, are all the systems using SHR-1?

Are all of them set up with just 1 volume?

If so, do you plan to add all the drives into a single volume, or would you keep them as separate volumes?




DSM version: 6.1.7-15264 Update 3 (baremetal)


All using SHR-1

All with one volume only

I would prefer one large volume, I think, if that's possible? (Better utilisation of disks.)

Or maybe two 12-disk SHR-2 volumes?

Edited by Acidmank
