Physical drive limits of an emulated Synology machine?



9 minutes ago, flyride said:

I'm not clear if your experience is due to the controller you are using or the unusual configuration of >26 disks

The number of disks does not matter: even in a stock install of DSM 6.1.7 / 6.2 / 6.2.3 configured for 12 disks, there is already a leapfrog effect in drive positions. The controllers and backplanes I used are listed above.
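For context: DSM decides which slots count as "internal" from bitmask fields in synoinfo.conf, and the stock 12-bay models ship with maxdisks=12, which corresponds to a 12-bit mask. A quick sketch of that relationship (the field names `maxdisks` and `internalportcfg` are the ones commonly reported for DSM 6.x, so treat them as assumptions and verify on your build):

```shell
# Hypothetical sketch: derive the internalportcfg bitmask that matches
# a given maxdisks value (one bit per internal slot). Field names are
# the commonly reported DSM 6.x synoinfo.conf ones, not verified here.
maxdisks=12
printf 'maxdisks=%d -> internalportcfg=0x%x\n' "$maxdisks" "$(( (1 << maxdisks) - 1 ))"
# prints: maxdisks=12 -> internalportcfg=0xfff
```

If the mask and maxdisks disagree with the physical port layout, slot numbering can shuffle, which matches the "leapfrog" behavior described above.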


 

still valid - if anyone wants to venture into the >26 drive count, have a look at quicknick's loader; it seems to be pure script, no binaries or obfuscation

it's a field only very few people need, and therefore not many have even looked into it (not to mention that noise and power consumption will keep most people away from >26 drives)

https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/?do=findComment&comment=159301

 

if it's about what special "moves" can be done with drives in DSM, then this was meant as a help (including links to people who have tried one thing or another)

https://xpenology.com/forum/topic/32867-sata-and-sas-config-commands-in-grubcfg-and-what-they-do/
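As a concrete illustration of the kind of setting that thread covers, Jun's loader exposes disk-mapping parameters on the kernel line in grub.cfg. The fragment below uses illustrative values only - the parameter names are the ones discussed in that thread, but the right values depend entirely on your controller layout:

```
# grub.cfg fragment (Jun's loader) - values are illustrative only:
set sata_args='DiskIdxMap=0C SataPortMap=4'
# DiskIdxMap  - hex offset where each controller's first disk appears
# SataPortMap - ports per SATA controller, one digit per controller
```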

 


If you want to have >26 drives, there are really two key things that need to happen.  On *INITIAL INSTALL*, you need to have "maxdisks" set to less than 26; I would suggest just doing 12.  Once you have gotten through the initial install and the first full boot, you go back, raise maxdisks to a larger number, and create your volumes/RAID arrays.  This is because on first boot, DSM takes a slice off every disk to create a RAID-1 array for DSM itself to live on.  Whatever binary they use to create this partition is passed the "maxdisks" variable, and if that variable is >26 the binary will crash.  After a system has been installed, this script/binary is never called again as far as I've seen, unless you're doing a "controller swap" - i.e. if you went from a 3615 to a 3617 it would be called again.
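The post-install edit described above is usually done over SSH against /etc.defaults/synoinfo.conf (DSM regenerates /etc/synoinfo.conf from it). A minimal sketch, run here against a local stand-in copy so it is safe to execute anywhere - the real path and quoting are the commonly reported DSM 6.x ones and may differ on your version:

```shell
# Demo of the post-install maxdisks bump. On a real box the file is
# /etc.defaults/synoinfo.conf (as commonly reported for DSM 6.x);
# here we edit a local stand-in copy so the sketch is safe to run.
CONF=./synoinfo.conf.demo
printf 'maxdisks="12"\n' > "$CONF"               # state after initial install
sed -i 's/^maxdisks=.*/maxdisks="45"/' "$CONF"   # raise only AFTER first full boot
grep '^maxdisks=' "$CONF"
# prints: maxdisks="45"
```

The key point from the post stands either way: do not raise the value above 26 until the first full boot has created the system RAID-1.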

On 1/12/2022 at 5:09 PM, tcs said:

If you want to have >26 drives there are really two key things that need to happen.  On *INITIAL INSTALL*, you need to have "maxdisks" set to less than 26, I would suggest just doing 12.  Once you have gotten through the initial install and first full boot, you then go back and modify maxdisks to a larger number and create your volumes/raid arrays.  This is because on first boot, DSM takes a slice off every disk to create a RAID-1 array for DSM itself to live on.  Whatever binary they use to create this partition is passed the "maxdisks" variable, and if that variable is >26 the binary will crash.  After a system has been installed this script/binary is never called again that I've seen unless you're trying to do a "controller swap" - IE If you went from a 3615 to 3617 it would be called again.

 

interesting, and I do remember something about that: on original units, only internal disks (the main unit) are used to mirror the DSM system partitions (older units with 3.5" disks were usually max 12, but there are 2.5" units now that come with up to 24 internal drives, like the FS6400)

maybe /dev/sdXX is always reserved for added enclosures in DSM and should never be part of the RAID-1 for system and swap
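One plausible reason 26 is the magic number: Linux names SCSI disks sda through sdz and then rolls over to two-letter names (sdaa, sdab, ...). A small sketch of that naming scheme - the connection to the crashing partition binary is my speculation, not something confirmed in the thread:

```shell
# Sketch: how Linux names SCSI disks past /dev/sdz. Disk 27 becomes
# sdaa; anything that assumes a single-letter suffix breaks at >26.
disk_name() {
  local n=$(( $1 - 1 )) s=""
  while :; do
    # append letter 'a' + (n mod 26) to the front of the suffix
    s=$(printf "\\$(printf '%03o' $(( 97 + n % 26 )))")$s
    n=$(( n / 26 - 1 ))
    [ "$n" -lt 0 ] && break
  done
  echo "sd$s"
}
disk_name 1    # prints: sda
disk_name 26   # prints: sdz
disk_name 27   # prints: sdaa
```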

but that does not explain how quicknick arrived at that list of disk counts that are safe to use as maxdisks

https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/?do=findComment&comment=159301

 

it would also need testing what happens when DSM tries to repair the system disks; think of the odd behavior of the mpt2sas/mpt3sas driver, which changes disk positions when disks are missing on boot and are added again later, or when the hardware/controller arrangement changes (that does not happen on original units, as the hardware/board is fixed)

and an update to a new loader like RedPill would trigger the problem too

there would be a lot of ifs attached to a system that had maxdisks changed to >26 after the initial setup

 

