Physical drive limits of an emulated Synology machine?




you should have followed him in his attempts ;-)

he gave up on that not knowing what went wrong, he was too optimistic about a lot of things and ran into one problem after another; xpenology is not as easy to handle when you want something out of the ordinary, even just a kernel driver can be a problem

quicknick's statement was interesting but he did not explain anything about how he circumvented the limits

maybe it does not need any special sauce to get it working, maybe you just have to use the info he gave about the numbers of disks?

it would just take some time to create a vm, tweak the config, add 28 virtual disks and see if a raid set can be created/degraded and rebuilt (for a complete test); if that does not work out then quicknick's 3.0 loader will tell, it's all scripts, no compiled source code, so anyone can read it

at 1st glance there was nothing special, just the normal variables for synoinfo.conf and a check for the right max drive numbers (but there might be more to it somewhere)

##########################################################################################
### Max Supported Disks ##################################################################
### set maxdisks=48 will mean that XPEnology will now see 48 drives in DSM. ##############
### No changes to /etc.defaults/synoinfo.conf needed.  Changes are made during boot. #####
### Acceptable maxdisk values: 12,16,20,24,25,26,28,30,32,35,40,45,48,50,55,58,60,64 #####
### default value is 12.  leave blank for 12 disks. ######################################
##########################################################################################
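The comment block above suggests the mechanism is just rewriting synoinfo.conf during boot. A minimal sketch of that idea, assuming a simple key rewrite (the `set_key` helper and the `internalportcfg` line are my own illustration, not quicknick's actual code; the demo works on a scratch copy instead of the real /etc/synoinfo.conf and /etc.defaults/synoinfo.conf):

```shell
#!/bin/sh
# Sketch of a boot-time synoinfo.conf patch, as the comment block above
# implies. set_key is an illustrative helper, NOT quicknick's code.
set_key() {
    # replace key="value" in a config file, or append it if missing
    file=$1 key=$2 val=$3
    if grep -q "^${key}=" "$file"; then
        sed -i "s|^${key}=.*|${key}=\"${val}\"|" "$file"
    else
        printf '%s="%s"\n' "$key" "$val" >> "$file"
    fi
}

conf=./synoinfo.conf                        # scratch copy for the demo
printf 'maxdisks="12"\nesataportcfg="0xff000"\n' > "$conf"

set_key "$conf" maxdisks 28
# internalportcfg is a bitmask of internal ports: 28 set bits = 0xfffffff
set_key "$conf" internalportcfg 0xfffffff
cat "$conf"
```

On a real box the same edit would have to hit both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, since DSM restores the former from the latter on upgrades.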

 

 

On 10/6/2019 at 3:06 AM, IG-88 said:

you should have followed him in his attempts ;-)

he gave up on that not knowing what went wrong, he was too optimistic about a lot of things and ran into one problem after another; xpenology is not as easy to handle when you want something out of the ordinary, even just a kernel driver can be a problem

quicknick's statement was interesting but he did not explain anything about how he circumvented the limits

maybe it does not need any special sauce to get it working, maybe you just have to use the info he gave about the numbers of disks?

it would just take some time to create a vm, tweak the config, add 28 virtual disks and see if a raid set can be created/degraded and rebuilt (for a complete test); if that does not work out then quicknick's 3.0 loader will tell, it's all scripts, no compiled source code, so anyone can read it

at 1st glance there was nothing special, just the normal variables for synoinfo.conf and a check for the right max drive numbers (but there might be more to it somewhere)

 

Yeah after I posted my question, I did follow up with him personally and he confirmed that he had tons of issues with his setup.

 

About quicknick, AFAIK he pulled back his loader, so I suppose only those who got it earlier would know more. Oh well. One can only dream.


For what it's worth, I did more digging on this.  The issue is that on setup, the system does a "raidtool initsetup"; this utility calls scemd, which is a binary blob.  That binary blob has a hard call to read synoinfo.conf to see what maxdisks is set to, then uses the value in maxdisks to try to create the initial md0 (root volume) and md1 (swap).  It will look at what your active drives are, and create logical placeholders called "missing" for the rest.  Unfortunately it's also hard-coded to use mdadm metadata format 0.9, which limits an md array to 27 devices.  So, at least for initial setup, you cannot have maxdisks set to >27 devices, or the raidtool setup will fail.  AFTER initial setup remains to be seen... it looks like the upgrade utilities should handle more devices just fine, but I haven't done any testing.
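That 0.9-metadata ceiling can be reproduced with a throwaway loop-device test. The sketch below is my own, not raidtool's code; the fixed disk table is `MD_SB_DISKS` in the kernel's `linux/md_p.h`, and per the analysis above a 28-device 0.9 array should be refused. It needs root, mdadm and loop devices, and skips itself elsewhere:

```shell
#!/bin/sh
# Sketch (not raidtool's code): try to build a 28-device array with the
# old 0.90 superblock, which has a fixed-size disk table (MD_SB_DISKS in
# linux/md_p.h). Needs root + mdadm + loop devices, so it skips itself
# in other environments.
if [ "$(id -u)" -ne 0 ] || ! command -v mdadm >/dev/null 2>&1; then
    result="skipped (needs root and mdadm)"
else
    dir=$(mktemp -d); devs=""
    for i in $(seq 1 28); do
        truncate -s 64M "$dir/d$i"                      # sparse backing file
        devs="$devs $(losetup --find --show "$dir/d$i")"
    done
    # $devs is deliberately unquoted so it splits into 28 arguments
    if mdadm --create /dev/md127 --run --metadata=0.9 --level=6 \
            --raid-devices=28 $devs 2>/dev/null; then
        result="created 28-device array (unexpected)"
        mdadm --stop /dev/md127
    else
        result="mdadm refused 28 devices with 0.9 metadata"
    fi
    losetup -D; rm -rf "$dir"
fi
echo "$result"
```

Swapping `--metadata=0.9` for the default 1.2 format should make the same create succeed, which is the quickest way to confirm the limit really comes from the superblock version.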


i did not look really deeply into it, my last assumption was this

https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/?do=findComment&comment=159301

 

On 8/9/2020 at 2:05 PM, IG-88 said:

i also noticed a piece of code that might be important as it relates to >26 disks; that one would need to be checked further, it might be important when using more than 24 disks


if [ "${current_maxdisks}" -gt 26 ] && [ "${current_no_disk_swap}" != "yes" ]; then
	xpenoUpdateSynoinfo ${synocfg_root} no_disk_swap yes
	xpenoUpdateSynoinfo ${synocfg_root_def} no_disk_swap yes
elif [ "${current_maxdisks}" -lt 26 ] && [ "${current_no_disk_swap}" = "yes" ]; then
	xpenoUpdateSynoinfo ${synocfg_root} no_disk_swap no
	xpenoUpdateSynoinfo ${synocfg_root_def} no_disk_swap no
fi
if [ "${current_no_disk_swap1}" -eq 1 ]; then
	echo "no_disk_swap=${current_no_disk_swap}"
fi

 

 

anything you can say about this?

 

synology at least sells units with expansion to go above 27 disks but i never studied the documentation of those units to see how big a single raid set can be, maybe a single set of disks is limited to 26?

it's still an exotic thing to have a drive count above 24 (and i guess it will stay that way; even with ssd's leaving the single-digit TB range, privately hosted data does not seem to grow that fast, 18TB hdd's are on the market and having a bunch of these adds up to a pretty nice number)

besides the number of TB's there is also the port count; it increases the cost to need 2-3 controllers and an enclosure to hold all these drives, and even when bought cheap, a high drive count (with smaller drives) consumes more energy (noise might be a factor too) and might make a cooling solution for the room necessary (again increasing the power consumption)

besides this, the normal mdadm raid (and DSM) might not be the best solution for such big systems (i do remember that backblaze uses something more application-oriented for its storage systems); just having two redundant disks (raid6) across 40-60 drives sounds risky, so it seems logical to limit the drive count of a single raid set to 24 or 26?

for most people building/buying a system the sweet spot is lower than 24; having bigger disks in a lower number is more efficient (taking cost per port and power consumption into account)

Quote

anything you can say about this?

 

Where specifically was it in Nick's loader?  What file/folder?  I can tell you that the system pretends like it's going to use rc.subr (and I'm guessing it did once upon a time), but at this point, as best I can tell, completely ignores it.

 

 

 

Quote

synology at least sells units with expansion to go above 27 disks but i never studied the documentation of those units to see how big a single raid set can be, maybe a single set of disks is limited to 26?

 

Well in excess, but if you look at those systems they still only have 12-16 "internal" disks; everything else is in an external enclosure.

 

 

 

Quote

besides this, the normal mdadm raid (and DSM) might not be the best solution for such big systems (i do remember that backblaze uses something more application-oriented for its storage systems); just having two redundant disks (raid6) across 40-60 drives sounds risky, so it seems logical to limit the drive count of a single raid set to 24 or 26?

 

 

Keep in mind, the root volume and swap volume are RAID-1.  Having 100 drives isn't any more risky than 2 (quite the opposite); you've just got more copies of the data.

1 hour ago, tcs said:

Keep in mind, the root volume and swap volume are RAID-1.  Having 100 drives isn't any more risky than 2 (quite the opposite); you've just got more copies of the data.

somehow i do remember that it's always only the 1st 12 disks in that raid1 (never looked into this as it seemed not so important)

my concern about redundancy is the raid6 volume with my data; i don't mind DSM, that can be reinstalled, but losing a big xxx TB raid volume ... yeah, you should have a backup, but even a restore needs time; it's not meant to break because of failing disks (or even controller/cpu/ram in SAN environments) - oh what fun when those server ssd's failed all at the same time because an internal "counter" disabled them

 

1 hour ago, tcs said:

Where specifically was it at in Nick's loader?  What file/folder?

 

XPEnology_DSM_6.1.x-quicknick-3.0.img

quicknick.img\.quicknick\quicknick.lzma\.quicknick\etc\rc.d\post


Very few enterprise storage solutions recommend more than 24 drives per array.

 

Synology has a 72-drive unit now - FS6400 - and it can support more than 24 disks per array if you choose the "flexibility" instead of the "performance" storage pool option, but I don't know if there is a limit before 72 drives.

 

However, a 108TB-per-volume limit applies to our XPE-enabled platforms, so that may become a practical limit on the number of drives that can be supported as mean capacity per drive increases (yes, we can deploy multiple volumes per storage pool, but that will create performance contention).

 

https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Why_does_my_Synology_NAS_have_a_single_volume_size_limitation

 

41 minutes ago, IG-88 said:

somehow i do remember that it's always only the 1st 12 disks in that raid1 (never looked into this as it seemed not so important)

my concern about redundancy is the raid6 volume with my data; i don't mind DSM, that can be reinstalled, but losing a big xxx TB raid volume ... yeah, you should have a backup, but even a restore needs time; it's not meant to break because of failing disks (or even controller/cpu/ram in SAN environments) - oh what fun when those server ssd's failed all at the same time because an internal "counter" disabled them

 

 

Well sure - for your data volume you wouldn't put everything into one large array.  When I had this working in the past with 40 drives, I did 2x16+2 with a hot spare for the data volume.  If I get it working again, that'll likely be the case.


it must be very reliable hardware; when using older and used components (or even new consumer grade) i would not feel comfortable with a 34-drive raid6 volume

i have a 14-drive raid6 and have seen drives failing in short succession (cable problems with an lsi sas controller) and even 2 drives failing during a power loss in the house that the simple consumer-grade UPS did not cover completely

(i now use all new components for the main nas and the UPS was replaced)

On 10/20/2020 at 10:38 AM, Warlock928 said:

is anyone using quicknick's loader? i see nothing has been posted in nearly 3 years

were you able to use more than 24 drives?

On 9/23/2020 at 1:39 AM, IG-88 said:

must be very reliable hardware

I've read this topic. Do I understand correctly that on the Jun loader with 3617 you can connect 26 stably working disks, thanks to sda-sdz?
Are my conclusions correct?
- when installing DSM, no more than 12 disks must be connected
- then you can connect up to 26 disks and create a RAID
- what if the rest of the disks are counted as eSATA? After all, they would also be available for creating a RAID, but DSM and SWAP would not be installed on them?
Maybe this is the way out to use >26 disks as eSATA?

I.e. - 12 HDDs as internal, the others as eSATA?

5 hours ago, -iliya- said:

- Then you can connect up to 26 disks and create a RAID

not many people have tested that afaik, but 26 did work here

https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=78160

 

5 hours ago, -iliya- said:

what if the rest of the disks are counted as eSATA? After all, they would also be available for creating a RAID

no, external (usb or sata) disks can't be made into a raid in DSM

imho the limit is not hard at just single-digit sdX; i've had a working config with an asm1166 where the controller "blocked" 32 ports, the disks after that got sdXX names and were part of a max-16-disk array (918+ has 16 as default); as long as the real number stayed below the max of 16 it worked even with two-digit sdXX numbers

imho the way beyond 24/26 is to follow quicknick's lead (see link to the thread above)
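On the sdXX point: the kernel's sd device naming is effectively bijective base-26 (sda..sdz, then sdaa, sdab, ...), so nothing runs out at 26 names. A small sketch of the mapping (`sd_name` is my own illustrative helper, not kernel code):

```shell
#!/bin/sh
# Illustration of kernel sd device naming: sda..sdz, then sdaa, sdab, ...
# sd_name maps a 1-based disk index to its device name.
sd_name() {
    letters=abcdefghijklmnopqrstuvwxyz
    i=$1 name=""
    while [ "$i" -gt 0 ]; do
        i=$((i - 1))                                   # bijective base-26 step
        c=$(printf '%s' "$letters" | cut -c $(( (i % 26) + 1 )))
        name="${c}${name}"
        i=$((i / 26))
    done
    printf 'sd%s\n' "$name"
}

sd_name 1    # sda
sd_name 26   # sdz  (last single-letter name)
sd_name 27   # sdaa (two-letter names begin)
sd_name 48   # sdav
```

So the 26-disk boundary people run into here comes from DSM's configuration and the 0.9 md metadata discussed above, not from the device names themselves.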


Yesterday I tried to add 45 hdds to a virtual dsm on VMware, but after editing the files no hdds were added, and after a restart everything reset to defaults.

I would give up dsm in favor of some other NAS, but I need features such as a network recycle bin, symlinks over the network, and ideally automatic sharing of connected USB drives. That is actually why I hold on to DSM.

