
Physical drive limits of an emulated Synology machine?



  • 1 year later...
  • 4 weeks later...

you should have followed him in his attempts ;-)

he gave up on that not knowing what went wrong, he was too optimistic about a lot of things and ran into one problem after another; xpenology is not as easy to handle when you want something out of the ordinary, even just a kernel driver can be a problem

 

 

quicknick's statement was interesting but he did not explain anything about how he circumvented the limits

maybe it does not need any special sauce to get it working, maybe you just have to use the info he gave about the number of disks?

it would just need some time to create a vm, tweak the config, add 28 virtual disks and see if a raid set can be created/degraded and rebuilt (for a complete test); if that does not work out then quicknick's 3.0 loader will tell, it's all script, no compiled source code, so anyone can read it
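(for the degrade/rebuild part of such a test, a rough sketch of how it could be checked by hand inside the VM, assuming plain mdadm on the command line and 28 test disks named sdb..sdac - purely illustrative, not taken from any loader:)

# create a raid6 set from 28 virtual disks (device names are only an example)
mdadm --create /dev/md2 --level=6 --raid-devices=28 /dev/sd[b-z] /dev/sda[a-c]
# degrade it by failing and removing one member ...
mdadm --manage /dev/md2 --fail /dev/sdc
mdadm --manage /dev/md2 --remove /dev/sdc
# ... then re-add the disk and watch the rebuild progress
mdadm --manage /dev/md2 --add /dev/sdc
cat /proc/mdstat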

at first glance there was nothing special, just the normal variables for synoinfo.conf and a check for the right max drive numbers (but there might be more to it somewhere)

##########################################################################################
### Max Supported Disks ##################################################################
### set maxdisks=48 will mean that XPEnology will now see 48 drives in DSM. ##############
### No changes to /etc.defaults/synoinfo.conf needed.  Changes are made during boot. #####
### Acceptable maxdisk values: 12,16,20,24,25,26,28,30,32,35,40,45,48,50,55,58,60,64 #####
### default value is 12.  leave blank for 12 disks. ######################################
##########################################################################################
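(the header above suggests the loader simply patches synoinfo.conf during boot; a minimal sketch of that idea, assuming a plain sed replace - this is not quicknick's actual code:)

MAXDISKS=24    # one of the accepted values listed above
for cfg in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
	sed -i "s/^maxdisks=.*/maxdisks=\"${MAXDISKS}\"/" "${cfg}"
done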

 

 


  • 2 weeks later...
On 10/6/2019 at 3:06 AM, IG-88 said:

you should have followed him in his attempts ;-)

he gave up on that not knowing what went wrong, he was too optimistic about a lot of things and ran into one problem after another; xpenology is not as easy to handle when you want something out of the ordinary, even just a kernel driver can be a problem

 

quicknick's statement was interesting but he did not explain anything about how he circumvented the limits

maybe it does not need any special sauce to get it working, maybe you just have to use the info he gave about the number of disks?

it would just need some time to create a vm, tweak the config, add 28 virtual disks and see if a raid set can be created/degraded and rebuilt (for a complete test); if that does not work out then quicknick's 3.0 loader will tell, it's all script, no compiled source code, so anyone can read it

at first glance there was nothing special, just the normal variables for synoinfo.conf and a check for the right max drive numbers (but there might be more to it somewhere)

 

Yeah after I posted my question, I did follow up with him personally and he confirmed that he had tons of issues with his setup.

 

About quicknick, AFAIK he pulled back his loader, so I suppose only those who got his loader earlier would know more. Oh well. One can only dream.


  • 11 months later...

For what it's worth I did more digging on this.  The issue is that on setup, the system does a "raidtool initsetup" - this utility calls scemd which is a binary blob.  That binary blob has a hard call to read synoinfo.conf to see what maxdisks is set to, it then uses the value in maxdisks to try to create the initial md0 (root volume) and md1 (swap).  It will look at what your active drives are, and create logical placeholders for the rest called "missing".  Unfortunately it's also hard-coded to use mdadm metadata format 0.9, which limits an md array to 27 devices.  So - at least for initial setup, you cannot have maxdisks set to >27 devices, or the raidtool setup will fail.  AFTER initial setup remains to be seen... it looks like the upgrade utilities should handle more devices just fine, but I haven't done any testing.
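(the effect of the metadata format is easy to reproduce outside DSM with loopback devices; a rough sketch, assuming mdadm, truncate and losetup are available - the 0.90 create should be refused at this device count while the 1.2 create goes through:)

# back 48 small loop devices with sparse files
DEVS=""
for i in $(seq 0 47); do
	truncate -s 100M /tmp/d$i.img
	DEVS="$DEVS $(losetup -f --show /tmp/d$i.img)"
done
# with the 0.90 metadata that scemd apparently hard-codes, this device count is rejected
mdadm --create /dev/md99 --run --metadata=0.90 --level=1 --raid-devices=48 $DEVS
# the same layout with the current 1.2 metadata is accepted
mdadm --create /dev/md99 --run --metadata=1.2 --level=1 --raid-devices=48 $DEVS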


i did not look really deep into it, my last assumption was this

https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/?do=findComment&comment=159301

 

On 8/9/2020 at 2:05 PM, IG-88 said:

i also noticed a piece of code that might be important as it relates to >26 disks; that one would need to be checked further, it might be important for using more than 24 disks


if [ "${current_maxdisks}" -gt 26 ] && [ "${current_no_disk_swap}" != "yes" ]; then
	xpenoUpdateSynoinfo ${synocfg_root} no_disk_swap yes
	xpenoUpdateSynoinfo ${synocfg_root_def} no_disk_swap yes
elif [ "${current_maxdisks}" -lt 26 ] && [ "${current_no_disk_swap}" = "yes" ]; then
	xpenoUpdateSynoinfo ${synocfg_root} no_disk_swap no
	xpenoUpdateSynoinfo ${synocfg_root_def} no_disk_swap no
fi
if [ "${current_no_disk_swap1}" -eq 1 ]; then
	echo "no_disk_swap=${current_no_disk_swap}"
fi
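(xpenoUpdateSynoinfo itself is not shown in that snippet; presumably it is just a key=value rewrite in the given synoinfo.conf - a guess at what such a helper does, not the actual loader code:)

xpenoUpdateSynoinfo() {
	# usage: xpenoUpdateSynoinfo <conf-file> <key> <value>
	local cfg="$1" key="$2" value="$3"
	if grep -q "^${key}=" "${cfg}"; then
		sed -i "s/^${key}=.*/${key}=\"${value}\"/" "${cfg}"
	else
		echo "${key}=\"${value}\"" >> "${cfg}"
	fi
}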

 

 

anything you can say about this?

 

synology at least sells units with expansion options to go above 27, but i never studied the documentation of those units to see how big a single raid set can be, maybe a single set of disks is limited to 26?

 

it's still an exotic thing to have a drive count above 24 (and i guess it will stay that way; even SSDs are leaving the single-digit TBs behind, privately hosted data does not seem to grow that fast, and 18TB HDDs are on the market, so a bunch of those already adds up to a pretty nice number)

besides the number of TBs it's also the port count: it increases the cost to have 2-3 controllers and an enclosure to hold all these drives, and even when bought cheap, a high drive count (with smaller drives) consumes more energy (noise might be a factor too) and might make a cooling solution for the room necessary (again increasing the power consumption)

besides this, normal mdadm raid (and DSM) might also not be the best solution for such big systems (i do remember that backblaze uses something more application-oriented for its storage systems); having just two redundant disks (raid6) across 40-60 drives sounds risky, so it seems logical to limit the drive count of a single raid set to 24 or 26?

for most people building/buying a system the sweet spot is lower than 24; having fewer, bigger disks is more efficient (taking cost per port and power consumption into account)


Quote

anything you can say about this?

 

Where specifically was it in Nick's loader?  What file/folder?  I can tell you that the system pretends like it's going to use rc.subr (and I'm guessing it did once upon a time) but at this point, as best I can tell, it completely ignores it.

 

 

 

Quote

synology at least sells units with expansion options to go above 27, but i never studied the documentation of those units to see how big a single raid set can be, maybe a single set of disks is limited to 26?

 

Well in excess of that, but if you look at those systems they still only have 12-16 "internal" disks; everything else is in an external enclosure.

 

 

 

Quote

besides this, normal mdadm raid (and DSM) might also not be the best solution for such big systems (i do remember that backblaze uses something more application-oriented for its storage systems); having just two redundant disks (raid6) across 40-60 drives sounds risky, so it seems logical to limit the drive count of a single raid set to 24 or 26?

 

 

Keep in mind, the root volume and swap volume are RAID-1.  Having 100 drives isn't any more risky than 2 (quite the opposite), you've just got more copies of the data.

Edited by tcs

1 hour ago, tcs said:

Keep in mind, the root volume and swap volume are RAID-1.  Having 100 drives isn't any more risky than 2 (quite the opposite), you've just got more copies of the data.

somehow i do remember that it's always only the first 12 disks in that raid1 (never looked into this as it seemed not so important)

my concern about redundancy is about the raid6 volume with my data; i don't mind DSM, that can be reinstalled, but losing a big xxx TB raid volume ... yeah, you should have a backup, but even a restore needs time; it's not meant to break because of failing disks (or even controller/cpu/ram in SAN environments) - oh what fun it was when those server SSDs all failed at the same time because an internal "counter" disabled them

 

1 hour ago, tcs said:

Where specifically was it in Nick's loader?  What file/folder?

 

XPEnology_DSM_6.1.x-quicknick-3.0.img

quicknick.img\.quicknick\quicknick.lzma\.quicknick\etc\rc.d\post


Very few enterprise storage solutions recommend more than 24 drives per array.

 

Synology has a 72-drive unit now - FS6400 - and it can support more than 24 disks per array if you choose the "flexibility" instead of the "performance" storage pool option, but I don't know if there is a limit before 72 drives.

 

However, the 108TB-per-volume limit applies to our XPE-enabled platforms, so that may set a practical limit on the number of drives that can be supported as mean capacity per drive increases - with 18TB drives, a RAID6 of just 8 disks (6 data + 2 parity) already reaches 108TB (yes, we can deploy multiple volumes per storage pool, but that will create performance contention).

 

https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Why_does_my_Synology_NAS_have_a_single_volume_size_limitation

 


41 minutes ago, IG-88 said:

somehow i do remember that it's always only the first 12 disks in that raid1 (never looked into this as it seemed not so important)

my concern about redundancy is about the raid6 volume with my data; i don't mind DSM, that can be reinstalled, but losing a big xxx TB raid volume ... yeah, you should have a backup, but even a restore needs time; it's not meant to break because of failing disks (or even controller/cpu/ram in SAN environments) - oh what fun it was when those server SSDs all failed at the same time because an internal "counter" disabled them

 

 

Well sure - for your data volume you wouldn't put everything into one large array.   When I had this working in the past with 40 drives, I did 2x16+2 with a hot spare for the data volume.  If I get it working again, that'll likely be the case.
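(one way to read "2x16+2 with a hot spare" in plain mdadm terms, device names purely illustrative - on DSM you would normally let Storage Manager create the arrays:)

# first raid6 group: 18 members (16 data + 2 parity)
mdadm --create /dev/md3 --level=6 --raid-devices=18 /dev/sd[b-s]
# second raid6 group: another 18 members
mdadm --create /dev/md4 --level=6 --raid-devices=18 /dev/sd[t-z] /dev/sda[a-k]
# add the hot spare to one group; a spare-group entry in mdadm.conf lets mdadm --monitor move it to whichever array needs it
mdadm --manage /dev/md3 --add /dev/sdal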


that must be very reliable hardware; when using older and used components (or even new consumer grade) i would not feel comfortable with a 34-drive raid6 volume

i have a 14-drive raid6 and have seen drives fail in short succession (cable problems with an lsi sas controller), and even 2 drives fail at a power loss in the house that the simple consumer-grade UPS did not cover completely

(i now use all new components for the main nas and the UPS was replaced)


  • 4 weeks later...
  • 3 months later...
  • 3 months later...
On 23.09.2020 at 01:39, IG-88 said:

must be very reliable hardware

I've read this topic. Do I understand correctly that on the Jun loader with 3617 you can connect 26 stably working disks, thanks to sda-sdz?
Are my conclusions correct?
- when installing DSM, no more than 12 disks should be connected
- Then you can connect up to 26 disks and create a RAID
- what if the rest of the disks are counted as eSATA? After all, they would still be available for creating a RAID, but DSM and SWAP would not be installed on them?
Maybe this is the way to get >26 disks, by using them as eSATA?

E.g. 12 HDDs as internal, the rest as eSATA? (see the sketch below)
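(For context: which slots DSM treats as internal, eSATA or USB is controlled by bitmask fields in synoinfo.conf; a sketch of what a 12-internal split could look like - the values are illustrative only, they depend on maxdisks and the real port layout, and this is not a tested recipe:)

maxdisks="24"
internalportcfg="0xfff"        # bits 0-11: the first 12 slots count as internal
esataportcfg="0xfff000"        # bits 12-23: the next 12 slots show up as eSATA
usbportcfg="0x3000000"         # bits above those are reserved for USB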


5 hours ago, -iliya- said:

- Then you can connect up to 26 disks and create a RAID

not many people have tested that afaik, but 26 did work here

https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=78160

 

5 hours ago, -iliya- said:

- what if the rest of the disks are counted as eSATA? After all, they would still be available for creating a RAID

no, any external disk (usb or esata) can't be made part of a raid in DSM

imho the limit is not a hard one at single-digit sdX; i've had a working config with an asm1166 where the controller "blocked" 32 ports, the disks after that got sdXX names and were part of a max 16 disk array (918+ has 16 as default), and as long as the real disk count stayed below the max of 16 it worked even with two-digit sdXX numbers

imho the way beyond 24/26 is to follow quicknick's lead (see link to the thread above)


Yesterday I tried to add 45 HDDs to a virtual DSM on VMware, but after editing the files no HDDs were added, and after a restart everything reset to the defaults.

I would give up DSM in favor of some other NAS, but I need features such as a network recycle bin, symlinks over the network, and (highly desirable) automatic sharing of connected USB drives. That is actually why I hold on to DSM.

Edited by -iliya-

  • 6 months later...

Hi.

I want to implement two versions of XPenology: 24 HDD (DSM 6.2.3) and 60+ HDD (any DSM).

As a basis, I took a 24-disk SuperMicro server (X9DRD-7LN4F, Backplane: SuperMicro BPN-SAS2-846EL1 REV: 1.10 or 1.11) and HUAWEI OceanStor S2300 expansion shelves (24 HDD).

The main problem: on no system (I tested DS3615 with Jun's 6.1.7, Jun's 6.2.3 and quicknick-3.0 6.1.7) and on no controller (I tested LSI SAS2 2308, LSI 9211-4i, LSI 9217-4i4e, LSI 9200-8e, LSI 9201-16i, LSI 9207-8i) can I pin hard drives to permanent positions, neither in the backplane nor in the shelf.

If you put the disks sequentially into the server, for example, in slots 7, 15, 2, 21 ..., then in DSM they will appear in order: 1,2,3,4 ...

 

(screenshot: the drives listed in DSM as 1, 2, 3, 4, ... regardless of their physical slots)

 

Moreover, right in this topic Quicknick showed a disk in slot 31:

(screenshot)

 

How did he do it?

What is responsible for pinning disk positions in the system?

 

 

 

Edited by Skyinfire

On 12/6/2021 at 11:54 PM, Skyinfire said:

How did he do it?

What is responsible for pinning disk positions in the system?

pinning a disk to a position might be possible with some of the special kernel parameters synology added, but i guess it's more likely he used a vm (or maybe real hardware, but that seems overcomplicated) to get that many ports into dsm and then connected a disk to an upper controller on a specific port; when doing this with sata it will be stable, the moving around is a special behavior of the lsi sas controllers (gaps are closed, so with one disk on port 1 and one on port 8 it will look in dsm as if slots 1 and 2 are used - and it gets funky if you add disks, positions in the raid set will change, which produces a lot of extra difficulties when doing manual raid rebuilds)
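(the "special kernel parameters" are, on Jun's loader, usually passed via sata_args in the loader's grub.cfg; roughly like the line below - the values are only an illustration, they mainly influence SATA controllers rather than LSI SAS HBAs, and the right numbers depend entirely on the hardware:)

set sata_args='DiskIdxMap=0C00 SataPortMap=14 SasIdxMap=0'
# DiskIdxMap:  hex offsets at which each controller's disks start in DSM's slot numbering
# SataPortMap: how many ports DSM should assume per SATA controller
# SasIdxMap:   index remapping for SAS controllers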

 

if you want to test it, the only way would be quicknick's (unauthorized) published 3.0 loader that is mentioned above; he claimed to have solved the 26-disk barrier but i never checked myself and i'm not aware that anyone here took the effort to even try whether his loader works with >26 disks

 

imho your best bet is to try out his loader, and before that you should take into account that, as long as you don't find out how he did it, you would need to stay on the old dsm version the loader was made for


On 11.12.2021 at 16:20, IG-88 said:

he claimed to have solved the 26-disk barrier but i never checked myself and i'm not aware that anyone here took the effort to even try whether his loader works with >26 disks

 

And he solved it on DSM 6.1.7:

 

(screenshot: DSM 6.1.7 showing 48 drives)

 

I'm sure 60 disks will work as stably as these 48 - I created multiple partitions (and migrated ready-made RAID5 partitions from an original Synology) and rebooted / shut down the system several times - it works stably.

 

The problem of pinning disk positions remains: they still move around within the system in a random order, although the RAID5/6 loads every time (as long as all disks of the pool are connected).

I could continue the tests, but I do not understand Linux and have no idea what to do next: someone would need to tell me exactly what to change, and where, in order to get stable positions.

Edited by Skyinfire

On 12/12/2021 at 5:52 AM, Skyinfire said:

I could continue the tests, but I do not understand Linux and have no idea what to do next: someone would need to tell me exactly what to change, and where, in order to get stable positions.

 

You are doing something that few, if any, other people have done successfully.  You will not receive anything other than general advice for such an undertaking; anything more specific would be purely speculative, which is not very helpful.

 

To expect that you will be successful without gaining some deep understanding of Linux, Linux devices, md and other Linux open source toolsets is a bit audacious.


On 12/12/2021 at 5:52 AM, Skyinfire said:

The problem of pinning disk positions remains: they still move around within the system in a random order, although the RAID5/6 loads every time (as long as all disks of the pool are connected).

 

As @IG-88 indicated, certain controllers will move connected devices with impunity.  Because md writes a UUID into a superblock on each member device, it can start and operate an array without regard to the physical/logical disk slot mapping.  I'm not clear whether your experience is due to the controller you are using or the unusual >26-disk configuration, but I don't think you should expect to find a resolution to this, and it very well may not actually be important.
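(which is also why an array can be re-assembled purely from that metadata, no matter which sdX name or slot each member currently has; roughly:)

# every member of an array carries the same array UUID in its superblock
mdadm --examine /dev/sdb | grep -i uuid
# assembly scans all block devices and matches members by that UUID, not by slot or device name
mdadm --assemble --scan
# or explicitly, using the UUID reported by --examine
mdadm --assemble /dev/md2 --uuid=<uuid-from-examine>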

Edited by flyride
