XPEnology Community

1.04b/918+ - Slots available = 16?


indyslim

Question

5 answers to this question



The config file is probably set to 16 slots. I'm not sure why your config file is like that; it could be because of a non-original compiled DSM version, or, if you had a previous installation, maybe your cache somehow wrote it.

 

Or even grub.cfg could have a config line to edit before it boots DSM. It could be many things, really.

 

No, 16 is not the maximum; you can use 60+ on ANY DSM, as long as you edit the corresponding files.

 

Don't worry though; even if it shows 16, everything is fine.


5 minutes ago, CreerNLD said:

The config file is probably set to 16 slots. I'm not sure why your config file is like that; it could be because of a non-original compiled DSM version, or, if you had a previous installation, maybe your cache somehow wrote it.

 

That's one of Jun's tweaks to make the system much more usable; nothing special, it's standard (and 12 is normal for 3615/17).

the additional "slots" dont hurt in any way, just the option to expand without and hassle

 

8 minutes ago, CreerNLD said:

Or even grub.cfg could have a config line to edit before it boots DSM. It could be many things, really.

No, grub.cfg has nothing to do with it; it's about a patch in extra.lzma and synoinfo.conf.
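
For reference, the slot count boils down to a handful of keys in /etc.defaults/synoinfo.conf (mirrored to /etc/synoinfo.conf). A minimal sketch of what a patched 16-slot 918+ config can look like; the exact masks in any given loader build may differ:

    maxdisks="16"              # number of internal drive slots DSM will show
    internalportcfg="0xffff"   # bitmask, one bit per internal slot (bits 0-15)
    esataportcfg="0xff0000"    # example: eSATA slots follow the internal ones (bits 16-23)
    usbportcfg="0x3000000"     # example: USB slots after that (bits 24-25)

Going to 24 slots would mean maxdisks="24", widening internalportcfg to 0xffffff, and shifting the eSATA and USB masks up so the three bitmasks never overlap.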

 

9 minutes ago, CreerNLD said:

No, 16 is not the maximum; you can use 60+ on ANY DSM, as long as you edit the corresponding files.

Also not correct; the proven safe number is 24. There is a nice series of 3 videos of a guy trying to run about 40-60: he found out in video two that he had problems he hadn't noticed in the 1st video, and in the later 3rd video he gave up completely (some people just see the 1st video and think it's no problem to have that many drives).

Quicknick had a list of bigger safe disk counts in his loader, but I've never seen anyone try it, and quicknick never commented on or documented how he came to these numbers.

 

The only proof I'm sure about is 24, but if you can point out where your confidence about 60+ comes from, I'd like to check it out.

 

 

Great insight. Well, I don't have that many drives to run a test for you, but it's perfectly possible to do so; in the end, Synology is still just Linux and doesn't rely on the custom-built Synology software. Yes, it may take time for an average Joe to get things supporting and communicating well, but it's definitely not impossible.

 

With that, I'd say it would easily support 60+ on Linux; you'd probably need to write your own SPK to do some user-friendly magic, but it's definitely not impossible.

 

I would, if I had such a big array of drives; I honestly think it's possible. I have my NAS set to 36 slots just because I think it's nice to have some room for USB attachments and so forth. Whether it works, I don't know for now, but I'm definitely willing to give it a try.

 

What I meant with grub.cfg is that, on bootup of the loader, you could decrement or increment the number of drives before mounting into Linux; this would need the lzma to be unpacked, the file searched for and edited, saved, the lzma repacked, and then the system booted, on every boot. Yes, it's a workaround, but in theory you could do it.
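
For what it's worth, the unpack/edit/repack cycle itself is only a few commands. A rough sketch, assuming extra.lzma is an lzma-compressed cpio archive and that the edited file path inside it is purely illustrative:

    # unpack extra.lzma into a working directory
    mkdir extra && cd extra
    lzma -dc ../extra.lzma | cpio -idmv

    # edit whatever file carries the slot count (this path is made up)
    sed -i 's/maxdisks="16"/maxdisks="24"/' path/to/slot-count-file

    # repack
    find . | cpio -o -H newc | lzma -9 > ../extra.lzma.new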

 

So please don't pin me to my words, but I can say: it's Linux, and Linux has no limitations; only the hardware does.

 

Thanks for your insight though; good-to-know info like this is hard to find on the forum. Perhaps you could add me on Skype, or we could communicate somewhere else, @IG-88


36 minutes ago, CreerNLD said:

Great insight. Well, I don't have that many drives to run a test for you

You can test it in a VM, ESXi or VirtualBox; virtual disks can be "thin", so it won't take much space to have 60 disks.
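
A rough sketch of how that could look with VirtualBox's command line; the VM name "xpenology" and the controller names "SATA"/"SATA2" are assumptions, and a single VirtualBox AHCI controller tops out at 30 ports, so 60 disks need two controllers:

    # create 60 thin (dynamically allocated) 100GB disks and attach them
    for i in $(seq 1 60); do
      VBoxManage createmedium disk --filename "disk$i.vdi" \
          --size 102400 --format VDI --variant Standard
      ctl="SATA"; port=$((i - 1))
      [ "$i" -gt 30 ] && { ctl="SATA2"; port=$((i - 31)); }
      VBoxManage storageattach xpenology --storagectl "$ctl" \
          --port "$port" --device 0 --type hdd --medium "disk$i.vdi"
    done

"Standard" VDIs only allocate space as it is written, so 60 of them cost almost nothing on the host until DSM actually fills them.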

 

39 minutes ago, CreerNLD said:

Synology is still just Linux and doesn't rely on the custom-built Synology software

Ohh, it does. They have modded their kernel and also have their own kernel modules they don't publish, so even with the recently published kernel source from 6.2.2 (v24992) you can't build your own kernel. The way it is now, we use Synology's original kernel that comes with DSM and add kernel modules; but if modules need support in the kernel that is not present in Synology's binary (like with Hyper-V or AMD), then we can't do much, only a custom kernel could do this (like it was the case with 5.x).

 

44 minutes ago, CreerNLD said:

With that, I'd say it would easily support 60+ on Linux; you'd probably need to write your own SPK to do some user-friendly magic, but it's definitely not impossible.

But they change parts of the kernel their own way if needed, and as their stuff is part of the kernel we use, you would need to read the kernel source and maybe some DSM scripts to find out how it's done, and then adapt or extend scripts to change or improve this. I'm not sure this is even worth the effort; there are only very few people needing support for >24 drives, and with 18TB disks around you can have more than 400TB in one system.

55 minutes ago, CreerNLD said:

What I meant with grub.cfg is that, on bootup of the loader, you could decrement or increment the number of drives before mounting into Linux; this would need the lzma to be unpacked, the file searched for and edited, saved, the lzma repacked, and then the system booted, on every boot. Yes, it's a workaround, but in theory you could do it.

The grub.cfg would be just a plain file on the 1st partition of the USB stick; the extra.lzma containing the patch is on the 2nd.

The patch tries to apply on boot; if it's already applied it is skipped, and if an update restores the original file (synoinfo.conf), it gets reapplied. It's easy to mod the patch for the 918+ (like from 16 to 24), as all the code to patch is already in the patch: the original unit only has 4 drives, so to make it usable it had to be extended. Since the default for the 3615 and 3617 is 12 drives, it never needed changing and there is no code in the patch to mod, so you would need to make your own extended patch by doing it manually and then use diff to create a new patch that would be integrated into extra.lzma.
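
In shell terms, that workflow might look something like the following; the file names are illustrative, and patch -N is just one way to get the "skip if already applied" behaviour described above:

    # keep a pristine copy, then edit the live file by hand
    cp synoinfo.conf synoinfo.conf.orig
    vi synoinfo.conf            # raise maxdisks and widen the port bitmasks

    # turn the manual edit into a reusable unified diff
    diff -u synoinfo.conf.orig synoinfo.conf > maxdisks.patch

    # what the loader could then run at boot; -N skips an already-applied patch
    patch -N /etc.defaults/synoinfo.conf < maxdisks.patch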

It's not that difficult, as we have the 918+ to see how it's done; you would just need to do the same for the 3615/17.

 

1 hour ago, CreerNLD said:

So please don't pin me to my words, but I can say: it's Linux, and Linux has no limitations; only the hardware does.

Not if you have no control over the code, like in this case. XPEnology is a hacked appliance, and Synology tries to keep freeloaders out - not really that hard, but there are changes with new major versions, and there has been code signing of kernel modules for a while. It's to be expected that with 7.0 they will add some more protection, but they are not putting much effort into shutting us out; there are a lot of things that could be done pretty easily, but we have not seen those kinds of changes inside a running DSM line like 6.0, 6.1, 6.2.

 

 



Even if DSM is Linux, the Synology utilities that make DSM what it is are definitely not standard Linux. udev is completely customized. smartd is completely customized. The disk cache is customized. RAID5 is customized. And all the configuration that you can do through the DSM UI goes through Synology's own binaries, not the standard Linux utilities. So if you want to configure a 60-drive array (or whatever), you might be able to get the Linux side of DSM to mount it, but Synology's utilities either won't touch it or will immediately scramble it.

 

Not too long ago I tried to help a guy with an array recovery, and the system was left in a Linux-functional state where his data was all accessible; but then he decided to use DSM utilities to "fix" something and poof - all gone.

 

Synology wants things the way they want them. They don't code their stuff to be fully interoperable with standard Linux. That's why the loaders and all the core scripts, patches, etc. try to keep DSM in a completely pristine, unadulterated state.

Edited by flyride
