XPEnology Community

Physical drive limits of an emulated Synology machine?


Started by SteinerKD


In my lab I had disabled USB and eSATA (0x00000 etc. in the conf file) and it made no difference; >26 drives was 'unstable'. In your setup it looks like your edited synoinfo.conf is being reset back to defaults; have a look at editing the default version of the file with your drive number and see if it 'holds'. It would be interesting to know why/how the defaults get reset; maybe try a serial connection and see if there are any logs or warnings?


I had a look at the dmesg output from my 1812/513 setup. I have 13 drives: bays 1-4 are SHR1 Volume2, and bays 5-8 plus expansion bays 1-5 are SHR1 Volume1.

 

Interestingly, I would have expected to see drives sda-sdm; however, what I found is that the drives in the 1812 chassis are sda-sdh, while the expansion unit drives are sdia-sdie.

Elsewhere in the file are references to SiI 3132 devices and a 5-port multiplier with a SiI vendor ID.

So perhaps the 'trick' that is part of the Synology architecture is that 'chassis' disks will be sda, sdb, and so on, but anything in an expansion chassis gets a new reference. Looking at the logic here, it makes sense that the port multiplier in the 513 would take the next available drive reference and 'expand' it. Presumably if I added another 513, that would be sdja-sdje.
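For illustration, a device listing on a layout like that would look something like this (hypothetical output, assuming 8 chassis bays and a 5-bay expansion; not taken from the actual box):

    ls /dev/sd*
    /dev/sda   /dev/sdb   ...   /dev/sdh                    (chassis bays 1-8)
    /dev/sdia  /dev/sdib  /dev/sdic  /dev/sdid  /dev/sdie   (expansion bays 1-5)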


14 hours ago, sbv3000 said:

In my lab I had disabled USB and eSATA (0x00000 etc. in the conf file) and it made no difference; >26 drives was 'unstable'. In your setup it looks like your edited synoinfo.conf is being reset back to defaults; have a look at editing the default version of the file with your drive number and see if it 'holds'. It would be interesting to know why/how the defaults get reset; maybe try a serial connection and see if there are any logs or warnings?

I always edit both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf with the same values when I do this; it seems the defaults copy is being ignored and the values are reset back to 12 drives using some other template.
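For reference, the fields in play are typically these four; the values below are only a sketch for a 24-drive layout with eSATA and USB zeroed out as described above (the portcfg values are hex bitmasks, one bit per drive slot, low bit first, and the three masks must not overlap):

    maxdisks="24"
    internalportcfg="0xffffff"   # bits 0-23 set (2^24 - 1) = 24 internal slots
    esataportcfg="0x0"           # eSATA slots disabled
    usbportcfg="0x0"             # USB disk slots disabled

The same block goes into both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf.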


4 hours ago, sbv3000 said:

I had a look at the dmesg output from my 1812/513 setup. I have 13 drives: bays 1-4 are SHR1 Volume2, and bays 5-8 plus expansion bays 1-5 are SHR1 Volume1.

 

Interestingly, I would have expected to see drives sda-sdm; however, what I found is that the drives in the 1812 chassis are sda-sdh, while the expansion unit drives are sdia-sdie.

Elsewhere in the file are references to SiI 3132 devices and a 5-port multiplier with a SiI vendor ID.

So perhaps the 'trick' that is part of the Synology architecture is that 'chassis' disks will be sda, sdb, and so on, but anything in an expansion chassis gets a new reference. Looking at the logic here, it makes sense that the port multiplier in the 513 would take the next available drive reference and 'expand' it. Presumably if I added another 513, that would be sdja-sdje.

Sounds very plausible, and quite a lot like the theory I had and discussed a bit. Basically the system can handle batches of 26 drives (though it only uses batches of 24), but (at least currently) there's no way to emulate a chassis and assign ranges of disks outside the default batch. For now, though, 24-26 should be stable and quite likely enough for most people.


I did a bit more research into the port multiplier ID, and it's the SiI 3726, a five-port multiplier. It has various chassis-to-enclosure comms features using an I2C bus, so I suspect that's implemented by Synology somehow; knowing that their eSATA interface to external enclosures is 'non-standard', it makes sense. Unless someone has a lot of spare time and the motivation to try to emulate the Syno setup, I think the >26 drive scenario is a 'no', and as you say, 24/26 is going to be more than enough. Also, if someone were desperate for a larger array, it would probably be just as easy to set up a second NAS build and use a mounted folder or similar.


On 28/09/2017 at 9:34 AM, IG-88 said:

It would be interesting to know how an original 3615/3617 box handles the two optional 12-slot expansions.

They will be attached by eSATA and a multiplier, but the config has a max. of 12 drives in it.

(Maybe a process changes that when an external unit is detected?)

And as the design is for 36 drives already, as the name '36xx' implies, they must be capable of more than 24 drives.

I believe the ports on this are InfiniBand connections, which will be attached differently to SATA, but I'm not sure how.


9 hours ago, Benoire said:

I believe the ports on this are InfiniBand connections, which will be attached differently to SATA, but I'm not sure how.

 

As there are chips in the Synology boxes like the SiI 3726, and these are SATA port multipliers, and there are also eSATA port connections to the outside, I don't see how InfiniBand comes into play.


The 36xx expands with the 12-bay DX1215 through the proprietary eSATA, so if our theories are right, in real Syno land the sda-sdz limit isn't reached: disks would be sda-sdl in the chassis and sdma-sdml etc. in the expansion boxes. I'd suspect there are newer SATA chipsets than in my old 1812/513 setup doing the expansion too. None of the people I work with have expanded 3615s that I could get an output from, but I suspect there is a hardware teardown site somewhere that has the info. What might be more interesting, though, is how the Syno BIOS actually sets up the drive letter allocation; that's far outside my skill area.


That's not the BIOS; they changed the kernel/drivers to do it.

It's the same reason why a standard SATA multiplier does not work on DSM like it does on normal Linux: code changes by Synology. They activate that functionality only if they detect their own multiplier, with a special ID/BIOS in it.

https://www.synology.com/en-global/knowledgebase/DSM/tutorial/Storage/Does_the_eSATA_port_of_Synology_products_support_eSATA_disk_enclosures_with_port_multipliers

 

A few years ago someone created a kernel patch to (re-?)enable the multiplier functionality; I can't find it at the moment.


  • 4 weeks later...
  • 1 month later...
On 9/25/2017 at 5:19 PM, haldi said:

This guy here tried with 58 drives... maybe he knows more?

[embedded video]

So, I'm back at it again. This guy used the tutorial I had posted here: http://thomaskayblog.com/initial-configuration-of-xpenology-on-45-drives-nas-unit/ to get his pod to 58 drives. Here's what I don't understand: I thought there was a limit somewhere around 26 drives, based on how Linux creates the drive lettering. Apparently that's not exactly the case.

 

Here's what I've tried. I have a 45-drive Storinator and a 60-drive Storinator. In my 45-drive Storinator I originally had 3x LSI 9201-16i cards. I modded the synoinfo.conf file to account for all 48 drives + 10 drives onboard, matching this guy's post to the letter. Reboot, and it comes back on its own to the install screen. So I thought, well, maybe it was because he daisy-chained the SAS ports. So I went out and bought 2 HP SAS expanders and internally cabled them to be daisy-chained off one LSI 9201-16i port. Got to 24 drives, went to 48, rebooted, and it comes back to the install wizard again.

 

I just have NO idea how in the world he's at 58 drives without the machine going back to the install wizard.  I really want this to work, but I'm really lost as to what is so different between my article and his changes.
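For anyone retracing those steps, the bitmask arithmetic for a 58-drive layout (again assuming eSATA and USB are zeroed out, as in the earlier posts; this is a sketch, not a tested config) would be:

    maxdisks="58"
    internalportcfg="0x3ffffffffffffff"   # 58 bits set (2^58 - 1)
    esataportcfg="0x0"
    usbportcfg="0x0"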


 
Same for me: regardless of what I set up in synoinfo.conf, even configs which should work, like the one in the above video, DSM always resets.
Are you on a virtual environment like me? Which DSM version do you use? Maybe Synology changed something in the latest updates; I am on 6.1.4.

Sent from my ONE A2003 using Tapatalk

Link to comment
Share on other sites

I'm currently running bare metal on 6.1.3. I was thinking about going virtual, but I'm not sure what that would do to the performance of apps like Plex.
Since Plex does not support any hardware acceleration like GPU encoding, at least to my knowledge (and if it does, it's very experimental, and on DSM only for Synology's own hardware in specific Synology boxes), I would say performance is the same as bare metal, depending on your hardware of course. CPU power is what you are left with anyway, and if you are able to do VT-d and VT-x (or the AMD equivalents), virtualization is your friend.
I pass the HBA controller through to the XPEnology VM, and for DSM those drives are natively recognized at the full speed of the controller (I think DSM limits this anyway, but the DSM speed test shows the maximum throughput of the HDD).
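(For anyone wondering what that passthrough involves: the hypervisor isn't named here, but on a plain KVM/QEMU host a minimal sketch looks roughly like the following; the PCI address 0000:03:00.0 is hypothetical.)

    # host kernel command line needs intel_iommu=on (or amd_iommu=on)
    modprobe vfio-pci
    # unbind the HBA from its current driver and bind it to vfio-pci
    echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
    echo vfio-pci > /sys/bus/pci/devices/0000:03:00.0/driver_override
    echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
    # then hand the whole controller to the guest
    qemu-system-x86_64 ... -device vfio-pci,host=03:00.0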

But back to topic: it surprises me that the 58-drive config from the above video does not work for you.
I am also wondering about SataPortMap in the grub config file. I was under the impression that you bare-metal guys have to tinker with that setting depending on the number of controllers you use.
In a virtual environment I have so far never touched it. Could that be a reason why this keeps breaking DSM?

Sent from my ONE A2003 using Tapatalk


I honestly haven't seen much about the SataPortMap option, so I haven't been able to play with it. Supposedly it's there to tell the loader how many ports are on your cards, but when you have 24-port cards, I'm not sure how you denote that. Typically it's designed for people with 8- and 4-port cards, so SataPortMap=84, but how to do double digits I'm not sure.
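For context, on Jun's loader that option goes into grub.cfg on the boot stick; following the one-digit-per-controller reading above, a system with one 8-port and one 4-port controller would (hypothetically) use:

    set sata_args='SataPortMap=84'   # controller 1: 8 ports, controller 2: 4 ports

How a 24-port HBA fits into a one-digit-per-controller scheme is exactly the open question.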

 


Well gentlemen, I'm going to say that after much gnashing of teeth I've found a solution.

 

The source of the issue seems to be something in /etc/rc.subr, but I can't quite pin it down. HOWEVER, what I *DID* find is that you can disable swap entirely, which is where the actual problem lies. Normally when you have external shelves, the system identifies the drives in them with unique IDs per shelf, so the system is never expecting a drive with more than three letters in the device ID (i.e. it expects an sda2, but not an sdaa2). First I tried getting it to identify my external trays, but I quickly realized that they expect certain SES (SCSI Enclosure Services) responses that we can't really mimic. Maybe if you purchased a shelf from the same vendor they buy theirs through, but that's kind of a lost cause. HOWEVER... drumroll (sorry for the long-winded response)... I did find that you can disable swap entirely. Simply add no_disk_swap="yes" to your synoinfo.conf.

 

Once that's done, you should be good to go.

 

TL;DR: add

no_disk_swap="yes"

to your synoinfo.conf 

???

profit
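From an SSH session, and keeping both copies in sync as recommended earlier in the thread, that amounts to:

    echo 'no_disk_swap="yes"' >> /etc/synoinfo.conf
    echo 'no_disk_swap="yes"' >> /etc.defaults/synoinfo.conf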


On 12/5/2017 at 9:19 AM, tcs said:

Well gentlemen, I'm going to say that after much gnashing of teeth I've found a solution.

 

The source of the issue seems to be something in /etc/rc.subr, but I can't quite pin it down. HOWEVER, what I *DID* find is that you can disable swap entirely, which is where the actual problem lies. Normally when you have external shelves, the system identifies the drives in them with unique IDs per shelf, so the system is never expecting a drive with more than three letters in the device ID (i.e. it expects an sda2, but not an sdaa2). First I tried getting it to identify my external trays, but I quickly realized that they expect certain SES (SCSI Enclosure Services) responses that we can't really mimic. Maybe if you purchased a shelf from the same vendor they buy theirs through, but that's kind of a lost cause. HOWEVER... drumroll (sorry for the long-winded response)... I did find that you can disable swap entirely. Simply add no_disk_swap="yes" to your synoinfo.conf.

 

Once that's done, you should be good to go.

 

 

TL;DR: add


no_disk_swap="yes"

to your synoinfo.conf 

???

profit

 

That's an interesting find - well done; let's see what beasts it lets loose :)

From my earlier tests, I suspect the SES behaviour in your analysis is what Synology uses with real systems, which allows the four-letter allocation as with my DX unit.

The next challenge will be finding a way to retain the extensive synoinfo.conf changes that these setups need, to avoid RAID crashes during upgrades.


On 12/5/2017 at 3:10 PM, sbv3000 said:

That's an interesting find - well done; let's see what beasts it lets loose :)

From my earlier tests, I suspect the SES behaviour in your analysis is what Synology uses with real systems, which allows the four-letter allocation as with my DX unit.

The next challenge will be finding a way to retain the extensive synoinfo.conf changes that these setups need, to avoid RAID crashes during upgrades.

 

I may have found a fix, but for that to happen I'd need Jun to build us a custom bootloader. If he's not willing to build us a one-off, I can probably do it myself, but it'll be a while before I have time to set up that build environment. There's a possibility the change I want to make isn't possible without the full source, but I'm not sure at this point.


  • 2 months later...
 
What everyone has failed to notice is that this guy was using XPEnoboot 5.2, which uses a heavily modded kernel, not the OEM one we currently use in 6.x.

That being said, I can make this a permanent solution for the few that need it. My loader will be out in a few days, and I can certainly add this to the boot menu.

What you should know, though, is that even with 12+ drives, DSM will only install to the first 12 drives. You can certainly rebuild the software RAID across all attached drives, but it is a manual process.
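(A sketch of what that manual rebuild can look like, assuming DSM's usual layout where /dev/md0 is the system-partition RAID1 mirrored across the disks, and a hypothetical extra disk sdm already partitioned like the others:)

    mdadm --manage /dev/md0 --add /dev/sdm1    # new partition joins as a spare
    mdadm --grow /dev/md0 --raid-devices=13    # widen the mirror so it becomes active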

My configuration tool, in the next release, lets you set 45 drives, a custom number (12-64), or the default value of 12 drives automatically.

Another cheat to get more drives is to use RAID cards: instead of using HBAs, or JBOD on RAID cards, you can do hardware RAID and just provision virtual drives to DSM, creating basic single-drive volumes with no protection. Depending on your RAID controller, this could be faster than software RAID, and you still have protection (from the hardware RAID).



Sent from my SM-N950U using Tapatalk


Also, what we are seeing is simply a modded synoinfo.conf setup; I've done that myself up to 48 drives and there was never a problem until I added more than 26 drives. What I'd be interested to see is a working >26-drive unit with that many HDDs actually installed and running stably. I'm sure Tom's (tcs) change to the swap settings is all good and makes a difference, but I'd love to see a real system beating that limit, if only for fun. Looking forward to quicknick's loader; it sounds ideal.


On 2/23/2018 at 11:59 PM, quicknick said:

What everyone has failed to notice is that this guy was using XPEnoboot 5.2, which uses a heavily modded kernel, not the OEM one we currently use in 6.x.

That being said, I can make this a permanent solution for the few that need it. My loader will be out in a few days, and I can certainly add this to the boot menu.

What you should know, though, is that even with 12+ drives, DSM will only install to the first 12 drives. You can certainly rebuild the software RAID across all attached drives, but it is a manual process.

My configuration tool, in the next release, lets you set 45 drives, a custom number (12-64), or the default value of 12 drives automatically.

Another cheat to get more drives is to use RAID cards: instead of using HBAs, or JBOD on RAID cards, you can do hardware RAID and just provision virtual drives to DSM, creating basic single-drive volumes with no protection. Depending on your RAID controller, this could be faster than software RAID, and you still have protection (from the hardware RAID).



Sent from my SM-N950U using Tapatalk
 

 

 

Still planning on releasing a new loader this week?

 

 

Also, one word of caution on the RAID-card front: be very careful which RAID adapter you pick. I had some 3ware adapters that would go completely out to lunch, and the only way to recover was a hard power cycle. Then occasionally they would mark drives bad under heavy load even though the drives were perfectly fine... it was the ASIC on the RAID adapter losing its mind.

