SteinerKD

Physical drive limits of an emulated Synology machine?


Just out of curiosity: if you wanted to go all out and create a Frankenstein's NAS based on XPEnology, are there any physical limitations on how many disks you could add to a system? Judging by the real-steel hardware there are limits on how many expansion chassis can be linked, which physically caps your theoretical number of drives, but is this limitation also hard-coded in the software?
Say you used one or even multiple RAID cards with expanders, perhaps mixed with all the motherboard ports; you could quickly go far beyond the emulated 3615/17's maximum drive count, even counting expansion chassis. Would the software still be able to handle this, or would it put a definite stop at some set point?
I assume the maximum volume size listed in Synology's documentation would still apply as a software limit rather than a hardware one?


There are different tutorials describing how to change DSM to support more drives; I don't know where the limit is.

In the linked video it's shown with 45, but that might not be the limit. You can try in a VM and tweak the config to the point where DSM might fail.

As it is for smaller environments and private use, everything above 100 drives is not important. When you reach such a high count your priorities might shift, as there are things like power and heat to look after; breaking the build down into smaller units is also an option, as is using different drive types for different needs (SSDs for fast-access data, ...). There are a lot of solutions on the market that might be a better fit than DSM.

Backblaze has a nice blog about their storage setups, and reading there (for a few hours) will give you an idea about how priorities shift depending on the size of the environment.

 

 

 


I built a couple of lab machines with 20-24 drives on XPE 5.2/DSM 5.2 and they worked reliably; I haven't tested with XPE 6.x though.

I did encounter a problem with more than 26 drives: the system was unstable.

I didn't spend too long on this, but I came to the conclusion that XPE/Linux was allocating drives sda through sdz and could not go beyond that. Although Linux should continue with sdaa etc. for additional drives, XPE did not seem to.
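For reference, the sd(x) naming scheme mentioned above is standard Linux kernel behavior and can be sketched in a few lines of Python (this illustrates the naming sequence only, not anything XPEnology-specific):

```python
def scsi_disk_name(index: int) -> str:
    """Return the Linux device name for the index-th SCSI disk (0-based).

    Linux uses a bijective base-26 scheme: sda..sdz for the first 26
    disks, then sdaa, sdab, ... for disk 27 onward.
    """
    suffix = ""
    n = index + 1  # switch to 1-based for the bijective base-26 math
    while n > 0:
        n, rem = divmod(n - 1, 26)
        suffix = chr(ord("a") + rem) + suffix
    return "sd" + suffix

# Disk 26 (0-based index 25) is sdz; disk 27 rolls over to sdaa.
print(scsi_disk_name(25))  # sdz
print(scsi_disk_name(26))  # sdaa
```

If XPEnology stalls at exactly 26 drives, that lines up with something in its stack not handling the rollover from sdz to sdaa.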


Well, he just defined it in the config and only had one drive in.

Maybe he will be surprised if he tries to add the physical drives he has in store.

Let's see his next video ...


Synology systems are upgradeable via expansion chassis up to 180 drives (RS18017xs+ with RX2417sas expansions), presumably using the same DiskStation software, so there seems to be quite a bit of potential for growth. Max volume size remains 200 TB with RAID 5/6, requiring 32 GB of RAM.

On a side note, is there any particular reason the XPEnology emulations seem to focus on the 12-bay DiskStations instead of any of the RackStations?

6 hours ago, SteinerKD said:

Synology systems are upgradeable via expansion chassis up to 180 drives (RS18017xs+ with RX2417sas expansions), presumably using the same DiskStation software, so there seems to be quite a bit of potential for growth. Max volume size remains 200 TB with RAID 5/6, requiring 32 GB of RAM.

On a side note, is there any particular reason the XPEnology emulations seem to focus on the 12-bay DiskStations instead of any of the RackStations?

 

I would say because that was their flagship model when XPEnology started, and probably the model that relates best to home builds. I don't think many people use RackStations at home and need more than 12 bays, at least not the great majority.

There are now 2 additional bootloader models available since @jun released his loader: DS3617xs and DS916+.

8 hours ago, IG-88 said:

Well, he just defined it in the config and only had one drive in.

Maybe he will be surprised if he tries to add the physical drives he has in store.

Let's see his next video ...

I guess he will be using his Fusion Reactor to power it too :smile:

There are a few guys who have tried with 24+ actual drives, and their posts say the system crashes. I'm not a Linux expert, but the 'coincidence' in my tests between >26 drives and the sda-sdz naming seems a logical thought. I no longer have 26 drives lying around to play with, so I'm not planning to test 6.x!


16 HDDs on an LSI SAS controller in IT mode are running fine.

The first six slots are SATA on the mainboard, for later use with SSDs.

[Attached screenshot, 2017-09-26]


Well, if anyone really wants to know, it should be possible to do this in a VM with thin disks.

It doesn't take much in the way of resources, just 1-2 hours.

 

 

3 hours ago, IG-88 said:

Well, if anyone really wants to know, it should be possible to do this in a VM with thin disks.

It doesn't take much in the way of resources, just 1-2 hours.

 

 

I'll do it tonight, all in the name of fun and science :smile:


24 disks added so far. Disks 1 and 2 are singles, disks 3-18 are RAID 6, disks 19-24 are as yet unused (planning to add them to the RAID 6 until it's at the max of 24).
No noticeable issues so far.
Will keep adding drives tomorrow; now I need some sleep.
 

[Attached screenshot: 24d.png]


Generally, the RackStations which have larger expansion capacity use SAS HBAs and therefore are not limited by the sd(x) issue; the fact that RackStations use SAS HBAs is one of the reasons why we do not have a RackStation image... For what it's worth, a RackStation image with native LSI SAS drivers would probably allow for proper drive sequencing on LSI cards as well as large expansion potential.


Ok, so here are my results so far.
24 disks ran just fine, no issues.
Added 2 more disks; nothing happened, as this was above the supported disk count.
SSHed in, edited the configs to support 28 disks, and rebooted.
No DiskStation; then a message that I have to reinstall it (it finds the old settings and asks if I want to migrate). Getting in, I have a crashed RAID, the settings seem to have reset themselves, and only 12 drives are showing.
Again I edited the config, this time to support 26 instead of 28 drives, and rebooted.
The system starts up fine, 26 drives are showing, and the crashed RAID has reassembled itself and is now being verified/scrubbed.
Once the system is stable again I will keep pushing and add 1 drive at a time, but currently it looks like 26 disks could be a "hard limit" (I just have to be thorough and make sure I didn't make an editing mistake that crashed my machine).

 

[Attached screenshot: 26d.png]
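For anyone wanting to repeat the experiment: the config edits referred to in this thread are commonly reported to be the maxdisks value and the port bitmasks (internalportcfg, esataportcfg, usbportcfg) in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf; treat those exact keys and paths as an assumption to verify on your own build. A small Python sketch of how such a slot bitmask is computed:

```python
def port_bitmask(first_slot: int, count: int) -> str:
    """Hex string with `count` bits set starting at bit `first_slot`.

    This is the shape of the port-config values (one bit per disk slot)
    that the maxdisks edits described in this thread go hand in hand
    with. Key names and file paths are assumptions, not verified facts.
    """
    return hex(((1 << count) - 1) << first_slot)

# 26 slots (the apparent sda-sdz ceiling) starting at slot 0:
print(port_bitmask(0, 26))  # 0x3ffffff
# vs. the default 12 slots of a DS3615xs/DS3617xs:
print(port_bitmask(0, 12))  # 0xfff
```

Mismatched bitmask and maxdisks values are one plausible source of the "editing mistake" failure mode mentioned above, which is worth ruling out before blaming a hard limit.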

3 hours ago, Benoire said:

Generally, the RackStations which have larger expansion capacity use SAS HBAs and therefore are not limited by the sd(x) issue; the fact that RackStations use SAS HBAs is one of the reasons why we do not have a RackStation image... For what it's worth, a RackStation image with native LSI SAS drivers would probably allow for proper drive sequencing on LSI cards as well as large expansion potential.

Sounds like you are on to something. One thought, though: the DS3617xs is expandable to 36 disks via expansion units, so logically the software for it should be able to handle that many drives by default?


@SteinerKD

 

You do it in a more complicated way than I would have.

I would have changed the config once to support 48 disks (as long as no more than 26 disks are plugged in, nothing strange will happen)

and would create and destroy simple RAID units (Basic, RAID 0) for every try (24, 26, 27, 36, 48); expanding might take too long. If the problem is about sda-sdz, then the RAID type doesn't matter, and if 27 works then 48 should work too.

 

57 minutes ago, IG-88 said:

@SteinerKD

 

You do it in a more complicated way than I would have.

I would have changed the config once to support 48 disks (as long as no more than 26 disks are plugged in, nothing strange will happen)

and would create and destroy simple RAID units (Basic, RAID 0) for every try (24, 26, 27, 36, 48); expanding might take too long. If the problem is about sda-sdz, then the RAID type doesn't matter, and if 27 works then 48 should work too.

 

I'm sure it can be done quicker and simpler, but nothing wrong with being thorough either, right?
It seems, though, that configuring a high number doesn't crash you, and neither does adding disks above a certain point; the crash occurred when you fully populated the higher config (without even using the disks).
I've also noticed that with more disks I get more frequent failures when adding a disk to the RAID, where I have to repair and retry adding the disk (and it will add fine eventually).


As it was your question, it's up to you.

The results seem mixed, but do not sound promising; crashes when doing a RAID extension are never a good thing.

But I guess it's kind of academic; in 99.99% of the XPEnology cases a max of 24 disks does the job.

Bigger systems are really noisy and consume a lot of power, and most of the private use cases go for size and not for speed (more disks, more speed). With affordable 8 and 10 TB disks there is only a slim possibility that you need more than 24 disks.

By the way, if someone really wants to build a system with a lot of disks in one case (like 45 to 60), have a look at Backblaze's blog: nice technical details, an open hardware design (Storage Pods), and fun to read. It's also interesting to see the changes in the hardware design over time (like port multipliers vs. full-speed ports).

 

 


I suspect the sda-sdz limit, if correct, is somehow a limit within the XPE environment (it would be interesting to, say, build a 30-disk Debian/Ubuntu VM and see what happens).

As an aside, I have a DS1812 and a DX512 expansion unit; I'll have a look to see if there is anything that shows how disks are referenced that might be useful for this exercise.


It would be interesting to know how an original 3615/3617 box handles the two optional 12-slot extensions.

They will be attached by eSATA and port multiplier, but the config has a max of 12 drives in it.

(Maybe a process changes that if an external unit is detected?)

And as the design is for 36 drives already, as the name "36xx" implies, they must be capable of more than 24 drives.

1 hour ago, IG-88 said:

As it was your question, it's up to you.

The results seem mixed, but do not sound promising; crashes when doing a RAID extension are never a good thing.

But I guess it's kind of academic; in 99.99% of the XPEnology cases a max of 24 disks does the job.

Bigger systems are really noisy and consume a lot of power, and most of the private use cases go for size and not for speed (more disks, more speed). With affordable 8 and 10 TB disks there is only a slim possibility that you need more than 24 disks.

By the way, if someone really wants to build a system with a lot of disks in one case (like 45 to 60), have a look at Backblaze's blog: nice technical details, an open hardware design (Storage Pods), and fun to read. It's also interesting to see the changes in the hardware design over time (like port multipliers vs. full-speed ports).

 

 


I didn't mean to make it sound like criticism. You are more experienced with this than me, so I just did it as I figured it; you, being more accustomed to the system, figured out a better way. I think some of the problems with growing my RAID were due to having the virtual disks stored on an old and slow drive; accessing multiple VM disks on that slow medium was bound to have some impact. Usually the problem I described only occurs when I try to grow an array with multiple disks at once; it doesn't happen if I add 1-2 at a time.
I agree that it's mainly a theoretical problem, but I still find it interesting. Up to 24 drives seems rock solid with no issues. 26 seems to work too, but for the most stable solution maybe keep it at 24, with the possibility to mount 1-2 drives through USB. Regardless, we can now say with some confidence that going past 26 drives is NOT stable.

12 minutes ago, SteinerKD said:

 

I didn't mean to make it sound like criticism.

Me neither; my intention was to save time.

I'm still thinking about why it's unstable (sbv300 seemed to have the same experience with faster hardware).

Such simple algorithms should be more binary, like yes or no, not maybe.

 

Just now, IG-88 said:

Me neither; my intention was to save time.

I'm still thinking about why it's unstable (sbv300 seemed to have the same experience with faster hardware).

Such simple algorithms should be more binary, like yes or no, not maybe.

 

Well, all we can do is keep digging, try different things, and see what we come up with. I'm game to experiment.
Do you think this could somehow tie into the 24-disk limit on RAID arrays?


Small update.
This is kind of frustrating: I'm currently running my NAS with 27 drives, so why does it suddenly work?
I edited the config as per usual, no problems, and added the extra virtual disk, no problem. I tried creating a Basic 1-disk array with it; array creation failed and I got a failed-system-partition warning. Repeatedly clicking "repair system partition" did nothing, so I thought I had now confirmed the magical 26-disk barrier.
I rebooted the machine and it came up clean; I again tried creating the Basic array on disk 27, and this time it passed fine. I created a volume on it, which also worked fine.
So, we're still at "it doesn't work, but it kinda does, but not very reliably, and we don't know why".

A side note that came to my mind: no Synology unit contains more than 24 slots (although sda-sdz implies 26 possible); what varies between models is how many external chassis they can support (0-7), each of which would then supply another set of (up to) 24 disks in the form of eSATA disks. Does anyone know if the eSATA disks are identified or mounted differently than the regular disks? (In theory, the software then handles disks in batches of 24 (26).) Maybe if you were somehow able (via emulation, the loader, or whatever) to assign ranges of disks to separate eSATA ports, you could in practice create "virtual expansions" with additional drive ranges. Ok, I know this is highly theoretical and not very practical, but I still find it interesting.


Changing the config to 28 disks again, without even adding the 28th disk to the system, greeted me with a "migrateable" NAS: crashed, with max drives reset to 12. I've managed to reproduce this twice now (edit for 28-drive support and it reboots to a reset state with 12 drives supported).
I'm going to test something else now: disable the USB ports, define 24 SATA disks and 24 eSATA disks, and see what happens.
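The 24 SATA + 24 eSATA split described above can be sketched as the kind of slot bitmasks DSM's config is commonly reported to use; the key names internalportcfg/esataportcfg here are an assumption for illustration, not verified against this loader:

```python
def split_ports(total_slots: int, internal_slots: int) -> dict:
    """Split `total_slots` disk slots into internal and eSATA bitmasks.

    The first `internal_slots` slots stay internal; the remainder are
    marked eSATA. Returned as hex strings in the style of synoinfo.conf
    values (key names are illustrative; verify against your own build).
    """
    all_mask = (1 << total_slots) - 1
    internal_mask = (1 << internal_slots) - 1
    return {
        "internalportcfg": hex(internal_mask),
        "esataportcfg": hex(all_mask ^ internal_mask),
    }

# 24 internal + 24 eSATA out of 48 slots:
cfg = split_ports(48, 24)
print(cfg["internalportcfg"])  # 0xffffff
print(cfg["esataportcfg"])     # 0xffffff000000
```

The two masks must not overlap and (together with any USB mask) should cover each slot exactly once; an overlap is an easy editing mistake that could itself explain a crashed install.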

