bateau

Expanding beyond motherboard SATA


I would like to expand on @thedaggert’s recent thread and ask what folks use to expand their Xpenology setups once they run out of motherboard SATA ports. 
 

Specifically, I'm interested in whether there are SAS cards compatible with Xpenology so I can mix SATA and SAS drives. 
 

Additionally, can folks comment on JBOD enclosures they use when they run out of drive bays? I'm using an HP Z230 as the base chassis for my setup, but it's limited to 5x 3.5" bays and 5 SATA ports. 
 

Thank you and Happy Father’s Day to all dads. 


You can run any SATA drives on SAS controllers, as well as SAS drives on the same controller.

In a JBOD group you can also mix them; the interface doesn't matter in software, but the controller does, since it's required to communicate with the drive.


SATA controllers can talk to SATA.

SAS controllers can talk to SAS and SATA (mixed).
 

Yes, SAS controllers work on normal motherboards, provided DSM supports them or you have a corresponding extra.lzma containing your driver.

(There are multiple threads on the forum where you can find out which DSM version you have and which cards are supported.)

*Alternatively, you can even make your own extra.lzma; it isn't very hard to do.

 

 

What I'm trying to do, however, is go the 'cheap-ass' route and cheat a little bit; I'll explain here.

 

My current setup is a simple motherboard with one PCIe x1 slot, one PCIe x16 slot, and 4 SATA ports.

 

On AliExpress I purchased a single x1-to-x1(*4) expansion board. It's typically used for mining rigs, but if your CPU supports more PCIe lanes than your motherboard exposes as slots (in my case just the x16 and x1, so 2 slots), you can still use the extra PCIe bandwidth; my CPU supports 4x PCIe x1 and 2x PCIe x16.

 

Theoretically I could run x1 PCIe devices on 36 x1 ports in total. To expand, I purchased:

https://nl.aliexpress.com/item/4000625549082.html?spm=a2g0s.9042311.0.0.511b4c4d1hUeSW

 

In addition you can find a controller that suits your DSM version and that you have drivers for; I went for a simple SATA controller such as:

https://nl.aliexpress.com/item/32352030105.html?spm=a2g0s.9042311.0.0.511b4c4d1hUeSW (this controller works on almost every DSM version)

 

Now on one board I can run four x1 cards, so I could buy four of those controllers without performance loss, since the PCIe bandwidth provided by my CPU can handle it.

As for drivers for the expansion board, that's a different story (I assume it will work right away), but the drivers for its chip (Pericom PI7C9X bridge chip) are easily accessible, so it's easy to make an extra.lzma for it.

 

So in terms of expansion, I think you could do a lot (if my setup above works). I don't have it yet, so for now I can't tell you, but I thought it was nice to share.

 

About JBOD enclosures: if you know you're going to run out of bays, why not consider a bigger case and mod everything in yourself?
For this reason alone, I built my own 'Synology server' inside an MDF case, as MDF is perfect for handling heat and cheap to build with.

You can then easily expand on demand without having to go through many websites, stores, etc.

Cooling-wise, just look at the cases on the market and copy the idea; you'll get it. 

 

Not sure if I answered everything correctly, but I hope I gave you some insight.

 

 

5 hours ago, bateau said:

Specifically, I'm interested in whether there are SAS cards compatible with Xpenology so I can mix SATA and SAS drives. 

The LSI 9211-8i and its OEM variants seem to be the most used SAS card.

 

5 hours ago, bateau said:

Additionally, can folks comment on JBOD enclosures they use when they run out of drive bays

These are usually SATA port multipliers, and they don't work with DSM (Synology blocks them to promote their own products).

 

2 hours ago, CreerNLD said:

Now on one board I can run four x1 cards,

If you use a bridge chip to make 4 PCIe x1 ports from one PCIe x1 port, your limit is still what one PCIe lane can handle, and on PCIe 2.0 that's just 500 MB/s.
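The arithmetic is easy to sketch (ballpark only; real throughput is a bit lower due to protocol overhead, and the 500 MB/s figure already accounts for PCIe 2.0's 8b/10b encoding):

```python
# Every port behind the splitter shares the single upstream PCIe 2.0 x1 lane.
PCIE2_X1_MB_S = 500  # usable bandwidth of one PCIe 2.0 lane, after 8b/10b encoding

def per_device_mb_s(active_devices: int) -> float:
    """Best-case share per controller when all of them transfer at once."""
    return PCIE2_X1_MB_S / active_devices

print(per_device_mb_s(1))  # 500.0 -> one busy controller gets the whole lane
print(per_device_mb_s(4))  # 125.0 -> four busy controllers split it evenly
```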

Quote

If you use a bridge chip to make 4 PCIe x1 ports from one PCIe x1 port, your limit is still what one PCIe lane can handle, and on PCIe 2.0 that's just 500 MB/s.

 

Although this is true, 500 MB/s is plenty for most users (it's also per lane), thanks to the stack push provided by the chip.

 

Also, depending on bus speeds, transaction encoding, and PCIe allocation, you may end up higher than 500 MB/s.

 

Picture below should give some clarity :)

 

[Image: compare-pci-pci-express-2.png]


@IG-88 and @CreerNLD, thank you for chiming in.  I will look into the LSI 9211-8i and threads referring to it. Does it work out of the box, or does it need firmware flashing?  I'm trying to keep costs low, which is why I'm trying to repurpose an existing chassis.  I did a very quick Google search, and it seems the Z230 motherboard is proprietary enough that it doesn't easily transplant into a different chassis.  It seems the Silverstone DS380 is a popular mini-ITX NAS choice.  I would need to find a mini-ITX motherboard to fit the E3-1245 v3 that's in the Z230; otherwise it's pretty much a new build, driving up costs.

2 hours ago, bateau said:

Does it work out of the box or does it need firmware flashing? 

It needs IT firmware (initiator target) so every disk is visible; IR firmware (R as in RAID) is the one that might need to be replaced, but often the card already comes with IT firmware.

 

2 hours ago, bateau said:

I'm trying to keep costs low, which is why I'm trying to repurpose an existing chassis. 

The 9211-8i is often sold used for ~60 dollars/euros.

 

2 hours ago, bateau said:

It seems the Silverstone DS380 is a popular mini-ITX NAS choice.  I would need to find a mini-ITX motherboard to fit the E3-1245 v3 that's in the Z230; otherwise it's pretty much a new build, driving up costs.

Don't stick too much to one housing; look for micro-ATX, which has many more options (3-4 PCIe slots).

Getting mini-ITX for special processors or a 10G NIC will get spendy.

Mini-ITX can be a choice when soldered Gemini Lake CPUs are OK and high SATA port counts and a 10G NIC are not so important.

 

 

20 hours ago, CreerNLD said:

Although this is true, 500 MB/s is plenty for most users (it's also per lane), thanks to the stack push provided by the chip.

If your source is a PCIe x1 slot then it's just one lane, and even if you "split" it into four slots, all four will have 500 MB/s max available together, and one SSD can saturate that (like a cache drive).

 

20 hours ago, CreerNLD said:

Also, depending on bus speeds, transaction encoding, and PCIe allocation, you may end up higher than 500 MB/s.

 

Just look at the PCIe standard: 1.0, 2.0, 3.0, 4.0; that's it.

Other things like encoding are part of the standard.
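To make the "encoding is part of the standard" point concrete, here is a small sketch of usable per-lane bandwidth by generation (raw transfer rate times line-code efficiency; approximate figures):

```python
# PCIe per-lane bandwidth: raw rate (GT/s) x encoding efficiency, converted to MB/s.
PCIE = {
    "1.0": (2.5, 8 / 10),     # 8b/10b line code: 20% overhead
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b line code: ~1.5% overhead
    "4.0": (16.0, 128 / 130),
}

def lane_mb_per_s(gen: str) -> float:
    gt_per_s, efficiency = PCIE[gen]
    return gt_per_s * efficiency * 1000 / 8  # GT/s -> MB/s after encoding

for gen in PCIE:
    print(f"PCIe {gen}: ~{lane_mb_per_s(gen):.0f} MB/s per lane")
# PCIe 1.0 ~250, 2.0 ~500, 3.0 ~985, 4.0 ~1969 MB/s per lane
```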

 

20 hours ago, CreerNLD said:

Picture below should give some clarity

 

So that's not the way we need it; it does not take differences like PCIe 2.0 vs. 3.0 into account.

IMHO this is better:

https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions

 

Apollo Lake and Gemini Lake (SoC including chipset) only support PCIe 2.0, and most of the cheap Marvell chips (9215, 9230) only support 2.0.

It makes a difference whether you use 2 lanes with a Marvell 9230 or a JMB585 (PCIe 3.0 support) on a system supporting PCIe 3.0.

Also, there's no performance gain from a JMB585 in an Apollo Lake system, as the chipset can only use PCIe 2.0.
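Putting rough numbers on those two-lane cases (a sketch; the link negotiates down to the lower generation of card and slot, and the per-lane figures are approximate):

```python
# A PCIe link runs at min(card generation, slot generation).
LANE_MB_S = {"2.0": 500, "3.0": 985}  # approx usable MB/s per lane

def link_mb_s(card_gen: str, slot_gen: str, lanes: int) -> int:
    gen = min(card_gen, slot_gen)  # lexicographic compare is fine for "2.0"/"3.0"
    return LANE_MB_S[gen] * lanes

print(link_mb_s("2.0", "3.0", 2))  # Marvell 9230 (2.0) in a 3.0 system: 1000 MB/s
print(link_mb_s("3.0", "3.0", 2))  # JMB585 (3.0) in a 3.0 system: 1970 MB/s
print(link_mb_s("3.0", "2.0", 2))  # JMB585 on Apollo Lake (2.0): back to 1000 MB/s
```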


Thank you for the feedback, everyone.  I got my hands on a U-NAS NSC-810A chassis and an LSI 9211-8i card in IT mode.  Now the big decision is whether I set up the U-NAS as a JBOD expansion and keep my Z230 chassis, or buy a mATX Haswell board and transplant the entire setup into the U-NAS.

 

Does anyone have a recommendation for the least problematic Haswell mATX motherboard capable of supporting an i7-4790 or E3-1245 v3?  Both are massive overkill for a NAS, but since I already have the parts...


I've used an LSI 9207-8i for a few years and I'm quite happy with it. My motherboard didn't like it at first, but I don't think XPE has ever had any issues with it.

11 hours ago, SteinerKD said:

I've used an LSI 9207-8i for a few years and I'm quite happy with it. My motherboard didn't like it at first, but I don't think XPE has ever had any issues with it.

 

Thank you for the reply.  Was there anything special you needed to do for XPE to support the 9207, or did it work out of the box?  I haven't gotten mine up and running yet, as I'm waiting on some hardware to come in.


I had some issues with my old motherboard: it lost all USB and didn't detect the card every few boots, but that was a hardware problem.
As for Xpenology, it worked flawlessly from day 1 with nothing needing to be done.
The difference between the 9207 and the 9211 is that the 07 supports PCIe 3.0 instead of 2.0, so it can handle higher throughput.


One thing to keep in mind is that these are industrial cards meant for a server environment; they run quite hot and need airflow over them. I added a fan attached to the back PCI brackets that blows directly at the card.

4 minutes ago, SteinerKD said:

I added a fan attached to the back PCI brackets that blows directly at the card.


What kind of fan did you attach?  The UNAS 810A enclosure I'm using is pretty restrictive.  There's a pair of 120mm fans blowing over the drives, the CPU will have a Noctua NH-L9i on it, and there's a 60mm fan venting the motherboard compartment.  The 9211 appears to have a 40mm heatsink on it, so I'll do some measurements to see if I can fit a 40mm Noctua fan on top of the heatsink.

4 minutes ago, bateau said:


What kind of fan did you attach?

I host my NAS in a desktop tower case, so I'm not restricted. I used one of these:

https://gelidsolutions.com/thermal-solutions/accessories-pci-slot-fan-holder/

Unfortunately I don't think that's an option for you, but since the 9211 is a PCIe gen 2 card, maybe it runs cooler?
Depending on how much you load it, it might not get that hot either, I guess.


@SteinerKD One more question if I may. Do you have any experience with mixing SATA and SAS drives on the LSI controllers? Specifically, running 4x SATA on one cable (channel? Not sure of the terminology) and 4x SAS on the other. 

