XPEnology Community

Building NAS (advice needed)


Filo301


Hi,

I'm new here.

I recently installed DSM 6.1.7 on an Asus E45M1-I DELUXE (AMD E-450, Realtek 8111E, 6x SATA). It works pretty well and the speed is enough, but it isn't stable: it runs fine for a day or two and then stops responding. I think I should switch to an Intel CPU. Unfortunately, motherboards with integrated CPUs mostly have 2 or 4 SATA ports, and I need about 10.

I was thinking about a motherboard + an LSI SAS controller, or two 88SE9215 controllers.

I have some 4 GB DDR3L sticks lying around, so an ITX motherboard would be nice, but mATX is OK. I want it to be cheap and low-power. I'm also not sure whether an LSI SAS controller would be a better choice than the 88SE9215.

I can buy:

1. ASUS N3050M-E (N3050, Realtek RTL8111H, 2x SATA, 3x PCIe x1) + 2x Marvell controllers
2. ASRock Q1900M (J1900, Realtek RTL8111GR, 2x SATA, 2x PCIe x1, PCIe x16) + LSI or 2x Marvell controllers
3. ASRock Q1900-ITX (J1900, Realtek RTL8111GR, 2x SATA + 2x SATA, PCIe x1, mPCIe) + Marvell controller
4. ASUS N3150I-C (N3150, Realtek RTL8111H, 2x SATA, PCIe x4, mPCIe) + LSI controller
5. ASUS N3150M-E (N3150, Realtek RTL8111H, 2x SATA, 3x PCIe x1) + 2x Marvell controllers
6. ASRock J3455M (J3455, Realtek RTL8111E, 2x SATA, 3x PCIe x1) + 2x Marvell controllers

 

I need it for backups from 4 PCs and 6 Android devices, iSCSI for 2 PCs, and Docker with openHAB/Home Assistant. The E-450 was enough for these "tasks".

Which motherboard would be best, and with which controllers?

 

Edited by Filo301

5 hours ago, jensmander said:

In short: stay away from Realtek NICs and Marvell controllers. Try to use Intel NICs and LSI chipsets. A good start is the update report section. Users of bare-metal systems post their hardware specs in there.

Why stay away from Realtek? I see mixed reports about it, and about Intel for that matter. I have DSM 6.2u2 running on an 8111E, and I've also got DSM 6.2.2u3 on an 8111H, so I'm not sure there's really any concern here. Some loaders seem to work with the Intel i211 and some with the i219, but vice versa it sometimes doesn't work.
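For what it's worth, you can check which NIC chip DSM actually sees from an SSH session. Below is a minimal sketch (plain Python over standard Linux sysfs paths, nothing DSM-specific; the vendor-ID table only covers the two vendors discussed here):

```python
# List network interfaces with their PCI vendor/device IDs.
# Rough sketch: reads standard Linux sysfs entries; USB NICs expose
# different attributes and are simply skipped.
import os

SYS_NET = "/sys/class/net"
VENDORS = {"0x8086": "Intel", "0x10ec": "Realtek"}  # assumption: only these two matter here

for iface in sorted(os.listdir(SYS_NET)):
    dev_dir = os.path.join(SYS_NET, iface, "device")
    if not os.path.isdir(dev_dir):
        continue  # loopback, bridges, bonds have no backing device
    try:
        with open(os.path.join(dev_dir, "vendor")) as f:
            vendor = f.read().strip()
        with open(os.path.join(dev_dir, "device")) as f:
            device = f.read().strip()
    except OSError:
        continue  # non-PCI device (e.g. USB NIC)
    print(f"{iface}: vendor={VENDORS.get(vendor, vendor)} device={device}")
```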


11 hours ago, jensmander said:

In short: stay away from Realtek NICs and Marvell controllers. Try to use Intel NICs and LSI chipsets. A good start is the update report section. Users of bare-metal systems post their hardware specs in there.

Unfortunately the only motherboards I can buy cheap (<$100) are the ones listed in the first post; none of them has an Intel NIC, but I could add a PCIe Intel NIC. I'll probably pick the ASRock J3455M because it's the fastest and has the same CPU as the DS918+. I'm worried about LSI controllers: the motherboard only has PCIe x1, so that's only 500 MB/s (should I care about this? I only use gigabit LAN). I could use two LSI cards if the PCIe x1 speed becomes a bottleneck. I've also seen that used LSI controllers cost about the same as Marvell ones.


I don't think an LSI controller will work at all without at least a PCIe x4 slot. If you can live with four drives, the J-series are decent, inexpensive boards, and the Realtek driver works fine on DS918+ with the "real3x" mod - see my signature for more, or search for real3x.

If you need more than four SATA ports, the ASRock J-series is not the way to go.


I will try it anyway. The ASRock J3455M that looks promising has a PCIe x16 slot, but it's electrically x1. If PCIe x1 turns out to be a bottleneck, I'll add a second H310, one for every 5 drives.

I also ordered a PCIe riser. I want to desolder the integrated Realtek NIC and put an Intel NIC in its place 😀 it should work.

Edited by Filo301

The Dell H310 came today. I tested it on the Asus E45M1 with two SSDs in RAID 0: Realtek NIC, transfers to a PC with an Intel NIC via two Chinese routers.

On the left is the SMB share, on the right is iSCSI.

PCIe x1:

[screenshots: PCIe x1 SMB and iSCSI transfer speeds]

 

PCIe x4, with some services turned off (I noticed CPU usage was often at 100%):

[screenshots: PCIe x4 SMB and iSCSI transfer speeds]

 

I ran the test while playing online video on the PC, so that may have affected the speeds a little.

Overall the bandwidth is very similar, so PCIe x1 isn't a bottleneck for the LSI controller when using gigabit LAN.
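The back-of-the-envelope numbers agree with that; here is a quick sanity check (assuming PCIe 2.0 with 8b/10b encoding and roughly 6% Ethernet/IP/TCP overhead, so real-world figures land a bit lower):

```python
# Rough throughput comparison: PCIe 2.0 x1 uplink vs. a single gigabit link.
pcie2_x1_MBps = 5.0e9 * (8 / 10) / 8 / 1e6   # 5 GT/s, 8b/10b coding -> ~500 MB/s
gbe_MBps      = 1.0e9 * 0.94 / 8 / 1e6       # 1 Gbit/s minus ~6% protocol overhead -> ~117 MB/s

print(f"PCIe 2.0 x1 : ~{pcie2_x1_MBps:.0f} MB/s")
print(f"Gigabit LAN : ~{gbe_MBps:.0f} MB/s")
print(f"Headroom    : ~{pcie2_x1_MBps / gbe_MBps:.1f}x")
```

So a single GbE client can't saturate the x1 link; the limit would only show up in controller-local work such as RAID rebuilds.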

Edited by Filo301

I need advice about an Intel NIC; 2x LAN (bonding) should be enough. Which one? I found:

1. "for Intel i340-T4" - probably the 82580 chipset, 4x LAN, PCIe x4
2. "I350-T2" - i350AM2 chipset, 2x LAN and PCIe x4, but I would have to cut it down to PCIe x1
3. "E575T2" - 82575 chipset, 2x LAN and PCIe x1, so it's the most suitable and the cheapest too
4. "82576-T2" - 82576 chipset, 2x LAN and PCIe x1

3 and 4 would be great because they're PCIe x1, so they'll fit without any problem. 2 looks best, but I'd have to cut the PCIe x4 connector down to x1.

What about compatibility with DSM? Is there any advantage of the i350 over the 8257*?
One more question: is there any way to use one SSD cache drive for both reads and writes, and/or one SSD as cache for more than one volume?

 

Some more tests with almost all services turned off:

PCIe x4:
[screenshots: PCIe x4 ATTO and transfer results]

PCIe x1:
[screenshots: PCIe x1 ATTO and transfer results]

Edited by Filo301

  • 7 months later...

Hello

 

I'm looking for a motherboard like these, a J3455 or similar, to put a SAS controller in it.

Is it possible to set up an LSI 9211 controller (PCIe x8) in one of these? It seems that the x16 slot is only for graphics cards.

 

Thanks 


  • 2 weeks later...
On 5/6/2020 at 3:26 PM, SantiBass said:

Hello

 

I'm looking for a motherboard like these, a J3455 or similar, to put a SAS controller in it.

Is it possible to set up an LSI 9211 controller (PCIe x8) in one of these? It seems that the x16 slot is only for graphics cards.

 

Thanks 

I'm using an ASRock J3455-M mobo with 8 GB RAM, a Dell H310 SAS controller (PCIe x16 @ x4), an Intel i350-T2 NIC (PCIe x1) and a TeVii S480 DVB-S2 connected via USB + Jun's loader 1.04b with DSM 6.2.3.

This mobo requires pins A1 and B17 to be shorted to use an x16 PCIe card in the x1 slot.

I didn't know that and I "wasted" a few hours.

It's still a test build, but it's on the right track.

[photos of the test build]

Edited by Filo301

7 hours ago, Filo301 said:

I'm using an ASRock J3455-M mobo with 8 GB RAM, a Dell H310 SAS controller (PCIe x16 @ x4), an Intel i350-T2 NIC (PCIe x1) and a TeVii S480 DVB-S2 connected via USB + Jun's loader 1.04b with DSM 6.2.3.

This mobo requires pins A1 and B17 to be shorted to use an x16 PCIe card in the x1 slot.

I didn't know that and I "wasted" a few hours.

It's still a test build, but it's on the right track.

[photos of the test build]

Wow, it's amazing. I bought exactly the same mobo and the same SAS controller these days. Could it work in PCIE2 (x16) without changes?

The Realtek, I suppose, doesn't work in XPEnology, does it? Is the Intel NIC totally necessary? And what is the thing in PCIE3?


1 hour ago, SantiBass said:

Wow, it's amazing. I bought exactly the same mobo and the same SAS controller these days. Could it work in PCIE2 (x16) without changes?

The Realtek, I suppose, doesn't work in XPEnology, does it? Is the Intel NIC totally necessary? And what is the thing in PCIE3?

The Realtek NIC showed up under DSM but it got a weird IP address; I didn't try to use it. The SAS controller works without problems, but it has to be flashed to IT mode. PCIE2 (x16) is actually electrically x4, but it's working fine. If DSM doesn't detect the SAS controller, just change the BIOS setting called "primary graphics adapter" (or something similar) from onboard to PCI Express - the ASRock J1900M probably had this problem/feature.

There is an Intel NVMe/Optane drive in the third PCIe slot. Surprisingly, it shows up in the BIOS and can be used as a boot drive, but unfortunately it doesn't show up under DSM. Also, the /dev/dri folder is missing, so no hardware transcoding (software transcoding works), but it could be because of old settings. These HDDs and settings have been used under 3 different AMD CPUs and 4 Intel CPUs with different loaders and DSM versions, so I should probably do a clean install.
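A quick way to confirm that (rather than guessing from the missing transcoding) is to look for the DRI render nodes and the i915 module; a rough sketch using standard Linux paths, run over SSH:

```python
# Check for the Intel GPU render nodes that DSM hardware transcoding relies on.
import glob

nodes = sorted(glob.glob("/dev/dri/*"))
if nodes:
    print("Found DRI nodes:", ", ".join(nodes))
else:
    print("/dev/dri is missing - the i915 driver did not bind to the iGPU")

# Also check whether the i915 module is loaded at all.
with open("/proc/modules") as f:
    i915_loaded = any(line.split()[0] == "i915" for line in f)
print("i915 module loaded:", i915_loaded)
```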

 

My English is pretty shitty so let me know if something is unclear etc.

Edited by Filo301

13 hours ago, Filo301 said:

I'm using an ASRock J3455-M mobo with 8 GB RAM, a Dell H310 SAS controller (PCIe x16 @ x4), an Intel i350-T2 NIC (PCIe x1) and a TeVii S480 DVB-S2 connected via USB + Jun's loader 1.04b with DSM 6.2.3.

 

two points why I doubt your "Dell H310 SAS controller (PCIe x16 @ x4)"

ASRock states its x16 slot (PCIE2) as PCIe x1, and Intel's specs for the six PCIe lanes of that SoC are

1x4 + 1x2
or
4x1
or
2x1 + 1x2 + 1x2

so there is no 1x4 + 2x1 mode listed

https://ark.intel.com/content/www/us/en/ark/products/95594/intel-celeron-processor-j3455-2m-cache-up-to-2-3-ghz.html

so it seems likely you have your SAS controller at PCIe x1 and ~500 MB/s

 

Quote

This mobo requires pins A1 and B17 to be shorted to use an x16 PCIe card in the x1 slot.

I didn't know that and I "wasted" a few hours.

I didn't use a Dell-reflashed controller, and with a PCIe x1-to-x16 adapter I had nothing to change to get it up and running, but good to know

 

Quote

It's still a test build, but it's on the right track.

that depends on the point of view ;-) - it's a nice franken-mod

for me you lost your way when trying to pimp it (but it's a very reasonably priced build)

my choice in the same board size is a Gigabyte B365M HD3

beside having 6x SATA already, it's much more capable when it comes to PCIe:

2x PCIe x16 (1 at x16, 1 at x4)
2x PCIe x1
1x M.2 (also PCIe x4 when used with an M.2-to-PCIe x4 adapter card or an M.2 SATA card)

it also has PCIe 3.0, so when using a controller/chip like the JMB585 it's possible to get enough bandwidth for normal HDDs (e.g. in the PCIe x4 slot or in the M.2 slot)

and in this scenario the PCIe x16 slot is still open for a dual-port 10G NIC (which is how I use it)

depending on what's more important, it's also possible to put the PCIe x8 SAS controller in the x16 slot to get all of its performance (when using it as an all-flash 8x SATA SSD controller, like the LSI SAS2308-based controllers)

btw. there is a patch available to get NVMe SSDs usable in DSM/XPEnology, but in your case the performance of a PCIe-attached SSD seems wasted as it only gets one PCIe lane; a (properly) attached SATA SSD might perform similarly, is easier to integrate into DSM/XPEnology, and costs less

with your build you might not see most of the performance problems as you just use 2x 1 GBit Ethernet, but you will see lower performance when rebuilding or extending a RAID

 

13 hours ago, Filo301 said:

a TeVii S480 DVB-S2 connected via USB

 

How is this working?

How does a PCIe device translate into a "proper" USB VID/PID so it gets detected as a DVB device?

Also, the cable on the TeVii card seems to be attached at the upper edge of the card - where does the wiring go from there?

 

4 hours ago, Filo301 said:

Also, the /dev/dri folder is missing, so no hardware transcoding (software transcoding works), but it could be because of old settings.

 

you would need to use my new 0.11 extra/extra2 to get rid of Jun's old i915 drivers; that way the new i915 drivers from Synology would work

 

On 9/20/2019 at 8:00 AM, jensmander said:

? The H310 is PCIe 2.0 8x. You can’t run it in an x1 slot.

I can assure you it does: I tried it with a PCIe x1-to-x16 adapter (usually sold for miners) and it works without any problems - apart from the terrible performance it will have when reduced to ~500 MB/s. Sadly the good old LSI SAS2008 chip only supports the PCIe 2.0 spec; only the newer LSI SAS2308 supports PCIe 3.0.

 

Edited by IG-88

12 hours ago, IG-88 said:

 

two points why I doubt your "Dell H310 SAS controller (PCIe x16 @ x4)"

ASRock states its x16 slot (PCIE2) as PCIe x1, and Intel's specs for the six PCIe lanes of that SoC are

1x4 + 1x2
or
4x1
or
2x1 + 1x2 + 1x2

so there is no 1x4 + 2x1 mode listed

https://ark.intel.com/content/www/us/en/ark/products/95594/intel-celeron-processor-j3455-2m-cache-up-to-2-3-ghz.html

so it seems likely you have your SAS controller at PCIe x1 and ~500 MB/s

You're right! I assumed it was x4 because it has the pins for PCIe x4, but it's actually x1 (checked via lspci), which makes sense: 3 PCIe slots at x1 + PCIe x1 for the integrated Realtek NIC, so 4x1.
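For reference, the negotiated link width can also be read straight from sysfs instead of digging through lspci -vv; a minimal sketch (standard PCI attributes, so it should behave the same on DSM):

```python
# Print negotiated vs. maximum PCIe link width and speed for every PCI device
# that exposes the attributes (devices without a link are skipped).
import glob
import os

def read_attr(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    cur_w = read_attr(os.path.join(dev, "current_link_width"))
    max_w = read_attr(os.path.join(dev, "max_link_width"))
    speed = read_attr(os.path.join(dev, "current_link_speed"))
    if not cur_w or cur_w == "0":
        continue
    print(f"{os.path.basename(dev)}: x{cur_w} (max x{max_w}) @ {speed}")
```

On this board the H310 should then show up as x1, even though the slot is physically x16.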

On 9/20/2019 at 8:10 AM, Filo301 said:

The ASRock J3455M that looks promising has a PCIe x16 slot, but it's electrically x1

I forgot about that.

12 hours ago, IG-88 said:

I didn't use a Dell-reflashed controller, and with a PCIe x1-to-x16 adapter I had nothing to change to get it up and running, but good to know

Actually, the Intel NIC was not working in the PCIe x1 slot. Only the PCIe x1 slots on this motherboard require the two pins to be shorted; I didn't know that.

12 hours ago, IG-88 said:

that depends on the point of view ;-) - it's a nice franken-mod

for me you lost your way when trying to pimp it (but it's a very reasonably priced build)

my choice in the same board size is a Gigabyte B365M HD3

beside having 6x SATA already, it's much more capable when it comes to PCIe:

2x PCIe x16 (1 at x16, 1 at x4)
2x PCIe x1
1x M.2 (also PCIe x4 when used with an M.2-to-PCIe x4 adapter card or an M.2 SATA card)

it also has PCIe 3.0, so when using a controller/chip like the JMB585 it's possible to get enough bandwidth for normal HDDs (e.g. in the PCIe x4 slot or in the M.2 slot)

and in this scenario the PCIe x16 slot is still open for a dual-port 10G NIC (which is how I use it)

depending on what's more important, it's also possible to put the PCIe x8 SAS controller in the x16 slot to get all of its performance (when using it as an all-flash 8x SATA SSD controller, like the LSI SAS2308-based controllers)

btw. there is a patch available to get NVMe SSDs usable in DSM/XPEnology, but in your case the performance of a PCIe-attached SSD seems wasted as it only gets one PCIe lane; a (properly) attached SATA SSD might perform similarly, is easier to integrate into DSM/XPEnology, and costs less

with your build you might not see most of the performance problems as you just use 2x 1 GBit Ethernet, but you will see lower performance when rebuilding or extending a RAID

 

The Gigabyte B365M HD3 is way better, but I wanted to make this build as power-efficient/cool as I could. It will work in the attic, where it's often hot. The J3455 runs very cool, but the LSI SAS2008 does not.

It will be used mostly for Tvheadend (DVB-S), backups and a VM/Docker with Supla + openHAB or Home Assistant (I can't decide which one is better).

I wanted to use an Intel Optane SSD for apps/VMs because it's pretty reliable and I have a few lying around, but it could also be a SATA SSD. I believe it will allow me to hibernate the HDDs more often/for longer.

12 hours ago, IG-88 said:

How is this working?

How does a PCIe device translate into a "proper" USB VID/PID so it gets detected as a DVB device?

Also, the cable on the TeVii card seems to be attached at the upper edge of the card - where does the wiring go from there?

The TeVii S480 is basically two S660s connected to PCIe via a PCIe-USB bridge. I just bypassed the PCIe-USB bridge and connected it directly to internal USB.

[photo: TeVii S480 wired directly to an internal USB header]

12 hours ago, IG-88 said:

you would need to use my new 0.11 extra/extra2 to get rid of Jun's old i915 drivers; that way the new i915 drivers from Synology would work

Thank you! I haven't done that yet because I thought Jun's loader was close to stock Synology. I just want to make this as painless/worry-free as possible.
 

Edited by Filo301

33 minutes ago, Filo301 said:

the LSI SAS2008 does not.

5-15 watts according to the specs; the heatsink is a hint

not sure how many disks you need to connect, but there are the Marvell 88SE9215 (4 ports) and the JMB585 (5 ports), which would have lower power consumption (~1 W)

maybe replace the NVMe SSD with a SATA one and you will have two PCIe x1 slots for 8-10 disks (but the SSD would then be sharing the 500 MB/s with the other disks when they are in use)

you might also be able to shave off some power consumption by using the internal NIC and replacing the 2-port Intel NIC with a 1-port PCIe x1 card; my Intel cards have pretty small chips without heatsinks, and an i211 uses less than 1 W (without the PHY)

or get rid of the PCIe NIC altogether and use a USB NIC?

 

55 minutes ago, Filo301 said:

The TeVii S480 is basically two S660s connected to PCIe via a PCIe-USB bridge. I just bypassed the PCIe-USB bridge and connected it directly to internal USB.

nice out-of-the-box thinking and soldering


28 minutes ago, IG-88 said:

5-15 watts according to the specs; the heatsink is a hint

not sure how many disks you need to connect, but there are the Marvell 88SE9215 (4 ports) and the JMB585 (5 ports), which would have lower power consumption (~1 W)

maybe replace the NVMe SSD with a SATA one and you will have two PCIe x1 slots for 8-10 disks (but the SSD would then be sharing the 500 MB/s with the other disks when they are in use)

you might also be able to shave off some power consumption by using the internal NIC and replacing the 2-port Intel NIC with a 1-port PCIe x1 card; my Intel cards have pretty small chips without heatsinks, and an i211 uses less than 1 W (without the PHY)

or get rid of the PCIe NIC altogether and use a USB NIC?

I need like 3-4 drives in RAID 5 + 3-4 drives for backups. This is a test build, so everything could change. Someone said that I should stay away from Marvell controllers and that LSI is the way to go (which is weird because the DS918+ uses a Marvell controller). This mobo has 2x SATA built in, perfect for SSDs.
I added a heatsink to this Intel NIC, but it's unnecessary. A USB NIC is a good idea. I could remove the integrated Realtek NIC, but I don't need more PCIe slots for now.

Edited by Filo301

7 hours ago, Filo301 said:

Someone said that I should stay away from Marvell controllers and that LSI is the way to go (which is weird because the DS918+ uses a Marvell controller).

for one, there can be bad card builds with no quality control at all; choosing a controller from Syba/IOCrest or any "brand" that has a name to lose might help in this matter

I don't favor them much because they are extremely outdated - no PCIe 3.0 support combined with a low PCIe lane count

a PCIe 2.0 x1 card means 4 ports sharing 500 MB/s (88SE9215), and having just 2 lanes (88SE923x) while blocking a valuable x4/x8/x16 PCIe port for just 4 ports is not so good either

the LSI scales much better; even used controllers still work reliably, and there are a lot of them around, so they can be bought cheap

but I took them out of my new build as the driver does some uncool things, like shifting drives when one is missing, so drives jump "slots" when one is having problems; that can make recovery a mess, and in some cases exchanging drives can go wrong. I also had some weird log messages with IronWolf drives lately and had to connect them to the onboard SATA

besides this, the mpt sas drivers are not native drivers for the 918+, and in some situations (like the 6.2.2 update) you might lose support for the controller

AHCI is better on both of the last points - fixed drive positions and natively built into the Synology kernel. The biggest problem is getting a high port count at a low price while making use of a PCIe x4 or x8 port, or at least PCIe 3.0 support

Marvell and ASMedia don't have PCIe 3.0 AHCI chips at all; the only option is the JMB585 (which I'm now using for my new build and my testing/backup build)

performance is OK for normal HDDs on PCIe 3.0 (2 lanes): about 350 MB/s per port with all 5 ports active, and max. 570 MB/s on a single SATA port (so it's good enough to max out 3 SATA SSDs)

https://forums.unraid.net/topic/78489-pci-e-sata-expansion-card/?tab=comments#comment-806159
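Those figures line up with the raw math; here is a quick check (assuming PCIe 3.0 128b/130b encoding and ignoring DMA/protocol overhead, which is why real-world numbers land a bit lower):

```python
# JMB585 sanity check: a PCIe 3.0 x2 uplink shared by 5 SATA ports.
PCIE3_LANE_MBps = 8.0e9 * (128 / 130) / 8 / 1e6   # ~985 MB/s per PCIe 3.0 lane
uplink_MBps   = 2 * PCIE3_LANE_MBps               # ~1970 MB/s total
per_port_MBps = uplink_MBps / 5                   # ~394 MB/s with all 5 ports busy
sata3_MBps    = 570                               # practical ceiling of a SATA III port

print(f"uplink      : ~{uplink_MBps:.0f} MB/s")
print(f"per port    : ~{per_port_MBps:.0f} MB/s when all 5 ports stream at once")
print(f"single port : capped by SATA III at ~{sata3_MBps} MB/s")
```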

there is also an M.2 version of it (I only tested it briefly; I'm a little worried about the chip not having a heatsink - I guess if I turn my CPU's heatsink 90 degrees I might get some airflow in the M.2 region)

but if you just have a PCIe 3.0 x4 slot, then in terms of port count and performance a SAS2308-based controller is still the best choice

but at least there are options now with the JMB585; let's hope there will be a cheap PCIe x4 bridge/switch for combining two JMB585s to make full use of a PCIe x4 slot

 

btw. there is no indication of 88SE9215 problems in this list:

https://wiki.unraid.net/Hardware_Compatibility#PCI_SATA_Controllers

(and Synology using a 9215 for four disks instead of a 9235 is just sad - it reduces the rebuild speed of the system a lot, but I guess they were looking for the cheapest solution to saturate one or two 1 GBit NICs, and a 9215 is enough for that task)

 

