
Hi guys, I'm working on building a NAS from an S3420GP server board. I'd like to use 22 drives, so I need two 8-port HBA cards that will work in it, along with the 6 onboard SATA ports. Does anyone know my best bet for Jun's bootloader, and which DSM version I'm best off with? I've tried Jun 1.03b with DSM 3617 build 25426, but that's not working. I added the VID, PID, and MAC to the boot stick and still get nothing past "Happy hacking" / "the screen will stop updating shortly".


LSI SAS cards in IT mode are the usual choice (92xx or 93xx series); the most common is the 9211-8i.

There are also a lot of reflashed OEM versions from IBM and Dell.

 

With just PCIe 2.0 there isn't much possible with good performance. One option would be a 2-lane PCIe AHCI controller with 4 ports in a slot (1000 MB/s shared across 4 drives), which is not that attractive and gets expensive (there are 8-port controllers, but they still share 2 PCIe lanes, so even less bandwidth per drive).
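The numbers behind that point can be sketched quickly. This assumes the commonly cited ~500 MB/s of usable bandwidth per PCIe 2.0 lane; the port counts are the ones mentioned above:

```shell
#!/bin/sh
# Rough per-drive bandwidth for an AHCI controller on 2 PCIe 2.0 lanes.
# PCIe 2.0 gives roughly 500 MB/s of usable bandwidth per lane.
lane_mb=500
lanes=2
for ports in 4 8; do
    # total link bandwidth divided evenly across all ports
    per_drive=$(( lane_mb * lanes / ports ))
    echo "$ports ports: ~${per_drive} MB/s per drive"
done
```

So the 8-port variant drops to roughly 125 MB/s per drive when all drives are busy, which is why it's an even worse deal than the 4-port one.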

 

But as long as you don't get 3617 to work ...

Try to disable EFI boot or enable CSM/legacy in the BIOS; you need to boot from the non-UEFI USB drive entry (there might be a UEFI and a normal/legacy entry shown for the same device).
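Once you get into a Linux shell (e.g. over the serial console), you can verify which way the machine actually booted: the kernel exposes `/sys/firmware/efi` only when it was started via UEFI. A minimal check, with the path made a parameter just so it can be exercised:

```shell
#!/bin/sh
# boot_mode: prints "uefi" if the given sysfs EFI directory exists,
# otherwise "legacy". Defaults to the real /sys/firmware/efi path.
boot_mode() {
    if [ -d "${1:-/sys/firmware/efi}" ]; then
        echo "uefi"
    else
        echo "legacy"
    fi
}
boot_mode
```

If this prints "uefi" on the 1.03b/3617 loader, you're still on the wrong boot entry.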

The alternative might be loader 1.02b for 3617, which boots with both UEFI and CSM, but then you'd be stuck on DSM 6.1. It might be better to figure out how to boot in CSM/legacy mode (I'd expect a 10-year-old system to have that option).

 

Also, read what I wrote about the null-modem cable in your "old" thread:

https://xpenology.com/forum/topic/34416-t5500/?do=findComment&comment=168372

 

Did you give OpenMediaVault a try? It depends what you're after; maybe that will work for you.


The goal is to just build storage on this machine. I need to run dual 8-port "raid" cards along with the onboard SATA to get up to 22 drives; I'm not concerned about high performance. The goal for the T5500 from my original thread was a 10-drive machine that could run Win10 in a VM, where I could build a Plex server. However, I need to experiment on the 3420 to find a combination of loader, DSM, and hardware that will let me use the dual 8-port "raid" cards. I really like XPEnology's ability to run mismatched drive sizes and to add storage without having to rebuild from scratch.

FYI, I currently have both systems running with Jun 1.02 and DSM 3615, but on the 3420, every time I try to add another card to get over 10 drives, or add in an 8-port "raid" card, it starts throwing drive errors and volume crashes. I was just hoping to get a more concrete answer on compatible 8-port cards before I throw any more money at this setup :(

22 minutes ago, Warlock928 said:

i need to run dual 8 port "raid" cards along with the onboard sata to get up to 22 drives.

Not sure what your thing with RAID cards is; DSM is built to not use "hardware" RAID. You would need to force extra drivers into it, and the resulting "virtual" disk DSM sees won't have things like S.M.A.R.T.

I'd suggest using ESXi as a base, running such a controller there (with the comfort of ESXi handling the hardware), and then providing virtual disks to a DSM VM.

 

26 minutes ago, Warlock928 said:

the  goal for the t5500 from my original thread was to build a 10 drive machine that i could run win10 in VM where i could build a plex server.

VMM (Synology's KVM) might not be the best solution for that; I'd suggest ESXi, running both DSM and Win10 as VMs.

Also, there is a native DSM package for Plex, and there might be a Docker image too; using Win10 in a VM seems strange (but I don't use Plex).

 

29 minutes ago, Warlock928 said:

jun1.02 and dsm 3615 ....but on the 3420 every time i try to add another card to get over 10 or add in an 8 port "raid" card it starts throwing drive errors and volume crashes,

Did you read the FAQ and install guide? I'm pretty sure they mention the 12-drive limit of 3615/17 (and the 16-drive limit of 918+).

The patch in the loader tweaks some things on 3615/17, but it does not tweak the drive count, so you get the 12-drive default Synology ships for these units. It can be changed manually, but a bigger update will reset the max back to 12 and crash the RAID volume. (If the patch in the loader handled this the way it does on 918+, it would be checked/applied on boot and would work out of the box with more drives.) There isn't that much need for this, though. It's still on my todo list, but there are other things too. In theory anyone can do it; it's just a matter of understanding how the drive count works and using diff and patch to create a new patch. It needs some basic Linux skills, but not a genius, more the will to do it and some time.
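To make the manual tweak concrete: a hypothetical sketch, assuming the drive limit lives in `maxdisks` and `internalportcfg` in `synoinfo.conf` (as widely described in the forum), where `internalportcfg` is a hex bitmask with one bit per internal slot, i.e. `(2^N - 1)` for N drives. The file paths and key names here are assumptions, and as noted above, a DSM update can silently reset these values back to 12:

```shell
#!/bin/sh
# Compute the synoinfo.conf values for a given internal drive count.
# internalportcfg is a bitmask: bit i set = slot i is an internal port.
max_drives=24
portcfg=$(printf '0x%x' $(( (1 << max_drives) - 1 )))
echo "maxdisks=\"$max_drives\""
echo "internalportcfg=\"$portcfg\""
# These lines would replace the matching entries in /etc/synoinfo.conf
# AND /etc.defaults/synoinfo.conf, e.g. via:
#   sed -i 's/^maxdisks=.*/maxdisks="24"/' /etc.defaults/synoinfo.conf
```

For 24 drives that yields `internalportcfg="0xffffff"`; any esata/usb port masks would have to be shifted accordingly, which is exactly the fiddly part the videos below walk through.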

You can search the forum for the 12-drive limit, and there are good videos on YouTube about it:

https://www.youtube.com/watch?v=2PSGAZy7LVQ

 

BTW, the crashed-volume thing sounds dramatic, but when handled properly it does not result in data loss. If 10 drives of a 22-drive RAID5/6 go missing, the RAID will simply not start, and once the config is manually fixed back to 22, it will start again on the next boot. It gets a little nastier when losing just one or two drives: then the redundancy is gone, and for the pretty long time the drive is rebuilding you are without redundancy (Synology always seems to rebuild the whole drive and uses no log to rebuild only the missing parts, so it takes 6-24 hours instead of minutes).
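The hours-long rebuild estimate follows from the fact that a full rebuild rewrites every sector, so time scales with drive capacity, not with how much data changed. A back-of-the-envelope check, with illustrative numbers (8 TB drive, 150 MB/s sustained rebuild speed):

```shell
#!/bin/sh
# Full-drive rebuild time estimate: capacity divided by rebuild speed.
size_gb=8000        # drive capacity in GB (8 TB, assumed)
speed_mb_s=150      # sustained rebuild throughput in MB/s (assumed)
secs=$(( size_gb * 1000 / speed_mb_s ))
echo "full rebuild: ~$(( secs / 3600 )) hours"
```

That lands around 14 hours for this example; slower drives, background I/O, or larger capacities push it toward the upper end of the 6-24 hour range.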

