XPEnology Community

8-10 disk NAS with 10GBe


mgrobins

Question

Hi,

 

I'm after some technical assistance having dug around a lot of posts here :smile:.

 

A bit of background driving my interest in Xpenology...

 

I currently use a DS1813+ 8 bay NAS and am looking at shifting from 1GBe to 10GBe networking for my main file transfers. I do not do VM work or run a lot of other stuff on my NAS - it will be for file storage and backup/serving files.

 

I use SSDs for storage and as scratch disks in my PC (useful info in terms of bottlenecks).

 

I don't use link aggregation as I am predominantly using a single target. I was hoping SMB Multichannel (SMB MC) would become more mainstream, and even though it can be enabled in the latest DSM beta, it seems Microsoft has neutered it in Windows 10 by killing off teaming in all but enterprise editions (Intel's NIC software will allow it properly again, but that's still being worked on).

 

I've found very little info on how this will play out so I'm examining a 10GBe link as part of my next NAS upgrade.

 

There is no reasonably priced 8-bay unit in Synology's lineup that offers 10GBe and supports BTRFS (the only one that does is also EOL).

 

I decided to look at Xpenology as I like DSM and thought it would be robust, with less admin overhead than running FreeNAS or another server OS.

 

 

Given that the nearest Synology options sell for about AU$3.5k, I don't mind paying a bit for hardware - I am just quite stumped as to what hardware I should be going after. I've done a lot of reading, but knowing the ins and outs of different models isn't easy.

 

So... if I lay out some requirements, I hope people can offer advice.

 

 

1. RAM - I'm looking at ECC to reduce the risk of introduced errors. (I mention this up front as it influences motherboard and CPU choices.) I see a lot of discussion of its relative merit, but not much with regard to BTRFS and ECC specifically.

 

 

2. CPU - if I use ECC RAM, must the CPU be a Xeon? (Intel for best compatibility with future DSM upgrades?) I want a low-power choice that still has enough grunt to push decent data rates across 10GBe to/from my PC (I'll use mechanical drives in the NAS, SSDs in the local PC).

 

3. Motherboard.

Needs sufficient SATA ports, or at least 2 PCIe slots for a 10GBe card and an HBA for the extra SATA connections. A cheap but reliable option.

I did see this one - ASRock X99 Extreme3: http://www.umart.com.au/newsite/goods.php?id=29070

It has 10 SATA 3 ports onboard plus an M.2 slot (not sure how I can store the DSM package for booting, etc., even after reading around here :smile:).

It also has PCIe 3.0 slots for a 10GBe card and a SATA HBA if I want to expand further later on.

 

4. SATA HBA - depending on the motherboard I may need one. Looking for a reliable option that would also be readily available if a replacement were ever needed.

 

5. Case: initially looking at hot-swap drive bays, but that's likely not essential if I number everything and set up my connections in sequence (so I can find the drive to swap out, etc.). I don't really have room for a rack mount.

 

Mid tower or smaller case - I have been looking for one I could fit two Icy Box 5-bay cages into (each takes 3x 5.25" bays), or three of the 3-bay ones (2x 5.25" each). I'd likely put feet on the side and lay the case down.

Alternatively, one I can just install 10-12 drives into internally would be OK.

 

I found a Sharkoon ages ago, but it seems to be unavailable now.

Simple and cheap - the smaller the better :smile:.

 

6. PSU: would a standard 400W ATX unit be fine for my needs?

 

 

7. 10GBe card - Intel X540 (RJ45) dual-port NIC (I need two cards - one for the PC and one for the NAS). The aim is a direct connection with no switch, while continuing to run my existing 1GBe LAN for internet, 1GBe links from the laptop to the NAS, etc. This is feasible, yes? I'd prefer to avoid purchasing a 10GBe switch at this point :smile:. (There's a rough addressing sketch just after this list.)

 

http://www.ebay.com.au/itm/NEW-Intel-X5 ... SwiONYNP-9

 

8. Disks: likely to continue running 4TB drives in RAID 6. The cost per drive is OK and will come down by the time I need to expand further. Dual-disk redundancy gives me some protection against a URE on rebuild. I keep a backup of the NAS data anyway (it's just a PITA to copy and rebuild again vs. replacing a disk). I did consider RAID 10, but I want an expandable array, and I'd rather have a separate backup target in any case, to cover hardware failures other than a disk.
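To spell out the plan in point 7 (the subnet, interface and share names below are purely illustrative): the two 10GBe ports get their own small subnet, separate from the existing 1GBe LAN, and transfers are pointed at that address.

    # Switchless point-to-point 10GBe link - addressing sketch (illustrative).
    # The existing 1GBe LAN keeps its own subnet for internet and the laptop.

    # NAS end (in DSM: Control Panel > Network > Network Interface, or manually):
    ip addr add 10.10.10.1/24 dev eth2

    # Windows 10 PC end (adapter name is whatever the X540 port is called):
    netsh interface ip set address "10GbE" static 10.10.10.2 255.255.255.0

    # Map the share via the 10GBe address so transfers use the fast link:
    #   \\10.10.10.1\share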

 

 

Have I missed anything? Given my overall goals of reliability and simplicity, I'd be looking for hardware that is 100% compatible with the mainstream build of Xpenology.


9 answers to this question



Ok,

 

this is what I have found to purchase new or second-hand.

 

1. Xeon E5 6 core 1.6GHz

2. 8GB DDR4 ECC RAM

3. X99 Socket 2011-3 motherboard (10 onboard SATA, plus expansion PCI-e)

4. Cheapest GPU I can find for initial setup

5. Mid tower case

6. Intel X540-T2 10GBe cards.

7. 450-500W PSU

8. 4TB HDDs

 

Xpenology (DSM 6.X) running BTRFS.

 

How does that look? I want to use ECC, so I am somewhat limited in CPU and thus mainboard options...



1.) You don't need an SSD as a system drive. DSM is always installed on all HDDs. You can use the SSD after your initial installation for caching.

 

2.) Not sure about the mainboard's SATA ports (mixed or pure Intel ICH/PCH?). Many boards have mixed SATA controllers (Intel & Marvell, for example). If this is the case you should modify your grub.cfg according to your mapping: say you have 6x Intel and 4x Marvell ports, then you'd change the entry SataPortMap=1 to SataPortMap=64 (6 ports on the first controller, 4 on the second).
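For reference, in Jun's 1.0x loaders that entry lives in the sata_args line of grub.cfg on the boot USB stick. A sketch of the edited line is below; the values other than SataPortMap are whatever your image already has, so treat them as placeholders.

    # grub.cfg on the loader USB stick (Jun's 1.0x) - edit the sata_args line.
    # SataPortMap=64 means 6 ports on the first SATA controller, 4 on the second.
    set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=64 SasIdxMap=0'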

 

3.) Intel's X520-DA/TA series can lead to some problems if you use OEM SFP+ modules in the card. I had these issues with two Intel X520s connected to an HP 5406zl: the driver couldn't be activated/loaded. When I switched to original Intel SFP+ modules, everything worked fine.
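As an aside, on a generic Linux box the ixgbe driver can be told to accept third-party SFP+ modules via its allow_unsupported_sfp module parameter; whether that is practical under a given Xpenology boot image is another matter, and original Intel modules remain the safe choice.

    # Generic Linux sketch: let the ixgbe driver accept non-Intel SFP+ modules.
    # Persist the option, then reload the driver (this drops the link briefly).
    echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
    rmmod ixgbe && modprobe ixgbe allow_unsupported_sfp=1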

 

4.) The overall performance and throughput depend on your setup (RAID level, MTU and what kind of HDDs you use). I run three "big" bare-metal systems with 10Gbit connections. I usually get 500-800MB per second on large files. By its nature, performance drops when it comes to big loads of small files :cool:
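On the MTU point: jumbo frames (MTU 9000) are worth testing on a point-to-point 10GBe link, as long as both ends use the same value. DSM exposes this when editing the interface under Control Panel > Network; on a plain Linux box the equivalent is roughly the following (the interface name is illustrative).

    # Enable jumbo frames on the 10GBe interface (the name eth2 is illustrative).
    ip link set dev eth2 mtu 9000
    # Confirm the new MTU took effect.
    ip link show dev eth2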


I don't use link aggregation as I am predominantly using a single target. I was hoping SMB Multichannel (SMB MC) would become more mainstream, and even though it can be enabled in the latest DSM beta, it seems Microsoft has neutered it in Windows 10 by killing off teaming in all but enterprise editions (Intel's NIC software will allow it properly again, but that's still being worked on).

 

From what I understand, you don't actually need any link aggregation for SMB Multichannel. It should work by simply having 2 or more network cards in your server and PC, with a different IP for each NIC.
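For what it's worth, on the Samba side SMB Multichannel is a single (still experimental) [global] option; the snippet below is a generic Samba sketch rather than a documented DSM setting, and the PowerShell cmdlet is just a way to check from the Windows 10 client that multiple connections are actually being used.

    # Generic Samba smb.conf sketch (experimental option, not DSM-specific):
    [global]
        server multi channel support = yes

    # On the Windows 10 client, verify multichannel connections exist:
    #   PS> Get-SmbMultichannelConnection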



Thanks for the feedback :smile:.

 

From a parts-replacement / rebuild point of view, would it be better to use a common and easily found HBA instead of onboard SATA?

How does DSM / Xpenology handle hardware failure - is it flexible enough about hardware changes to keep the disk volumes accessible?

 

 

 

With the X99 chipset you have 10x SATA 3 ports natively. Not all boards offer 10 though.

 

As for the NICs, I am looking at the X540 cards, which are 10GBase-T (I think that's right for 10GB?) and as such use RJ45 over Cat-6a (Cat-5e is potentially OK for my very short 10m run too). Thanks for the heads-up though - if I go down the SFP+ path it's good to know the pitfalls.

 

I would really like to use SMB MC, as I believe it would give me a good interim solution until I *must* upgrade due to space limitations in my current NAS.

 

I do know that it is working in the current DSM beta, so it might be on the table in the near future for Xpenology users :smile:.

 

If I delay my build I will end up with a better solution, I think: a smaller system that still has the CPU power to ensure my drive array is the limiting factor, not processing.

Xeon-D looks like a good choice, with some boards on the way (and they include native 10GBe SFP+ support). I'd find a decent HBA that is robustly supported by Xpenology and go from there. It would allow a smaller overall build and lower power use, I think. I'd be up for a new board, but could still go second-hand for the SFP+ modules, HBA, etc., I should think.

 

With second-hand parts, the build I listed above worked out at AU$2,500 with 5x 4TB disks, vs AU$3,700 for the equivalent 12-bay Synology NAS that interested me (diskless). DIY is sure a cost saver!



I am building a 10-bay NAS just like what you need.

 

1. Xeon E3-1220L v3

2. 8GB DDR3 UDIMM ECC RAM (RDIMM is not supported)

3. mATX C224 motherboard (2x SATA 3.0 onboard, 4x SATA 2.0 via mini-SAS, and 8x SAS via mini-SAS from the onboard LSI 2008). This is a mainboard from a GreatWall R320 server; you can buy it on Taobao.

4. Display card: VGA on the mainboard

5. Mid tower case - SilverStone KL04 or TJ04-E. If you choose the KL04, buy two 1-to-4 SATA power converter cables.

6. Networking on the mainboard: one 10Gb SFP+ port (Intel 82599), 2x 1000M Ethernet (Intel i210, not supported by DSM 6.0.2 with Jun's mod 1.01), and 1 IPMI Ethernet port.

7. PSU: SeaSonic G-550W

8. HDD capacity: 8x 3.5" and 6x 2.5" in the case, and the case has 4x 5.25" optical bays for expansion.

There is a PCIe 2.0 x8 slot on the mainboard for expansion; I use it for a PCIe Ethernet card.

 

I bought all of this in China for about US$350; everything is second-hand except the PSU.

 

Hope it is useful for you.



Thank you for the build info :smile:.

 

How did you find the Xpenology setup and compatibility? Which version are you running?

 

I have modified my options a bit, due to trouble finding what I was after and because the initial build was far more than I needed.

 

My current option was:

 

1. Fractal design Node-804 http://www.fractal-design.com/home/prod ... s/node-804

2. mATX motherboard that has ECC support and IPMI or onboard GPU (Xeons have no GPU, correct? And I need a Xeon if I want ECC, yes?)

- maybe something like this - http://www.umart.com.au/newsite/goods.php?id=35856

3. Intel X540-T2 10GBase-T NIC (dual RJ45 connections)

4. RAID card / HBA to support more SATA drives (most boards support 4, so I'd need a card with 2 SAS connectors, I think, for 8 ports)... something cheap and easily available, like an OEM model or similar off eBay?

5. PSU: small and reliable... easy. 450W most likely.

6. CPU cooler - do Xeons require active cooling?

7. RAM: 4 or 8GB ECC.


Thank you for the build info :smile:.

 

How did you find the Xpenology setup and compatibility? Which version are you running?

 

I am using Jun's mod 1.01 boot image, and the DSM version is 6.0.2 (testing with 6.1 failed).

The only thing I set up was /etc.defaults/synoinfo.conf:

I changed the disk number to 14 disks.
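For anyone repeating this, the usual edit is maxdisks plus the port bitmasks in /etc.defaults/synoinfo.conf. A rough sketch with illustrative values is below; back the file up first, DSM updates can rewrite it, and the eSATA/USB masks must not overlap the internal one.

    # /etc.defaults/synoinfo.conf - illustrative values for a 14-disk layout.
    maxdisks="14"
    # 14 internal ports -> 14 set bits = 0x3fff.
    internalportcfg="0x3fff"
    # esataportcfg / usbportcfg may then need shifting above bit 14 so they do
    # not overlap the internal mask; exact values depend on platform defaults.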

 

 

There are a few downsides to my mainboard:

1. The Intel i210 1G Ethernet ports do not work under DSM 6.0.2; I think it is because there is no driver in the boot image.

2. No front USB support.

3. The CPU temperature sensor does not read correctly, which makes the fan speed too high and noisy, so I use a fanless cooler.

 

I have modified my options a bit, due to trouble finding what I was after and because the initial build was far more than I needed.

 

My current option was:

 

1. Fractal design Node-804 http://www.fractal-design.com/home/prod ... s/node-804

That is an awesome mATX case.

 

2. mATX motherboard that has ECC support and IPMI or onboard GPU (Xeons have no GPU, correct? And I need a Xeon if I want ECC, yes?)

- maybe something like this - http://www.umart.com.au/newsite/goods.php?id=35856

 

Xeons have no GPU; if you want video output, you should choose a GPU card or a mainboard with integrated VGA.

Only Xeons support ECC RAM, but you should pay attention to the RDIMM vs UDIMM difference per your mainboard spec.

 

My mainboard spec is here:

http://www.greatwall.cn/Public/ServerProduct.aspx?id=fbca17f1-35df-420e-80a1-4e83a097fd87

3. Intel X540-T2 10GBase-T NIC (dual RJ45 connections)

My choice is SFP+ with fibre optic cable, just because my H3C switch only supports SFP+.

4. RAID card / HBA to support more SATA drives (most boards support 4, so I'd need a card with 2 SAS connectors, I think, for 8 ports)... something cheap and easily available, like an OEM model or similar off eBay?

I think the LSI 9211-8i is a good choice - cheap, and with enough performance.

5. PSU: small and reliable... easy. 450W most likely.

I do think so

6. CPU cooler - do Xeons require active cooling?

Xeons come in many versions with different power consumption.

My choice is a low-power version, the E3-1220L v3; it is only 16W at 1.1GHz, and I use it with a passive cooler - yes, fanless.

7. RAM: 4 or 8GB ECC.

I do think so



Thanks :smile:

 

I think I found the right board to get, but I need to work out if it is compatible with Xpenology :smile:.

 

SUPERMICRO MBD-X10SDV-4C-TLN2F-O

https://www.newegg.com/global/au/Produc ... -_-Product

 

I am particularly interested to know whether the 10GBe is recognised, as it is on the SoC (X540-based, I think).

I'll add an HBA to this (a flashed IBM one off eBay, perhaps).

 

 

*Regarding your 10GBe problems: there were some posts on here stating that a process that prevents data being sent across the 10GBe connection needs to be stopped via a .conf file.

Try reading this: viewtopic.php?f=2&t=29976

 

I was going to go for the Avoton-based one (C2000 CPU), but the Intel erratum in them that forces failures at around two years is too risky.

