XPEnology Community

Newbie: aiming to build a fast NAS that can occasionally run some computational stuff


JJJL

Recommended Posts

Hi all!

I liked my previous Synology NAS / DSM a lot! For the things it was meant for, that is. However, occasionally I think of something crazy, build a script and have it run for days. (The other day I was trying to run complex queries on a > 100GB database; I failed completely :) )

 

Hence I was looking for a new NAS that could double as a server for some hobby computational work, but I couldn't really find what I was looking for. So I got the idea to build something myself.

 

I was thinking of something like the following:

- Xeon processor; the Intel Xeon E3-1230 sounds like good value for money, but I may opt for an E5
- large RAM (at least 32GB) and/or NVMe capable (although I understand from this forum that NVMe is not supported yet?)
- ITX form factor
- 4x 2TB in RAID 10
- ready for 10GbE

 

Does this make sense? What should I look for in a motherboard? Are all (recent) chipsets supported? Would this even fit in an ITX case?

 

My usual NAS tasks:

- storing photos and editing them in Photoshop

- multiple background scripts for home automation 

- recording from a few cameras

- the occasional computational stuff

- VPN server

 

Thank you all for your help!

J

 


I'm running almost everything you're inquiring about.

  • 8-bay U-NAS 810A chassis (8 hot-swap drive bays, MicroATX)
  • SuperMicro X11SSH-F (MicroATX)
  • E3-1230V6
  • 64GB RAM
  • 8x 4TB in RAID 10
  • Mellanox ConnectX-3 dual-port 10GbE
  • 2x Intel P3500 2TB NVMe (these are U.2 drives)

A few items of note: XPenology is running as a VM under ESXi. This allows the NVMe drives to be RDM'd as SCSI, which works fine. Native NVMe doesn't work, as DSM doesn't currently support it for regular storage. The NVMe drives are attached via PCIe U.2 adapters since I don't need the slots; I'm using the motherboard M.2 slot for the ESXi scratch/VM drive. The SATA controller and the Mellanox are passed through to the VM, so DSM is using mostly native drivers, which work fine baremetal or in a VM with the Mellanox card and the C236 chipset.
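
For anyone curious what the RDM step looks like in practice: on the ESXi side you create a mapping file with vmkfstools and attach it to the VM as an ordinary SCSI disk. A minimal sketch, assuming shell access on the ESXi host; the device path and datastore layout below are placeholders, not the exact ones from this build:

```python
import subprocess

# Placeholders -- substitute the real NVMe device identifier and datastore path on your host.
NVME_DEVICE = "/vmfs/devices/disks/t10.NVMe____EXAMPLE_SERIAL"
RDM_POINTER = "/vmfs/volumes/datastore1/xpenology/nvme0-rdm.vmdk"

# 'vmkfstools -z' writes a physical-compatibility RDM pointer file ('-r' would make a
# virtual-compatibility one). The pointer is then added to the VM as a regular SCSI disk.
subprocess.run(["vmkfstools", "-z", NVME_DEVICE, RDM_POINTER], check=True)
```

(You would normally just type the vmkfstools command in the ESXi shell; the Python wrapper is only there to keep the example self-contained.)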

 

So all this is in a package with the footprint of, and just a bit taller than, a DS1817. I'm pretty happy with it, except that the case was the hardest to set up of any server I've ever built.

 

I'm also running another XPenology server on an ITX board in a U-NAS 410, which is a 4-bay ITX chassis.  Works fine but it is running a low-power embedded chip. I'd pay a lot of attention to cooling and cooler compatibility if you want to run a 95W chip in an ITX case.

 

You should know that the DSM code bases that work with XPenology only support 8 threads total (including hyperthreading), so most E5s might not be a fit.
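
As a quick sanity check before buying a CPU, you can compare the logical thread count it presents against that 8-thread cap; a trivial sketch (the cap value is taken from the statement above, not from any API):

```python
import os

DSM_THREAD_CAP = 8  # thread limit of the XPenology-compatible DSM kernels, per the note above

threads = os.cpu_count()  # logical CPUs = cores x hyperthreads
print(f"This CPU presents {threads} threads; DSM would use at most {DSM_THREAD_CAP}.")
if threads and threads > DSM_THREAD_CAP:
    print("The extra threads would go unused (or cap the VM at 8 vCPUs under ESXi).")
```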


Great!! Thanks, this helps a lot. MicroATX it is then :)

 

Quote: "I'm pretty happy with it, except that case was the hardest to set up of any server I've ever built."

 

Because of the case itself, right? Not the VM part. What was the difficult part? (I do not necessarily need the hot-swapping feature, so I could do with a 'simpler' case.)

 

Thanks again!

 

 

 


My comment does refer to the case. The case assembly/disassembly is a bit intricate and requires some careful cable routing for airflow.  And I made it somewhat more complicated by adding in the U.2 drives and doing a custom power supply.

 

That said, my ESXi environment has a lot of tweaks in it as well - you can find the ESXi/NVMe thread on here with a little searching.


Typically ESXi boots from a USB stick or a DOM, then runs from RAM. It needs a place for temporary ("scratch") files, and also for the VM definitions and support/patch files, so you will need some sort of storage for this. You can virtualize your NAS storage or provide it physically (passthrough controller or RDM).

 

My configuration described above has all the storage intended for the NAS configured via passthrough or RDM. None of that is available to ESXi for scratch, so another drive is needed. I use the NVMe slot and a 128GB drive for this, and all it has on it is the VM definitions and scratch files - maybe 30GB in total, which includes some small virtual drives for XPenology testing and virtualized storage for a few other non-XPenology VMs.

 

Sorry if this is overly explanatory, but it sounds like you might be setting up ESXi for the first time.


Hi @flyride, sorry to bother you again (and in parallel). Having seen your comments elsewhere on this forum, I feel a bit unsure about the configuration. Would you be willing to briefly check whether the build below can indeed run ESXi + XPenology as intended? Thanks!

 

(I am now leaning towards RAID 1 for both HDD and SSD as speed is already high when needed (because of the SSDs) and the increased speed of RAID 10 would not really outweigh the costs - do you agree?)

(I was advised that extra cooling in the case would not be necessary - do you agree?)

 

 


This is ambitious. It's cool that you are following my install, but please understand that it's an esoteric and complex configuration with some inherent risk. XPenology has always been a bit fiddly. Be prepared to do a lot of testing and experimentation, and always have backups of your data. Honestly, I would have jumped to FreeNAS if it was necessary to get NVMe working.

 

What does testing and experimentation mean in this context? BIOS vs UEFI. BIOS spinup modes on your drives. BIOS virtualization configuration options. Check ESXi compatibility and stability thoroughly. Upgrade ESXi to the latest major patch level. Try various permutations of VM configurations and do a full XPenology install on each, so you know exactly what works and doesn't. Benchmark array performance on each. Deliberately break your arrays. Reboot them before rebuilding. Fully test hotplug in/out if your hardware supports it. Upgrade DSM. Just because some users have success upgrading a simple setup does not mean there won't be problems with a fully custom configuration.
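
For the "benchmark array performance" step, even a crude sequential-write test run from inside the DSM VM after each configuration change gives you numbers you can compare. A rough sketch; the volume path and sizes are arbitrary assumptions, and a proper tool like fio is better for anything serious:

```python
import os
import time

TEST_FILE = "/volume1/bench.tmp"   # hypothetical DSM volume path
BLOCK = 8 * 1024 * 1024            # 8 MiB per write
COUNT = 256                        # ~2 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())           # force the data onto the array, not just the page cache
elapsed = time.time() - start
print(f"Sequential write: {BLOCK * COUNT / elapsed / 1e6:.0f} MB/s")
os.remove(TEST_FILE)
```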

 

A simple test VM is not a fully adequate simulator, because once you pass through disks, you adopt the risks of a baremetal install.

 

I apologize for the lecture.  I just don't want you committing to hundreds or thousands of dollars of hardware without understanding what you could be getting into.

 

On the equipment manifest:

  • The motherboard should work based on the spec sheet. The case and CPU cooling combo are fine. You might want to review this FreeNAS thread.
  • I'm not sure how the ASUS Hyper M.2 x16 card works - it must have a PCIe switch on-board for it to support 4 drives? It must be supported by ESXi natively. If it is able to see all the drives using the standard ESXi NVMe driver, it should be fine.
  • Performance-wise, there is no practical reason for an NVMe SSD RAID10. NVMe SSDs will read crazy fast in RAID1 (>1 gigaByte per second), but they will probably throttle on sustained writes without active cooling. You might want RAID5/6/10 for capacity, or to use some lower capacity/less expensive sticks, which will also reduce (delay) the cooling issue. This is really silly talk though!
  • To be clear, 1GBps (capital B = bytes) of disk throughput is 8x the performance of 1Gbps (small b = bits) Ethernet. If you don't have a 10GbE or 40GbE network setup, the NVMe performance is wasted. (The sketch after this list puts rough numbers on this and on the RAID scaling point below.)
  • RAID10 performance (and capacity, obviously) on HDDs scales linearly with the number of drives.  4xHDD in RAID10 is roughly 2x the speed of 2xHDD in RAID1.
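
To spell out the bits-vs-bytes arithmetic and the RAID scaling rule of thumb from the last two points, here is a back-of-the-envelope sketch; the drive and network speeds are illustrative assumptions, not measurements from this build:

```python
# Illustrative figures only -- substitute your own drive and network specs.
nvme_raid1_read_GBps = 1.0     # ~1 gigabyte/s sequential read, as mentioned above

# 1 byte = 8 bits, so 1 GB/s of disk reads needs ~8 Gbit/s of network to be useful.
print(f"{nvme_raid1_read_GBps:.1f} GB/s of disk reads ~ {nvme_raid1_read_GBps * 8:.0f} Gbit/s "
      "on the wire (vs 1 Gbit/s GbE, 10 Gbit/s 10GbE, 40 Gbit/s 40GbE)")

# RAID10 throughput scales roughly with the number of mirrored pairs (the '2x' rule above):
hdd_MBps = 180                 # assumed sequential speed of a single HDD
for drives in (2, 4, 8):       # RAID1, 4-drive RAID10, 8-drive RAID10
    pairs = drives // 2
    print(f"{drives} HDDs -> roughly {pairs * hdd_MBps} MB/s sequential (ideal case)")
```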

Full disclosure item:

I have not set up the required NVMe pRDM passthrough configuration using 1.04b/6.2.1 yet.  I'm intentionally still running 1.02b/6.1.7 and intend to do some careful testing before committing to a newer DSM.  I can't think of a reason it won't continue to work technically.  One area needing attention is how to present the pRDM devices to DSM.  I've documented how SAS emulation can be used on 6.1.7 to match up native device SMART functionality with DSM's internal smartctl query arguments. This no longer works with 6.2.1. It seems that SATA emulation is the only reliable option.
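
If you end up testing how the pRDM devices present under different emulation modes, the SMART behavior is easy to probe from an SSH session on the DSM box. A minimal sketch, assuming smartctl is available (DSM ships it) and using a placeholder device path; '-d sat' is just one device type worth trying:

```python
import subprocess

DEVICE = "/dev/sdb"  # placeholder; point this at an actual array member on your box

# '-d sat' asks smartctl to use SATA (SCSI-to-ATA translation) passthrough; if the full
# attribute table comes back, the emulation layer is preserving native SMART data.
result = subprocess.run(["smartctl", "-d", "sat", "-a", DEVICE],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```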


Wow! Thanks for taking the time!

 

I currently have DSM running and really like it, specifically the apps (for both DSM and iOS, e.g. the photo app, auto backup of Google Drive and OneDrive, etc.) and the stability. But I did not like the hardware performance/price ratio.

 

Your story sounds like an interesting challenge :) but also one that makes me want to double-check. I will do a bit more research on FreeNAS in the coming days. I always thought FreeNAS was storage only (not the apps mentioned above), but maybe that has changed?

