RAID5 BTRFS with self-healing and file system scrubbing


mervincm

Question

Hi Folks!

 

I am considering adding an additional server to my lab, and this time going with another open platform (OpenMediaVault, Ubuntu Server 20.04 LTS, maybe TrueNAS Core) to learn something new and to get around a few items I have not been able to get working in XPEnology (16 threads, CPU Turbo).

Nothing else I look at seems to trust BTRFS in RAID 5 while offering the snapshots, self-healing, and file system scrubbing that are so straightforward with Synology/XPEnology.

 

Apparently the generic advice is:

- RAID 5/6 built into BTRFS is not ready for prime time; it is NOT usable for production.

- You can make an mdadm RAID5 and then format it with BTRFS, but that is supposedly a bad idea (unsure why).

 

Does anyone here have a solid understanding of what Synology does differently from the other Linux-based options?

 

 


6 answers to this question

Recommended Posts


Synology does not use btrfs native RAID5 (that is the code that is untrusted).  DSM uses btrfs as a simple filesystem on top of an mdadm RAID (or an LVM comprised of multiple mdadm RAIDs in the case of SHR).
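
For anyone curious, the same layering on a generic distro looks roughly like this. A minimal sketch, assuming four spare disks; the device names, label, and mount point are placeholders:

    # Build a 4-disk RAID5 array with mdadm (this layer provides the redundancy)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Put a plain btrfs filesystem on top of the array, as DSM does
    mkfs.btrfs -L volume1 /dev/md0
    mount /dev/md0 /mnt/volume1

Note that stock btrfs sees only a single device here, so a scrub can detect checksum errors but has no redundant copy to repair them from. As I understand it, Synology's kernel patches bridge that gap by letting btrfs request a reconstruction from the md parity layer, and that missing link is presumably the "bad idea" caveat you ran into.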

 

Snapshots and scrubbing are standard btrfs features.  But I believe Syno has forked the code to add/enhance the self-healing services they advertise.  They may have to keep enhancing it themselves, as I think open-source btrfs development is coasting and falling out of favor on the major Linux distros.
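
For reference, the standard features are one-liners on any btrfs filesystem (the mount point below is a placeholder):

    # Create a read-only snapshot of a subvolume
    btrfs subvolume snapshot -r /mnt/volume1 /mnt/volume1/snap-2021-01-01

    # Start a scrub, which verifies every data and metadata checksum, then check on it
    btrfs scrub start /mnt/volume1
    btrfs scrub status /mnt/volume1

What DSM layers on top, per the above, is the plumbing that turns a detected checksum error into an automatic repair.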

 

But there isn't any other low-cost storage platform available that offers the combination of features you speak of and runs on modest hardware...

Edited by flyride


Given all your constraints, you should probably ask yourself what you want to use 16 threads for. There is no file-sharing workload that requires more than 8 cores for maximum throughput (my system can max out 10GbE with 4C/8T at well under 100%).  Is it to run other VM workloads on the same server?

 

I don't know if anyone has actually done this, but it is technically possible to pass an iGPU through under ESXi 6.7.  It is also possible (although it can be hardware-limited) to pass through NVMe devices.  You might consider testing a switch to ESXi: run DS918+ DSM as a VM and pass through the iGPU/NVMe.  If NVMe passthrough doesn't work on your hardware, you can use an RDM (raw device mapping) to present the disk to the VM as SATA and use it as cache; the performance will be the same.  That would give you 4C/8T for DSM and whatever you have left for VMs.
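
If you go the RDM route, the mapping is created from the ESXi shell with vmkfstools. A rough sketch, with a placeholder device ID and datastore path; check the real device name under /vmfs/devices/disks on your own host:

    # Create a virtual-compatibility RDM backed by the physical NVMe/SATA disk
    # (swap -r for -z if you want physical-compatibility mode instead)
    vmkfstools -r /vmfs/devices/disks/t10.EXAMPLE_DEVICE_ID \
        /vmfs/volumes/datastore1/dsm/nvme-rdm.vmdk

The resulting .vmdk can then be attached to the DSM VM on a SATA controller like any other virtual disk.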

