
To buy smaller actual Synology or build MEGA Xpenology?


Sniper_X

Question

My need is to have a storage target with:

  • Multiple 10GbE or better interfaces (40GbE, InfiniBand, etc.)
  • Have snapshot and replication abilities
  • No "write cliff"  (meaning can be written to continuously with no degradation in speed after cache fills)
  • Be VAAI compliant, etc. (to interface with vSphere infrastructure)
  • Easy to set up and manage (WEB GUI, etc.)
  • Redundant power

 

My "would be nice to have" list includes:

  • Dual controllers
  • Automated storage tiering
  • Badass lookin' chassis :)

 

My minimum expectation on the hardware would be:

  • 24 drive bays
  • Expansion shelves should be possible
  • 12Gb SAS
  • FAST & Healthy amount of SSD cache
  • Motherboard, CPU and RAM combination that is ideal for storage and all background operations & housekeeping activities
  • 2-4 IP (or IB) interfaces for access to storage
  • 1-2 management interfaces (can be 1 or 10Gb)

 

My goal is to spend as little money as possible (knowing that either way it will be a few thousand) and get the most performance possible.

I'm willing to buy an actual Synology target, but would be MORE likely to build an even LARGER Xpenology target for the same money.

 

What I need from this group is to know what works and what doesn't.

I would be willing to put up with a certain amount of manual hacking to make it work, and manage it over the life of the unit, but want to keep that to a minimum.

 

I am also willing to document my build in DETAIL and post my work product online here or on some blog site for all to use.

 

I realize that I'm putting this out to a "rag-tag, fugitive fleet" of Xpenologists, but SURELY we can pull this together, yes?

 

-SX

 

(extra points to those that recognize my quote-reference above)  :)
 


7 answers to this question

Recommended Posts


Sounds like a nice system when it's done. 

Your spec list is a little confusing though.  Let's deconstruct:

  • 10GbE or 40GbE is no problem (dual-port card).  IB is a different animal, even if it runs on the same drivers.  While it may work, I'm not sure you'll find many here who have used it.
  • If you are willing to overbuild such that there is no "write cliff" (your term), then there is no point to a secondary cache.
    I'm with you; my system will r/w a maxed 10GbE interface all day and never drops off.  But it took enterprise SSDs to do it (4x RAID F1 minimum) and no cache.  For the amount of money you are going to invest, I would rather skip the SSD cache and put it toward more RAM instead.
  • What are you trying to accomplish with "dual controllers"?  You'll probably need them for 24 bays, but are you thinking RAID 10 with interleaved controller/drive assignments?  Otherwise there isn't any functional point to that objective either.
  • There are a limited number of SAS controllers that work well with XPEnology.  If that is really a requirement, test carefully or use Synology or another platform.  For me, high-density enterprise SATA SSDs scratch the itch just fine, but I don't have a pile of SAS drives lying around that I feel compelled to use.
  • VAAI compliance is as good as DSM gives, which is "okay."  I suggest you build up a sample XPEnology system or borrow a real Synology, test, and make sure it meets your needs.
  • Redundant power is not managed by DSM and entirely up to the chassis you select.  So it's sort of irrelevant from an engineering standpoint.
  • Snapshots are inherent to DSM, therefore any Synology or XPEnology system you select.  Again, sort of irrelevant from an engineering standpoint.
  • Web GUI is inherent to DSM, ditto the last

You're not going to get automated storage tiering out of DSM (Synology or XPEnology), or out of any free solution. 24 drive bays are possible but problematic with XPEnology, and more feasible with the right Synology unit. Expansion units are well supported on Synology, less so on XPEnology, but they can be made to work if you fiddle.  In my opinion, just build for the number of slots you want and don't make expansion chassis a requirement.

 

As far as getting some practical input on hardware, my best advice is to look through the compatible-hardware and upgrade threads.  Make sure you have post signatures turned on so you can see what many people are using (mine may be a useful starting point for some of what you are looking for).

 

https://xpenology.com/forum/forum/78-dsm-updates-reporting/

https://xpenology.com/forum/topic/12867-user-reported-compatibility-thread-for-dsm-62/

 

Lastly, have you actually used DSM before?  You've posted before, about older hardware you were trying that didn't work.  But have you actually used DSM?  If not, build up a simple XPEnology box and validate ALL the functionality you want before committing to the platform and spending thousands of dollars.

 

Your post title asks whether you should choose XPEnology or Synology.  You're asking a biased audience, so if you are seeking encouragement toward XPEnology, you are likely to get it.  But a system at this level cannot be thrown together without some effort and a deep understanding of DSM; otherwise you probably ought to buy a Synology.  And DSM cannot completely meet your spec list in any form, and it is by no means a premium storage platform on the market.  If you really want all of that, you'll have to look elsewhere.


 

Thanks for taking the time to reply with your thoughts.

Let me also mention that I am very familiar with Synology, its product line, and its capabilities.

My company uses them all the time and may actually become a partner.


That being said, I still wouldn't want to spend the amount of money I would have to spend, even for a discounted unit at the partner pricing Synology offers for internal use.

(Last time I saw those discounts, they weren't deep enough to make it affordable for me at the level of power I want.)

 

I also own a DS2415+ and use it daily.

 

On a side note:

I have been performing board-level electronics repairs for years, and when the CPU bug announcement was made, I actually fixed the CPU bug on my own unit with a drop resistor, before Synology started repairing them officially and before people were posting about what the repair could be.

 

Anyway, back to the topic.


Allow me to answer your questions and comment on things in context below...

 

  • 10GbE or 40GbE is no problem (dual-port card).  IB is a different animal, even if it runs on the same drivers.  While it may work, I'm not sure you'll find many here who have used it.

I have used Infiniband to make storage targets before, but on larger enterprise level installations for clients where read/write speeds to storage were critical.

However, that was about 6 years ago using HP C7000 blade chassis and Mellanox gear.

 

Trivia: most 10GbE cards are actually IB cards flashed with different firmware - at least that used to be the case; I haven't checked in a while.

  • If you are willing to overbuild such that there is no "write cliff" (your term), then there is no point to a secondary cache.
    I'm with you; my system will r/w a maxed 10GbE interface all day and never drops off.  But it took enterprise SSDs to do it (4x RAID F1 minimum) and no cache.  For the amount of money you are going to invest, I would rather skip the SSD cache and put it toward more RAM instead.

I wouldn't be concerned about the "write cliff" (actually an EMC term from the XtremIO days) using these storage targets the way most of us do (i.e., for middle-of-the-road use at home). The use case I'm trying to build for is to provide storage for an enterprise-level lab containing two identical compute pods of 16 server blades each, two racks with a WAN emulator in between, and several proof-of-concept platforms like VDI, VMware Cloud Foundation, etc.

 

I want to show actual real-world performance when I demo these platforms and their functionality.

 

How do you feel about the secondary cache now that you know that?

I didn't think Xpen/Synology had a secondary cache at all, so I'm curious to know more about that.

  • What are you trying to accomplish with "dual controllers"?  You'll probably need them for 24 bays, but are you thinking RAID 10 with interleaved controller/drive assignments?  Otherwise there isn't any functional point to that objective either.

Let me clarify.

When I say dual controllers, I don't mean discrete controller "cards"; I mean dual controllers in the sense of a typical SAN target.

To use a Synology example, the UC3200.

  • There are a limited number of SAS controllers that work well with XPEnology.  If that is really a requirement, test carefully or use Synology or another platform.  For me, high-density enterprise SATA SSDs scratch the itch just fine, but I don't have a pile of SAS drives lying around that I feel compelled to use.

I mentioned this since 12Gb SAS is likely the fastest drive type I can lay my hands on short of going full NVMe (too pricey).

I would consider using SATA disks, but only as a lower tier of storage.

Since storage tiering is highly unlikely if I go the DIY route, it's not much of a factor.

  • VAAI compliance is as good as DSM gives, which is "okay."  I suggest you build up a sample XPEnology system or borrow a real Synology, test, and make sure it meets your needs.

Understood, and VAAI is "good enough" for my purposes.

It would be nice to have other features too, but it's not a big deal.

  • Redundant power is not managed by DSM and entirely up to the chassis you select.  So it's sort of irrelevant from an engineering standpoint.

Oh, I know this.

This was a comment about chassis selection, mentioned to help paint a clearer picture of how I'm approaching this build: with redundancy everywhere.

  • Snapshots are inherent to DSM, therefore any Synology or XPEnology system you select.  Again, sort of irrelevant from an engineering standpoint.
  • Web GUI is inherent to DSM, ditto the last

I also knew these things, but I was just making the list complete.

I can see how my listing it might suggest that I didn't know about it though.

 

Lastly, have you actually used DSM before?  You've posted before, about older hardware you were trying that didn't work.  But have you actually used DSM?  If not, build up a simple XPEnology box and validate ALL the functionality you want before committing to the platform and spending thousands of dollars.

 

Yes, I was (and still am) trying to make this unit work.

It's become quite the quagmire of time for me, and I certainly have gone "down the rabbit hole" trying to make it work.

In fact, it's one of the things that's tipping me over to just biting the bullet and building a real target (or buying one).

 

I would still like to make it go though.

It would be a nice lower tier of storage for me and would perform decently at 10Gb too.

 

 

35 minutes ago, Sniper_X said:

 

Thanks for taking the time to reply with your thoughts.

Let me also mention that I am very familiar with Synology, its product line, and its capabilities.

My company uses them all the time and may actually become a partner.

I also own a DS2415+ and use it daily.

 

Great, that helps.  You would be surprised at the number of people who come along planning to use XPEnology without ever having used DSM.

 

35 minutes ago, Sniper_X said:

Trivia: most 10GbE cards are actually IB cards flashed with different firmware - at least that used to be the case; I haven't checked in a while.

 

Still true, or they have an InfiniBand mode that can be selected.  Thus my comment about the drivers.  I know the Synology Linux driver has support for the InfiniBand features on my Mellanox card, but I really couldn't tell you how well it works, as my home lab has no need for it.

 

37 minutes ago, Sniper_X said:

I wouldn't be concerned about the "write cliff" (actually an EMC term from the XtremIO days) using these storage targets the way most of us do (i.e., for middle-of-the-road use at home). The use case I'm trying to build for is to provide storage for an enterprise-level lab containing two identical compute pods of 16 server blades each, two racks with a WAN emulator in between, and several proof-of-concept platforms like VDI, VMware Cloud Foundation, etc.

 

I want to show actual real-world performance when I demo these platforms and their functionality.

 

How do you feel about the secondary cache now that you know that?

I didn't think Xpen/Synology had a secondary cache at all, so I'm curious to know more about that.

 

I use the term "secondary cache" for SSD cache because all the available RAM in DSM is already used as cache.

 

If I can feed a storage system at 100% of its interface bandwidth and the disks keep up indefinitely, why should I care about cache or interface type?  And SSD cache (particularly the r/w style) creates a concentrated data-loss vector.  Risk for little or no reward, IMHO.

 

I'm not saying don't use SAS (there are controllers that work).  But you have a cost limitation, drives are the majority of your cost, and SAS may not offer any more performance depending on layout.  You never really said how much total storage is needed, but the drive layout obviously factors in (even with SSDs it comes down to bandwidth multipliers and sustained write ability rather than spindle count).

 

Depending on your targets, the value proposition for drives like the Samsung 883 DCT/Micron 5300/Seagate Nytro/Intel S4510 could be compelling.  For example, eight of the above SSDs can handle one 40Gb Ethernet/IB connection.  With RAID F1 that's 26TB usable (using 3.84TB drives).
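To put rough numbers behind that example, here is a back-of-the-envelope sketch. The ~500 MB/s sustained-throughput figure per SATA SSD is an assumption for illustration, not a measured number:

```python
# Back-of-the-envelope check of the 8-drive RAID F1 example above.
# Assumptions (not measured): ~500 MB/s sustained per SATA SSD, 3.84 TB drives,
# and RAID F1 costing roughly one drive's worth of capacity to parity, like RAID 5.

drives = 8
drive_tb = 3.84            # advertised capacity per drive, TB
per_drive_mb_s = 500       # assumed sustained throughput per drive, MB/s

usable_tb = (drives - 1) * drive_tb                    # parity costs ~1 drive -> ~26.9 TB
aggregate_gbit_s = drives * per_drive_mb_s * 8 / 1000  # ~32 Gbit/s before protocol overhead

print(f"Usable capacity: ~{usable_tb:.1f} TB")
print(f"Aggregate array throughput: ~{aggregate_gbit_s:.0f} Gbit/s against a 40 Gbit/s link")
```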

Edited by flyride

flyride,

 

So, I'm ready to build this thing.

I have access to TONS of 40Gb InfiniBand and 10Gb network gear now.

 

To further push the issue, I just had a close call with a large volume on my personal workstation and I've been sitting on this idea long enough.

 

So, let's pick the hardware and make a monster.

 

If you want to help out, let me know and I'll start the build thread.

Edited by Sniper_X
On 10/30/2020 at 12:03 AM, Sniper_X said:

So, I'm ready to build this thing.

Keep in mind that we have no proof XPEnology (at least 6.2.x) can use more than 24 drives. For now that's a hard limit, and as long as no one proves quicknick's claim of up to 60 drives and transfers his loader code from 6.1 (his 3.0 loader, never official but still floating around) to 6.2 (jun's loader), the max is 24; anything above that results in broken RAID sets.
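For reference, the disk count DSM believes it supports is driven by entries in synoinfo.conf, which is what people edit when they try to go past the stock limits. A minimal sketch to see what a running box is currently configured for, assuming the usual DSM 6.x file location:

```python
# Minimal sketch: print the disk-count settings DSM is currently using.
# Assumes the usual DSM 6.x location of the defaults file; run on the NAS itself.
import re

CONF = "/etc.defaults/synoinfo.conf"   # /etc/synoinfo.conf holds the active copy

wanted = {"maxdisks", "internalportcfg", "esataportcfg", "usbportcfg"}
with open(CONF) as f:
    for line in f:
        m = re.match(r'(\w+)="([^"]*)"', line.strip())
        if m and m.group(1) in wanted:
            print(f'{m.group(1)} = {m.group(2)}')
```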

 

Also, XPEnology is a hacked appliance: you are bound (mostly) to the features of the emulated units - DS3615xs, DS3617xs, DS918+ - with no specialties like the UC3200.

Additional drivers can be built in some cases, when the driver compiles against the modded Synology kernel source we have (6.2.2 is the latest available).

In the end you are just building a 3617 on steroids (limited to 16 CPU cores incl. HT, so it might not be that big at all - no 64-core monster, since the kernel we have to use is hardcoded to that limit; the 3615 and 918+ are limited to 8 cores).

But I guess for just moving data between disks and NIC, 16 cores might be enough.

 

If you want to play it safe, just rely on hardware that works with the drivers Synology provides; it does not have to be hardware Synology has on its compatibility list. You can open the *.ko driver with a hex editor, look for "vermagic", and just above it you will find the PCI vendor and product IDs supported (modinfo can deliver the same info).

So for your SAS3 controller you would extract mpt3sas.ko from DSM_DS3617xs_25426.pat (inside hda1.tgz) and look into it.
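A rough scripted version of that check, assuming you have already pulled mpt3sas.ko out of the .pat / hda1.tgz by hand; it just scans the module for the PCI alias strings that modinfo would show:

```python
# Rough equivalent of "modinfo mpt3sas.ko | grep alias": scan the module binary
# for pci:v...d... alias strings, which encode the supported vendor/device IDs.
import re
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "mpt3sas.ko"
with open(path, "rb") as f:
    data = f.read()

ids = sorted({(v.decode(), d.decode())
              for v, d in re.findall(rb"pci:v0000([0-9A-Fa-f]{4})d0000([0-9A-Fa-f]{4})", data)})

for vendor, device in ids:
    print(f"vendor 0x{vendor}  device 0x{device}")
```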

There are additionally built drivers too, but as seen during the DSM 6.2.1/6.2.2 phase, you can be cut off from those added drivers when you need a recent (security-fixed) DSM version.

And newer DSM versions like 7.0 are off limits as long as no new loader is available.

 

Be careful about tying anything important to that system: it can easily break if other people who don't know about XPEnology-specific things do low-level stuff or updates, assuming it's a legit Synology unit.

Edited by IG-88
23 minutes ago, Sniper_X said:

Do you have any information about infiniband driver support in DSM?

Beyond the fact that some of the drivers Synology ships have "ib" in the name? No. These are the ones I see (a quick way to list them on a box yourself is sketched after this list):

ib_addr.ko

ib_cm.ko

ib_core.ko

ib_isert.ko

ib_mad.ko

ib_sa.ko

target_core_iblock.ko

mlx4_ib.ko

mlx5_ib.ko
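
As mentioned above, a minimal sketch for listing the InfiniBand/Mellanox-related modules on a box, assuming DSM's flat /lib/modules layout (as on DSM 6.x):

```python
# Minimal sketch: list InfiniBand/Mellanox-related kernel modules present on a DSM box.
# Assumes DSM keeps its modules flat under /lib/modules (DSM 6.x layout).
import glob
import os

patterns = ("ib_*.ko", "rdma*.ko", "mlx4_*.ko", "mlx5_*.ko", "*iblock*.ko")
found = sorted({os.path.basename(p)
                for pat in patterns
                for p in glob.glob(os.path.join("/lib/modules", pat))})

for name in found:
    print(name)
```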

 

