XPEnology Community

Building a 60 Drive 480TB Storage Server!


polishrob

Recommended Posts

Greetings XPEers,

 

I came across a pretty good read this morning and thought I would share. It's the new design for the Backblaze Storage Pod 6.0, a 4U device that holds 60 drives using off-the-shelf components. They use twelve 5-port backplanes attached to some extremely inexpensive SATA cards to achieve the 60-drive, 480 TB beast. This could be scaled down to something a little more realistic. I thought this might get some creative juices flowing. The main article is https://www.backblaze.com/blog/open-sou ... ge-server/
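For anyone sanity-checking the headline figures, here's a quick back-of-envelope sketch in Python. The 8 TB drive size is just what 480 TB across 60 drives implies, not a number taken from the parts list:

```python
# Back-of-envelope check of the Pod's headline numbers.
# drive_size_tb is an assumption implied by 480 TB / 60 drives,
# not a figure from the Backblaze parts list.
backplanes = 12            # 5-port SATA backplanes
ports_per_backplane = 5
drive_size_tb = 8

drives = backplanes * ports_per_backplane
raw_capacity_tb = drives * drive_size_tb

print(f"Drives: {drives}")                    # 60
print(f"Raw capacity: {raw_capacity_tb} TB")  # 480
```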

 

It's a bit long, though you can skim to the Storage Pod Parts List here: https://f001.backblaze.com/file/Backbla ... s+List.pdf

 

If anyone does anything with this please leave a comment. I would be curious to hear what others have to say.

 

Peace and Joy & Bits and Bytes,

 

Rob


Interesting design, but I doubt that you can get much performance out of this when you take a look at the backplanes. Assigning one SATA cable to five ports cuts down the bandwidth. It's different from "traditional" backplanes in major server systems, which usually use SAS.
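To put a rough number on that, here's a small sketch; the 600 MB/s usable SATA III figure and the ~180 MB/s per-drive sequential rate are ballpark assumptions, not measurements of these backplanes:

```python
# Rough illustration of the bandwidth hit when five drives share one
# SATA lane. Both throughput figures below are ballpark assumptions.
sata3_usable_mb_s = 600    # ~6 Gb/s SATA III minus encoding/protocol overhead
drives_per_lane = 5        # one cable feeding a 5-port backplane
drive_seq_mb_s = 180       # typical 7200 rpm sequential read

per_drive_share = sata3_usable_mb_s / drives_per_lane
print(f"Per-drive share of the lane: {per_drive_share:.0f} MB/s")    # 120
print(f"A single drive can sustain:  {drive_seq_mb_s} MB/s")
print(f"Lane is the bottleneck: {per_drive_share < drive_seq_mb_s}")  # True
```

So with all five drives streaming at once, each gets well under what a single drive can deliver on its own.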


Supermicro has a low-priced JBOD chassis (http://www.wiredzone.com/supermicro-racks-kvm-chassis-power-server-chassis-4u-rackmount-cse-847e26-r1k28jbod-10022306).

 

Still, it is a JBOD and you need another server to handle it, so I wonder if you can use it with XPEnology. I currently manage a JBOD with 45 HDDs (around 60 TB) linked to a ZFS server (OpenIndiana) via an InfiniBand link.


Interesting design, but I doubt that you can get much performance out of this when you take a look at the backplanes. Assigning one SATA cable to five ports cuts down the bandwidth. It's different from "traditional" backplanes in major server systems, which usually use SAS.

 

True, but... if you're using spinning disks it will probably bottleneck at the network anyway (unless you're using link aggregation). On second thought, $250.00 would get you 16 disks' worth of capacity with the Pod build (one card with four backplanes). You could get a used LSI 16-port card for less, with more bandwidth. I guess this Pod design doesn't work out after all.
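Here's the quick cost-per-port math behind that conclusion; the $250 figure and 16-disk count are from the post above, while the used-LSI price is just a placeholder assumption:

```python
# Cost-per-port comparison behind the "doesn't work out" remark.
# pod_cost/pod_ports are the figures quoted above; lsi_cost is an
# assumed price for a used 16-port LSI HBA, not a quoted figure.
pod_cost, pod_ports = 250.00, 16   # one SATA card plus four backplanes
lsi_cost, lsi_ports = 100.00, 16   # assumed used-HBA price

print(f"Pod build: ${pod_cost / pod_ports:.2f} per port")  # $15.62
print(f"Used LSI:  ${lsi_cost / lsi_ports:.2f} per port")  # $6.25
```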


I have the luxury that one of my customers runs a plant for tool making and engineering. I'm building my own NAS case for 32 HDDs, with two Supermicro backplanes attached to two LSI 12 Gb/s HBAs. The case consists of anodized aluminium sheets.

 

When it's finished I'll upload some pictures :smile:

 

By the way, they've already made PC cases for their own industrial panels, milled from one solid block of aluminium. Pure luxury :grin:


I have the luxury that one of my customers runs a plant for tool making and engineering. I'm building my own NAS case for 32 HDDs, with two Supermicro backplanes attached to two LSI 12 Gb/s HBAs. The case consists of anodized aluminium sheets.

 

When it's finished I'll upload some pictures :smile:

 

By the way, they've already made PC cases for their own industrial panels, milled from one solid block of aluminium. Pure luxury :grin:

 

When modern craftsmanship and technology meet, it's a beautiful thing. Post the pics and the stats; I'm sure we would all like to see the end result. Have fun!


As stated, it will be the network that is the true bottleneck.

 

For not-too-random read patterns, 2-3 7200 rpm drives will fill a 1 Gbit link.

If your storage needs aren't that extreme, I would say three cheap 4-port SATA cards (around $35 each) plus the ports from the mainboard, and you've got support for 16-20 drives with most mainboards.
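As a rough sanity check on the "2-3 drives fill a Gbit link" point, here's a sketch; the per-drive figure is an assumption for semi-random reads, and purely sequential streams would need even fewer drives:

```python
# Sketch of how few spinning drives it takes to saturate gigabit Ethernet.
# Both throughput figures are assumptions, not benchmarks.
import math

gbe_usable_mb_s = 110   # ~1 Gbit/s after Ethernet/TCP overhead
drive_mb_s = 50         # assumed per-drive rate for semi-random reads

drives_needed = math.ceil(gbe_usable_mb_s / drive_mb_s)
print(f"Drives needed to fill 1 GbE: {drives_needed}")  # 3
```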


That's correct if you're on one simple Gbit connection. But I doubt that someone would build a large system with, for example, 20 drives and use it in a SOHO environment :wink: The HDDs alone would cost above 1,500 $/€ if you use 2 TB drives similar to WD's Red. And then use it with a simple Gbit card? Errr... :grin:


  • 7 months later...

In enterprise and service provider networks, 40 Gbit network connections are very common, and 100 Gbit connections are becoming more common. :smile:

 

I have two servers, 12x 4 TB SAS with 12G SAS cards, connected to a compute array (Cisco UCS) via 4x10G (40 Gbit) connections. I can easily see bandwidth well over 10G, but I have yet to see it peak my aggregated 40G link.
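A rough comparison of best-case array streaming against the aggregated link shows why; the per-drive rate below is an assumption, not a measurement from this setup:

```python
# Why 12 spinning SAS drives rarely peak a 4x10G bundle.
# drive_seq_mb_s is an assumed best-case sequential rate.
drives = 12
drive_seq_mb_s = 200                  # assumed 4 TB SAS streaming rate
link_gbit = 40                        # 4 x 10G aggregated

array_mb_s = drives * drive_seq_mb_s  # ~2400 MB/s best case
link_mb_s = link_gbit * 1000 / 8      # 5000 MB/s raw

print(f"Array best-case streaming: {array_mb_s} MB/s")
print(f"Aggregated link capacity:  {link_mb_s:.0f} MB/s")
print(f"Link still has headroom:   {link_mb_s > array_mb_s}")  # True
```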

