polishrob Posted April 26, 2016 #1
Greetings XPEers, I came across a pretty good read this morning and thought I'd share: the new design for the Backblaze Storage Pod 6.0. It's a 4U device that holds 60 drives using off-the-shelf components. They use twelve 5-port backplanes attached to some extremely inexpensive SATA cards to build the 60-drive, 480TB beast. This could be scaled down to something a little more realistic, and I thought it might get some creative juices flowing. The main article is https://www.backblaze.com/blog/open-sou ... ge-server/ It's a bit long, though you can skim straight to the Storage Pod Parts List here: https://f001.backblaze.com/file/Backbla ... s+List.pdf If anyone does anything with this, please leave a comment. I'd be curious to hear what others have to say. Peace and Joy & Bits and Bytes, Rob
andale Posted April 28, 2016 #2
Interesting design, but I doubt you can get much performance out of it once you take a look at the backplanes. Assigning one SATA cable to five ports cuts down the bandwidth. It's different from the "traditional" backplanes in major server systems, which usually use SAS.
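The shared-cable point above is easy to quantify. A minimal sketch, assuming roughly 600 MB/s of usable SATA III host bandwidth and ~180 MB/s sequential throughput for a 7200 rpm drive (both illustrative figures, not from the article):

```python
def per_drive_bw(link_mb_s: float, drives: int, drive_mb_s: float) -> float:
    """Effective sequential MB/s per drive when several drives share one host link.

    Each drive gets an equal slice of the link, capped by what the drive
    itself can deliver.
    """
    return min(drive_mb_s, link_mb_s / drives)

# Five drives behind one SATA III cable, as on the pod backplanes:
shared = per_drive_bw(600, 5, 180)   # link-limited: 120 MB/s per drive
direct = per_drive_bw(600, 1, 180)   # drive-limited: full 180 MB/s
print(shared, direct)
```

So each drive loses roughly a third of its sequential speed behind the multiplier, which matters for rebuilds and local scrubs even if the network is the bottleneck for client traffic.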
zwiter Posted April 28, 2016 #3
Supermicro has a low-priced JBOD chassis (http://www.wiredzone.com/supermicro-racks-kvm-chassis-power-server-chassis-4u-rackmount-cse-847e26-r1k28jbod-10022306). Still, it is a JBOD, so you need another server to drive it, and I wonder whether you could use it with XPEnology. I currently manage a JBOD with 45 HDDs (around 60TB) linked to a ZFS server (OpenIndiana) via an InfiniBand link.
polishrob Posted April 28, 2016 (Author) #4
andale wrote: "Interesting design, but I doubt you can get much performance out of it once you take a look at the backplanes. Assigning one SATA cable to five ports cuts down the bandwidth."
True, but if you're using spinning disks it will probably bottleneck at the network anyway (unless you're using link aggregation). On second thought, $250 would get you 16 disks' capacity with the pod build (one card with 4 backplanes). You could get a used 16-port LSI card for less, with more bandwidth. I guess this pod design doesn't work out after all.
andale Posted April 29, 2016 #5
I have the luxury that one of my customers runs a plant for tool making and engineering. I'm building my own NAS case for 32 HDDs, with 2x SuperMicro backplanes attached to 2x LSI 12Gb/s HBAs. The case is made of anodized aluminium sheets. When it's finished I'll upload some pictures. Btw: they already make PC cases for their own industrial panels, milled from one solid block of aluminium. Pure luxury.
polishrob Posted April 29, 2016 (Author) #6
andale wrote: "I'm building my own NAS case for 32 HDDs, with 2x SuperMicro backplanes attached to 2x LSI 12Gb/s HBAs. The case is made of anodized aluminium sheets."
When modern craftsmanship and technology meet, it's a beautiful thing. Post the pics and the specs; I'm sure we'd all like to see the end result. Have fun!
mats42 Posted May 1, 2016 #7
As stated, the network will be the true bottleneck. For not-too-random read patterns, 2-3 7200 rpm drives will fill a 1Gbit link. If your storage needs aren't that extreme, I'd say 3 cheap 4-port SATA cards (around $35 each) plus the ports on the mainboard give you support for 16-20 drives with most mainboards.
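The "2-3 drives fill a 1Gbit link" estimate above can be checked with quick arithmetic. A sketch, assuming ~94% usable payload on Ethernet and illustrative per-drive figures (~150 MB/s sequential, ~50 MB/s for mixed/semi-random reads); these are assumptions, not measured numbers:

```python
import math

def drives_to_fill(link_mbit: float, drive_mb_s: float = 150.0,
                   efficiency: float = 0.94) -> int:
    """Smallest drive count whose combined throughput exceeds the link.

    link_mbit: nominal link speed in Mbit/s
    drive_mb_s: sustained throughput per drive in MB/s
    efficiency: fraction of the link usable as payload after protocol overhead
    """
    usable_mb_s = link_mbit / 8 * efficiency   # Mbit/s -> usable MB/s
    return math.ceil(usable_mb_s / drive_mb_s)

print(drives_to_fill(1_000))                   # 1: a single sequential drive beats GbE
print(drives_to_fill(1_000, drive_mb_s=50))    # 3: matches the 2-3 drive estimate
print(drives_to_fill(10_000))                  # 8: even 10GbE fills with a handful
```

In other words, anything beyond a handful of drives is capacity, not speed, on a single GbE port.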
andale Posted May 1, 2016 #8
That's correct if you're on a single Gbit connection. But I doubt anyone would build a large system with, say, 20 drives and use it in a SOHO environment. The HDDs alone would cost over 1,500 $/€ if you use 2TB drives similar to WD Reds. And then use it with a plain Gbit card? Errr...
tomtcs Posted December 8, 2016 #9
Just checking in. I own three Storinator storage servers and I'm curious whether you were able to get XPEnology running with all 60 drives.
mattvirus Posted December 8, 2016 #10
In enterprise and service provider networks, 40Gig network connections are very common, and 100Gig connections are becoming more common. I have 2 servers, each with 12x 4TB SAS drives on 12Gb/s SAS cards, connected to a compute array (Cisco UCS) via 4x10G (40Gig) links. I can easily see bandwidth well over 10G, but I have yet to see it saturate my aggregated 40G link.