XPEnology Community

Supermicro with 57 HDDs


zkd


Hi guys

 

First of all, great work and a wonderful solution.

 

I have the following hardware:

 

- Supermicro Motherboard X9DR7-LN4F

- Dual Intel Xeon E5-2609 quad-core CPUs

- 32GB RAM

- LSI SAS 9200 HBA

- 12 Bay SAS/SATA Expander (3.5")

- Supermicro 2U Chassis with redundant PSU

- SuperMicro 45 bay JBOD 4U Chassis (3.5")

- 32GB SATA SSD for booting

 

Basically, the machine came from iXsystems with FreeNAS installed. It works fine, and FreeNAS sees all 57 drives (WDC 4TB SE). I want to use this machine (and 4 others like it) as a storage system for surveillance (300 cameras on each NAS). The cameras don't seem to see the FreeNAS box, so I tested them against a single drive on XPEnology, and they worked great.

 

My problem started when I got my HDDs. I installed them, and the LSI controller sees them fine, but DSM only sees 6. I tried all the solutions listed in the forum, but it will still only see 6. I tried different updates and versions of XPEnology, including DSM 5, and still no luck.

 

I don't know what I'm missing. Any help would be appreciated.


OK, here is the deal so far:

 

First of all, Schnapps, 200+ TB is not that much, considering that each camera will record full HD with sound on motion detection, and I want to keep the recordings for at least 6 months. Multiply all that by 300+ cameras and you will run out of space very quickly... :smile:
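To put rough numbers on it (my assumptions, purely for illustration: about 2 Mbps average per full-HD stream once motion detection gates the recording):

300 cameras x 2 Mbps = 600 Mbps, or about 75 MB/s
75 MB/s x 86,400 s/day = roughly 6.5 TB/day
6.5 TB/day x 180 days = roughly 1.2 PB

So 200 TB would last about a month at that rate.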

 

Dear Stanza, I got it all working. Now I'm facing a new and strange problem. After creating several volumes to test the maximum space the cameras will see (I created 5TB, 9TB, and 15TB), the system for some weird reason (I'm watching this on the main console of the machine) detects a power button pressed and then shuts down.

 

I turned it on twice, and as soon as it finishes the boot sequence, there it is again (power button pressed). I have a feeling the ACPI driver is acting up.

 

BTW I used the synoboot-3827-pre-v1.1_v8_hba.img for this.

 

Any ideas?


You might have hit the 16TB ext4 problem, I would guess.

 

E.g., the software (kernel) versions within XPEnology can't expand an ext4 partition past 16TB...

 

Though you can get around it with other tools... e.g., expand the volume in another OS, then remount it back in XPEnology...

 

Bit of a pain, I know.

 

So you have two options...

 

Either run as you are.... Expand in another OS and remount.

 

Or wait until you have more drives, and create a volume larger than 16TB straight up.

 

Though you will still have a problem later if a drive dies and you need to replace it (e.g., you hit the expansion problem again. :roll: )

 

Not much of a workaround as yet... unless Trantor comes up with updated kernel/tools etc.
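For the "expand in another OS" route, this is roughly what I mean (a minimal sketch; device names illustrative, and it assumes a rescue Linux with e2fsprogs 1.42 or newer, which is where >16TB ext4 support landed):

mdadm --assemble --scan          # bring up the NAS's md RAID arrays
vgchange -ay                     # activate the LVM volume group, if one is used
e2fsck -f /dev/vg1/volume_1      # forced check is required before an offline resize
resize2fs /dev/vg1/volume_1      # grow the filesystem to fill the expanded volume

Note the filesystem also needs ext4's 64bit feature to actually cross 16TB; then shut down and boot back into XPEnology.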

 

.


16TB max?

I don't understand - the Synology website states that the DS3612xs is able to expand a single volume to 180TB.

http://www.synology.com/en-global/produ ... w/DS3612xs

Or is it limited to a maximum of 16TB in one filesystem?

 

Yes, the real one has probably got a later Linux kernel than the ones here... hence they don't have, or have fixed, the expansion problem.
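For background, and this is general ext4 behaviour rather than anything Synology-specific: without the 64bit feature, ext4 uses 32-bit block numbers, so the ceiling is 2^32 blocks x 4 KiB per block = 16 TiB. With the 64bit feature and a new enough kernel/e2fsprogs, the limit is vastly higher, which is presumably how the real DS3612xs reaches 180TB.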

 

I have a 19-drive box waiting for this fix myself... waiting as in not game to transfer everything over to it just yet because of this issue (as some drives are in use in other boxen)... so once I go, I can't go back just yet.

 

.


Surveillance Station only comes with a license for 1 camera. You will have to purchase licenses to use it with all the other cameras.

 

No need; each camera connects to the NAS directly and has NVR capabilities. Also, there are several free NVRs, like Genius Vision NVR and iSpy.


(Quoting Stanza's post above:) "You might have hit the 16TB ext4 problem, I would guess. [...]"

 

Stanza

 

I don't believe that this is the problem, because what I did was a multi-volume RAID and never exceeded 15TB per volume. I also created a single 32TB volume, and it didn't trigger the shutdown.

 

Edit:

 

My bad, it turns out the hard drives overheated. Now I've got the fans blasting, and everything is running great.

 

Except I keep getting "wrong cpu core: 3".

 

Any ideas?


Must admit, I didn't try the multi-volume RAID option yet... which SHOULD get around the problem.

 

E.g., mdadm to pool the 19 x 2TB drives together... then make under-16TB LVM volumes (which are ext4) on top of this array.
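A sketch of roughly what that looks like (device names, partition numbers, and RAID level all illustrative):

mdadm --create /dev/md2 --level=5 --raid-devices=19 /dev/sd[b-t]3   # pool the 19 drives' data partitions
pvcreate /dev/md2                    # put LVM on top of the array
vgcreate vg1 /dev/md2
lvcreate -L 15T -n volume_1 vg1      # each logical volume stays under the 16TB ext4 ceiling
mkfs.ext4 /dev/vg1/volume_1          # repeat lvcreate/mkfs for the remaining volumes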

 

I had the HDD overheating problem too.

And yes, I had the "wrong cpu core" problem, which would be a kernel APIC problem... not sure about a workaround for that, unless there is some grub setting to fix it?
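Purely speculative on my part: if it is an APIC issue, the generic Linux knobs are kernel boot parameters added to the kernel line of the boot image's config (syslinux.cfg / menu.lst, depending on the image), e.g.:

kernel /zImage ... noapic        # or nolapic, or maxcpus=N to cap the visible cores

No idea whether the XPEnology kernel here honours them; treat that as a guess, not a fix.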

 

.


Thanks Stanza

 

That is exactly what I was looking for. BTW, when you do link aggregation, you get load balancing + fault tolerance: it balances the load across all NICs, and if all but one fail, the NAS stays up. It's best if you put each NIC on a separate switch, but if you want, you can have them all on the same switch; the switch then becomes your weak point.
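(DSM sets this up from its own UI, but underneath it is a standard Linux bond; a generic sketch, interface names illustrative. For NICs split across two ordinary switches you want a failover mode like active-backup, since 802.3ad/LACP needs all ports on the same switch or a stacked pair.)

modprobe bonding                                             # load the kernel bonding driver
ip link add bond0 type bond mode active-backup miimon 100    # failover mode, 100 ms link checks
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0       # eth0/eth1 on separate switches
ip link set bond0 up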

 

Thanks again; you are full of wonderful ideas and help. I wish I had met you before.


  • 11 months later...

It's been running for about a year now with no major issues.

 

I did install the 48-HDD expander today and put 20 HDDs in it. All went well, except that any drive above a count of 24 shows as Disk 1, so I have 8 "Disk 1" entries in my disk management.

 

Any ideas how this happened?

 

By the way, I disabled any eSATA presence, and I also disabled all the internal drive controllers; I only have the LSI logic HBA running. I also made the recommended change so that internalportcfg=0xffffffff.
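For anyone following along: internalportcfg is a bitmask with one bit per drive slot, so 0xffffffff only describes 32 slots, and maxdisks has to be raised to match as well; my speculative guess is that slots beyond what the settings describe wrap around to Disk 1. The usual edit (values illustrative and untested here) goes in both copies of the file:

# /etc/synoinfo.conf and /etc.defaults/synoinfo.conf
maxdisks="57"                         # total drive slots DSM should manage
internalportcfg="0x1ffffffffffffff"   # 57 set bits = 57 internal slots
esataportcfg="0x0"                    # no eSATA ports
usbportcfg="0x0"                      # keep the three masks from overlapping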

 

Any ideas?

 

By the way, I would like to thank the person who actually ported this, because I discovered it is extremely resilient. I managed to screw up disk 1, and when I booted again, it actually recovered all the data with a migrate install. So well done, mate.

