
Xpenology with 12 drives volume crashing


NeoID


This post is based on ferno's "Don't fill all 12 Bays!" thread.

 

This post is aimed at everyone who has filled all 12 drive bays and is experiencing volume corruption with XPenology. I know that a lot of people do not have this issue, but I'm certain there is a major bug in the current version of XPenology that we need to solve.

 

How to reproduce the issue:

1. Create a new VM in ESXi

Configuration used: ESXi 6.0, 8 GB RAM, LSI SAS 9201-16i in passthrough

2. Fill all 12 drives and wait for XPenology to rebuild

I personally started with 10 drives and then expanded to 12

3. Run a data scrub or simply stress the volume; the crash should happen within a short period of time (less than a day). A command-line sketch for this step is included below the steps.

In my case it happened once the rebuild was done and I started running the data scrub
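
For step 3, if you want to trigger and monitor the scrub from the command line instead of Storage Manager, something like the Python sketch below should do it. Assumptions on my part (not from the original steps): you have SSH/root access, Python is available on the box, and the data volume sits on an md device such as /dev/md2 (check /proc/mdstat first to confirm).

# Minimal sketch: start an md-level "check" pass (roughly what DSM's data scrub
# does underneath) and poll /proc/mdstat to catch members being dropped.
import re
import time

MD_DEVICE = "md2"   # assumption: replace with your data array as listed in /proc/mdstat

def start_scrub(md):
    # Writing "check" to sync_action is the standard Linux md way to start a consistency check.
    with open("/sys/block/%s/md/sync_action" % md, "w") as f:
        f.write("check")

def degraded(mdstat_text):
    # Member status looks like [UUUUUUUUUUUU]; any '_' means a drive has been kicked out.
    return any("_" in m for m in re.findall(r"\[[U_]+\]", mdstat_text))

def watch(interval=60):
    # Print the raw status every minute and flag a degraded array as soon as it appears.
    while True:
        with open("/proc/mdstat") as f:
            text = f.read()
        print(text)
        if degraded(text):
            print("WARNING: at least one array member has dropped out")
        time.sleep(interval)

start_scrub(MD_DEVICE)
watch()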

 

 

I have no idea why this happens. The configuration file seems fine and it's happening on vanilla XPenology. I know that a lot of you don't have this issue, so it may not be an issue on bare metal at all, but if you do, please let me know in this thread! I hope to be able to rule out a few things by asking those of you who experience the same issue to share the following information (mine is below; a small script to gather most of it follows the list). My best guess is that this has something to do with SHR or disk groups...

 

XPenology version: XPEnoboot 5.2-5967.1

Virtualization: Yes, ESXi 6.0 Update 2, VM version: 11

RAID controller: LSI SAS 9201-16i HBA (latest IT firmware)

Disk group or single volume: Single volume and no disk group

RAID type: SHR-2

PSU: Corsair TX750M (750 W)

Short description: The volume crashed when expanding from 10 to 12 drives. It started throwing disks out of the array and suddenly about half of the drives were missing. Data was still readable, but the volume was marked as crashed and set to read-only.
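
If you want to post your details in the same format, here is a rough helper I'd use to collect most of it over SSH; the /etc.defaults/VERSION path and the presence of lspci are assumptions about the DSM build, so adjust as needed.

# Collect DSM version, md array layout and (if available) the storage controllers.
import subprocess

def read_file(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return "(not found: %s)" % path

print("=== DSM version ===")
print(read_file("/etc.defaults/VERSION"))   # assumption: DSM keeps its version string here

print("=== md arrays (/proc/mdstat) ===")
print(read_file("/proc/mdstat"))

print("=== storage controllers (lspci, if present) ===")
try:
    print(subprocess.check_output(["lspci"]).decode())
except (OSError, subprocess.CalledProcessError):
    print("(lspci not available on this build)")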


Can you do another test by rebuilding with all 12 drives at a time?

You mean start from scratch? I'd love to, but I only have one HBA card and that's currently in use... I'll look into it and see if I can get a new rig for testing.

In the meantime I hope more people are able to share their thoughts on this and help me test. :smile:


I've built a number of bare-metal systems with 6 to 20 drives (and am about to experiment with a 24-bay system). These are not 'high end' systems; they tend to use the 4-6 onboard SATA ports plus 8- or 4-port add-in cards (Marvell).

I've had a number of 'volume crashes' as described and have generally found that using a bigger/better PSU seems to solve it. I can imagine that under load (e.g. scrubbing or rebuilding), if the PSU can't deliver the power, then one or more of the drives will 'disconnect' randomly. This could be either the 5 V or 12 V rail getting maxed out. Also take into account that stressing the system will draw more current for the CPU, fans, etc.

I tested the startup vs. running current on the 12 V rail for 4 drives: it was 6 amps dropping to 3 (about right for the drive spec of 12 V @ 700 mA), although it did fluctuate in use. I also ran some tests: an 'n'-drive system was totally stable, with n+1 drives it crashed, and with a higher-spec PSU it was stable again.
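
To make that back-of-the-envelope check concrete, here's a small sketch using the figures above (6 A spin-up / 3 A running for 4 drives, i.e. roughly 1.5 A and 0.75 A per drive on the 12 V rail). The PSU rating and system overhead below are placeholders, not measurements; read them off your own PSU label and hardware.

# Rough 12 V power-budget sketch for a 12-drive box.
DRIVES = 12
SPINUP_AMPS_PER_DRIVE = 1.5      # from the 4-drive measurement above (6 A / 4 drives)
RUNNING_AMPS_PER_DRIVE = 0.75    # ~700 mA per the drive spec
SYSTEM_OVERHEAD_AMPS = 6.0       # assumption: CPU, fans, HBA on the 12 V rail under load
PSU_12V_RAIL_AMPS = 30.0         # assumption: check your PSU's 12 V rating

spinup_total = DRIVES * SPINUP_AMPS_PER_DRIVE + SYSTEM_OVERHEAD_AMPS
running_total = DRIVES * RUNNING_AMPS_PER_DRIVE + SYSTEM_OVERHEAD_AMPS

print("Spin-up draw : %.1f A of %.1f A available" % (spinup_total, PSU_12V_RAIL_AMPS))
print("Running draw : %.1f A of %.1f A available" % (running_total, PSU_12V_RAIL_AMPS))
if spinup_total > PSU_12V_RAIL_AMPS:
    print("Spin-up exceeds the rail rating -- expect random drive dropouts under load")

If the spin-up total comes out above the rail rating, staggered spin-up (where the HBA and drives support it) or a higher-spec PSU is the usual fix.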

So maybe check the spec of the PSU and make sure it can cope with the various demands.

