XPEnology Community

Getting "System Partition Failed" on RAID Group 1 on multiple drives (SATA HDDs) while verifying a new disk to be added. Repair works, but it has happened twice already.


undecided

Question

This is a new XPEnology server running DSM 6.1.7 bare metal on an Intel 4790T with an LSI HBA (LSI 9300-16i) and six 6 TB HDDs in RAID Group 1 (RAID 6, Btrfs). Most of the drives are WDC Red Plus, plus two Seagates. I also have a RAID Group 2 (RAID 6, Btrfs) made up of five 500 GB SSDs, which has been stable. I had an additional EXOS drive set up as a hot spare on RAID Group 1, and when the "System Partition Failed" message appeared, that drive dropped out of being the hot spare. The logs look suspicious:

2023/03/03 02:52:35 Write error at internal disk [12] sector 2153552.
2023/03/03 02:52:35 Write error at internal disk [8] sector 2153552.
2023/03/03 02:52:35 Write error at internal disk [7] sector 2153552.
2023/03/03 02:52:34 Write error at internal disk [13] sector 2153552.
2023/03/03 02:52:26 Write error at internal disk [14] sector 2153552.

How can all 5 drives fail on the same sector?
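(For anyone digging into this over SSH: below is a minimal Python sketch, not DSM tooling, that lists each md array and any missing or failed members by parsing /proc/mdstat. It assumes Python 3 and shell access are available, and the usual DSM layout in which md0 is the system partition mirrored across every data disk, i.e. the array behind the "System Partition Failed" warning.)

    #!/usr/bin/env python3
    # Minimal sketch: parse /proc/mdstat and flag md arrays with missing or
    # failed members. Assumes the usual DSM layout where md0 is the system
    # partition (a RAID 1 spanning every data disk) and md2+ hold the data
    # volumes; "System Partition Failed" refers to a lost md0 member.
    import re

    def mdstat_summary(path="/proc/mdstat"):
        with open(path) as f:
            text = f.read()
        # Each array block starts with e.g. "md0 : active raid1 sda1[0] sdb1[1] ..."
        for block in re.split(r"\n(?=md\d+ :)", text):
            if not block.startswith("md"):
                continue
            name = block.split()[0]
            members = re.findall(r"(\w+)\[\d+\](\(F\))?", block)
            failed = [dev for dev, flag in members if flag]   # "(F)" marks a failed member
            status = re.search(r"\[([U_]+)\]", block)         # e.g. [UUUUU_] = one slot missing
            missing = status.group(1).count("_") if status else 0
            print(f"{name}: {len(members)} members, {missing} missing, failed: {failed or 'none'}")

    if __name__ == "__main__":
        mdstat_summary()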

 

Thx

11 answers to this question

6 minutes ago, undecided said:

How can all 5 drives fail on the same sector?

Sometimes it can be power micro-cuts. Have you tried swapping the cables (SATA/power) between the disks? Check each connection from the board to the disks and make sure every plug is seated tightly. Try it.
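One way to check for cabling or power trouble before physically reseating everything is to look at the SMART counters: CRC errors usually point at cables or power, while reallocated or pending sectors point at the disk itself. Below is a rough Python sketch under a few assumptions: smartctl is present on the box (DSM bundles smartmontools), the drives show up as /dev/sda, /dev/sdb and so on, and behind some HBAs you may need to add "-d sat" to the smartctl call.

    #!/usr/bin/env python3
    # Rough sketch: print the SMART attributes that separate cabling/power
    # problems (CRC errors) from genuine media problems (reallocated/pending
    # sectors). Assumes smartctl is installed and the drives are /dev/sda..sdp;
    # adjust the device list, and add "-d sat" if your HBA needs it.
    import subprocess

    ATTRS = ("UDMA_CRC_Error_Count", "Reallocated_Sector_Ct", "Current_Pending_Sector")

    def report(dev):
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if any(attr in line for attr in ATTRS):
                print(f"{dev}: {line.strip()}")

    if __name__ == "__main__":
        for letter in "abcdefghijklmnop":
            report(f"/dev/sd{letter}")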

16 minutes ago, Mary Andrea said:

Sometimes it can be power micro-cuts. Have you tried swapping the cables (SATA/power) between the disks? Check each connection from the board to the disks and make sure every plug is seated tightly. Try it.

Interesting. It was the same 5 disks both times, and the weird thing is that they all failed on the same sector. My SATA power comes from different modular PSU cables, so I am pretty sure they do not share one cable: the cable with the most plugs has 4 SATA power connectors on it, yet 5 drives failed simultaneously. I will double check, though.

4 minutes ago, undecided said:

I am pretty sure they do not share one cable…

Hmm, sharing a connection is not necessarily the problem; quality varies between manufacturers, connectors, etc. My fix for micro-cuts, persistent cache problems and slow syncs was mining-style splitter cables -->

[Image: cable-minieri.png - the splitter cable referred to above]


Are you really on DSM 6.x, or 7.x? In my experience 7.1 is more stable and faster to resync. Maybe upgrade to 7.1 the easy way with ARPL: unplug your groups/volumes (and keep your 6.x USB loader just in case), then do a fresh install on the same server with a new USB stick and one new HDD. After installation, if the new DSM works OK, reconnect one group/volume at a time; DSM 7.1 will recognize a migration and ask whether you want to keep the old configuration. Say OK, and it loads the old HDDs' volume into the new DSM in no more than 3 minutes. Repeat for each group, and in no more than 15 minutes the migration to 7.1 is rebuilt. I did it and it works; in my case (14 TB) the post-migration verification took 8 hours, whereas on 6.x it took 2 days. So it is a viable path.
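Before powering down and unplugging volumes for a migration like this, it is worth confirming that every md array reports a clean state. A small sketch under the assumptions that mdadm is on the PATH, you have root/SSH access, and the arrays use the usual /dev/md* naming:

    #!/usr/bin/env python3
    # Small sketch: print the reported state of every md array so you can confirm
    # everything is "clean" before shutting down and unplugging volumes for a
    # loader migration. Assumes mdadm is on the PATH and root/SSH access.
    import glob
    import subprocess

    def array_state(md):
        out = subprocess.run(["mdadm", "--detail", md],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "State :" in line:
                return line.split(":", 1)[1].strip()
        return "unknown"

    if __name__ == "__main__":
        for md in sorted(glob.glob("/dev/md[0-9]*")):
            print(f"{md}: {array_state(md)}")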


6.1.7. I also found a post on STH about possibly needing to plug the additional PCIe power connector into the 9300-16i, which I have not done, since the card sits in a PCIe x16 slot that should supply enough power, but who knows. I was very nervous about going to 7.1; I have been on 6.x for years.
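If the HBA is being starved of power (or running too hot), that tends to show up in the kernel log as controller resets or aborted commands before anything appears in DSM's own log. A quick sketch, assuming SSH/root access (dmesg typically needs root on DSM); the patterns below are just common mpt3sas and block-layer error strings, not an exhaustive list:

    #!/usr/bin/env python3
    # Quick sketch: grep the kernel ring buffer for mpt3sas (the 9300-16i driver)
    # resets, aborted tasks, and generic block-layer I/O errors. The pattern list
    # is a guess at the usual suspects, not exhaustive.
    import re
    import subprocess

    PATTERN = re.compile(r"mpt3sas|task abort|I/O error|blk_update_request", re.IGNORECASE)

    if __name__ == "__main__":
        log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
        hits = [line for line in log.splitlines() if PATTERN.search(line)]
        print(f"{len(hits)} matching lines")
        for line in hits[-20:]:   # show only the most recent matches
            print(line)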

