XPEnology Community

RAID 5 btrfs Volume crash...


Mitt27

Question

So my system booted up this morning as per its schedule, but I was immediately greeted by a volume crash notification. The system has been stable for around 5-6 years, with only one drive failure ever, which occurred last year. I have had a look around the forum but can't find any examples that help me troubleshoot my exact issue, and I don't want to head off blindly trying things, as I need to recover the data if at all possible. I have included some screenshots of Storage Manager below, along with the output of a few commands I ran on the command line.

 

Any help would be greatly appreciated, as I am drawing a bit of a blank, having never encountered anything like this in the past.

 

[Screenshots: Storage Manager volume and drive status, plus command-line output]


12 answers to this question



You have a simple RAID 5, so the logical volume manager (LVM) is probably not in use and you won't have VGs. You need to figure out which device your array is. Try a "df" and see if you can match a /dev/md... device to your volume. If that is inconclusive because the volume isn't mounting, check "cat /etc/fstab".

 

See this thread for some options: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/#comment-108013
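A minimal first pass at those checks might look like this (a sketch, not XPEnology-specific; the device names are examples and will differ per system):

```shell
# Match the volume to its backing device: look for a /dev/md... entry in df output
df -h

# If the crashed volume is absent from df (i.e. not mounted), fstab shows
# what the system expects to mount and from which device
cat /etc/fstab

# On mdadm-based NAS systems, /proc/mdstat lists every array and its state
[ -e /proc/mdstat ] && cat /proc/mdstat || echo "/proc/mdstat not available here"
```

On DSM, data volumes are typically /dev/md2 and up (md0/md1 are the system and swap arrays), so the interesting fstab line is usually the one mounting /volume1.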


6 hours ago, flyride said:

Try a "df" and see if you can match /dev/md... to your volume. If that is inconclusive because the volume isn't mounting, "cat /etc/fstab" [...]


So it looks like it's /dev/md2, but df didn't pick it up.

[Screenshot: command-line output showing /dev/md2]
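If df doesn't show it, the next question is whether the array itself is healthy. A hedged sketch of that check (assuming /dev/md2 is the array, as above; harmless if the device isn't present):

```shell
DEV=/dev/md2   # the array identified above; adjust if yours differs

if [ -b "$DEV" ]; then
    # Full array status: state, failed vs. active members, any rebuild progress
    mdadm --detail "$DEV"

    # An assembled array whose filesystem is suspect can sometimes still be
    # mounted read-only for inspection (mount point is an example):
    # mount -o ro "$DEV" /volume1
else
    echo "$DEV is not a block device on this machine"
fi
```

If mdadm reports the array as clean with all members in "active sync", the problem is likely in the filesystem layer rather than the RAID layer.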


The RAID /dev/md2 seems to be OK, and the fstab (mounting) looks good. Having no LVM when using a simple RAID type is normal (I use RAID 6 and don't have an LV); you can see in your fstab that it's not an LV that is mounted, it's md2.

Check your log files; maybe there is a problem with the file system (btrfs) and mounting fails because of it. Try the check/repair for the filesystem on device /dev/md2.
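That check/repair step, sketched with the usual btrfs-progs commands (assumes /dev/md2 from the earlier output and that the volume is unmounted; the repair flag is commented out deliberately because it can make things worse):

```shell
DEV=/dev/md2   # assumed array device from the earlier output; adjust as needed

if [ -b "$DEV" ]; then
    # Recent kernel log lines usually say why the btrfs mount failed
    dmesg | grep -i btrfs | tail -n 20

    # Read-only check first: reports problems without writing anything
    btrfs check --readonly "$DEV"

    # Only after reviewing the read-only results, and ideally with the data
    # imaged or backed up, consider the destructive repair as a last resort:
    # btrfs check --repair "$DEV"
else
    echo "$DEV is not present on this machine; nothing to check"
fi
```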


Hello, I'm reading this post with great hope as I'm in the same situation, but with an SHR-1 array, which means I do have an LV.

Unfortunately, btrfs check --init-extent-tree /dev/vg1000/lv won't fix my issue. 😥

 

Here is my result:

# btrfs check --init-extent-tree /dev/vg1000/lv
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
Ignoring transid failure
Couldn't setup extent tree
parent transid verify failed on 394340270080 wanted 940895 found 940897
Ignoring transid failure
Couldn't setup device tree
extent buffer leak: start 394340171776 len 16384
extent buffer leak: start 394340171776 len 16384
Couldn't open file system
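For what it's worth, when btrfs check cannot even open the filesystem because of parent transid failures, the commonly suggested next steps are a read-only recovery mount from an older tree root and, failing that, btrfs restore to copy data out to other storage. A sketch only (uses /dev/vg1000/lv from the output above; "usebackuproot" needs a reasonably recent kernel, and on older kernels the equivalent option is "-o recovery"):

```shell
LV=/dev/vg1000/lv   # the SHR logical volume from the output above

if [ -e "$LV" ]; then
    # Non-destructive: try mounting read-only from a backup tree root
    mkdir -p /mnt/recovery
    mount -o ro,usebackuproot "$LV" /mnt/recovery

    # If the mount still fails, btrfs restore can copy files out without
    # mounting at all (destination must be separate, healthy storage):
    # btrfs restore -v "$LV" /path/to/other/storage
else
    echo "$LV is not present on this machine"
fi
```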

 

I'm lost again, like my data.

@flyride do you have an idea why mine doesn't want to be fixed?

Thanks for any advice, and sorry to barge into this post started by @Mitt27, but his situation corresponds to 99% of mine; his solution apparently doesn't work for me.


This topic is now closed to further replies.