XPEnology Community

Raid 5 btrfs - cannot mount




Unfortunately I have a big problem with my NAS. There is important data on the Raid 5 volume (consisting of 4 4TB hard drives). The raid seems to be fine, more likely a problem with the file system (btrfs). I hope someone can help me with my problem (fingers crossed).

I'm really grateful for any help.  The logs are attached. Please let me know if more logs are needed.
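For completeness, this is roughly how I checked that the array itself is healthy (md2 and sda are assumptions based on a typical DSM layout; adjust to your own device names):

```shell
# Show all assembled md arrays and their sync state
cat /proc/mdstat

# Detailed view of the data array (md2 is the usual DSM data array,
# but confirm against the mdstat output first)
mdadm --detail /dev/md2

# SMART health summary for each member disk (sda shown as an example)
smartctl -H /dev/sda
```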



I agree, your array seems okay.


cat /etc/fstab will tell you how DSM thinks your array should be mounted.

Current versions do not mount directly to an array dev unless you go through some significant effort.  Are you certain you don't have a volume group setup?
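A quick way to check (all read-only, so safe to run as root):

```shell
# List physical volumes, volume groups and logical volumes, if any exist
pvs
vgs
lvs

# Show every block device with its filesystem type and mountpoint,
# which also reveals whether md2 is used directly or via LVM
lsblk -f
```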


This thread should help you with troubleshooting and data recovery on btrfs:



Hi! Thanks for your answer. Apparently I don't have a volume group, because vgdisplay --verbose and lvdisplay --verbose give the following output:


root@Synology:~# vgdisplay --verbose
    Using volume group(s) on command line.
    No volume groups found.
root@Synology:~# lvdisplay --verbose
    Using logical volume(s) on command line.
    No volume groups found.


cat /etc/fstab gives me the following:


root@Synology:~# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/md2 /volume1 btrfs auto_reclaim_space,synoacl,ssd,relatime 0 0


This is how it looks on my machine (hope you don't mind that the screenshot is in German):






What advice can you give me? Should I try the commands from the other post?
btrfs rescue super-recover /dev/md2
btrfs-find-root /dev/md2
btrfs inspect-internal dump-super -f /dev/md2
btrfs check --init-extent-tree /dev/md2
btrfs check --init-csum-tree /dev/md2
btrfs check --repair /dev/md2


Or is executing a btrfs recovery my best chance? Appreciate your help!
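Before any of the --init or --repair commands above, the safest first attempt is a read-only recovery mount. A sketch (the option name depends on the kernel version, which I'm not sure of on DSM):

```shell
# Read-only mount using a backup tree root.
# 'recovery' is the option name on older kernels (before ~4.6),
# 'usebackuproot' on newer ones.
mount -o ro,recovery /dev/md2 /volume1
# or, on newer kernels:
mount -o ro,usebackuproot /dev/md2 /volume1

# If neither works, inspect the superblock before attempting repairs
btrfs inspect-internal dump-super -f /dev/md2
```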


There isn't all that much documentation on how to recover btrfs.  The thread walks through a recovery using commands I compiled over several years.


It's supposed to self-heal (which it does most of the time, often without your knowledge), so when it doesn't, something is usually significantly wrong.  That doesn't always mean data is lost, but btrfs usually cannot be restored to operating the filesystem in a healthy mode.  There are a few failure modes for which it won't automatically invoke redundancy; that is what the find-root and tree commands are for, and sometimes they must be run before anything works at all.


Almost always, the long-term solution is to recover the files to another device, then delete and rebuild the btrfs filesystem.
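The recovery step can be done without ever mounting the broken filesystem, using btrfs restore. A sketch (/mnt/backup is a placeholder for your destination, which must be a different, healthy device with enough free space):

```shell
# Copy files off the damaged filesystem without mounting it;
# -v prints each file as it is recovered
btrfs restore -v /dev/md2 /mnt/backup

# If the default tree root is damaged, point restore at an
# alternate root reported by btrfs-find-root; BYTENR stands for
# whatever block number that tool printed
btrfs restore -t BYTENR -v /dev/md2 /mnt/backup
```

Verify the recovered files before deleting anything, since restore silently skips what it cannot read.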


Hi, I'm having trouble recovering and mounting the raid0 array in Ubuntu.

sda, sdb and sdc form one RAID volume (md2), and sdd and sde form another (md3). When I try to mount, I get the error "wrong fs type, bad option, bad superblock". How do I get drives 1, 2 and 3 initialized?
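For what it's worth, the usual sequence for bringing Synology-created arrays up under Ubuntu looks roughly like this (the md2/md3 numbering is taken from the post above; the names Ubuntu assigns after assembly may differ):

```shell
# Install the RAID and LVM tools, then scan for and assemble
# any arrays described in the member disks' metadata
sudo apt install mdadm lvm2
sudo mdadm --assemble --scan

# Check what came up and which filesystem each array holds
cat /proc/mdstat
sudo blkid /dev/md*

# Mount read-only, letting mount use the type blkid reported
sudo mount -o ro /dev/md2 /mnt
```

The "wrong fs type, bad option, bad superblock" error often just means the array wasn't assembled yet and mount was pointed at a bare member disk.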



