peterdermeter Posted September 20, 2022 #1

Hi! Unfortunately I have a big problem with my NAS. There is important data on the RAID 5 volume (four 4 TB hard drives). The array itself seems to be fine; it looks more like a problem with the file system (btrfs). I hope someone can help me with my problem (fingers crossed). I'm really grateful for any help. The logs are attached; please let me know if more logs are needed.

logsl.txt
flyride Posted September 20, 2022 #2

I agree, your array seems okay. cat /etc/fstab will tell you how DSM thinks your array should be mounted. Current versions do not mount directly to an array device unless you go through some significant effort. Are you certain you don't have a volume group set up?

This thread should help you with troubleshooting and data recovery on btrfs:
https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107931
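For reference, a minimal set of read-only checks, assuming the data array is /dev/md2 as on a typical single-volume DSM box (device names are illustrative):

    cat /proc/mdstat                  # overall state of all md arrays
    mdadm --detail /dev/md2           # all members should show "active sync"
    vgdisplay --verbose               # "No volume groups found" means no LVM layer

None of these write anything, so they are safe to run before any repair attempt.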
peterdermeter Posted September 20, 2022 (Author) #3

Hi! Thanks for your answer. Apparently I don't have a volume group, because vgdisplay --verbose and lvdisplay --verbose give me the following output:

    root@Synology:~# vgdisplay --verbose
        Using volume group(s) on command line.
      No volume groups found.
    root@Synology:~# lvdisplay --verbose
        Using logical volume(s) on command line.
      No volume groups found.

cat /etc/fstab gives me the following:

    root@Synology:~# cat /etc/fstab
    none /proc proc defaults 0 0
    /dev/root / ext4 defaults 1 1
    /dev/md2 /volume1 btrfs auto_reclaim_space,synoacl,ssd,relatime 0 0

This is how it looks on my machine (hope you don't mind that the screenshot is in German):

[screenshot of Storage Manager]

What advice can you give me? Should I try the commands from the other post?

    btrfs rescue super-recover /dev/md2
    sudo btrfs-find-root /dev/md2
    sudo btrfs insp dump-s -f /dev/md2
    btrfs check --init-extent-tree /dev/md2
    btrfs check --init-csum-tree /dev/md2
    btrfs check --repair /dev/md2

Or is executing a btrfs recovery my best chance? Appreciate your help!
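(As I understand it, btrfs-find-root and dump-s are read-only inspection, while super-recover and the check --init-*/--repair variants rewrite metadata. So a non-destructive first attempt might be a read-only recovery mount:

    mkdir -p /mnt/recovery
    mount -o ro,recovery /dev/md2 /mnt/recovery    # older kernels; newer ones use -o ro,rescue=usebackuproot

If that mounts, I could copy the data off before trying anything that writes.)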
flyride Posted September 20, 2022 #4

There isn't all that much documentation on how to recover btrfs. That thread follows a recovery using commands I compiled over several years. btrfs is supposed to self-heal (which it does most of the time, often without your knowledge), and when it doesn't, there is usually something significantly wrong. That doesn't always mean data is lost, but the filesystem usually cannot be restored to a healthy operating state. There are a few failure modes for which btrfs won't automatically invoke redundancy, which is the purpose of the find-root and tree commands; sometimes those need to be executed for anything to work at all. Almost always, the long-term solution is to recover the files to another device, then delete and rebuild the btrfs filesystem.
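If a read-only recovery mount succeeds, a plain copy to another device is the safest path. If it doesn't, btrfs restore can often pull files off an unmountable filesystem without writing to it. A sketch, assuming a spare disk is mounted at /mnt/usb (the path is illustrative):

    mkdir -p /mnt/usb/rescue
    btrfs restore -v /dev/md2 /mnt/usb/rescue
    # if that fails, point it at an older tree root found with btrfs-find-root:
    # btrfs restore -t <bytenr> -v /dev/md2 /mnt/usb/rescue

Only after the data is copied off would I run the destructive check --repair variants or rebuild the volume.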
advante Posted September 22, 2022 #5 (edited)

Hi, I'm having trouble recovering and mounting a RAID 0 array in Ubuntu. sda, sdb and sdc form one RAID volume (md2), and sdd and sde form another (md3). When I try to mount, I get the error "wrong fs type, bad option, bad superblock". How do I initialize drives 1, 2 and 3?

Edited September 22, 2022 by advante: Additional information
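A sketch of the usual first steps for this under Ubuntu, assuming the disks came out of a Synology/DSM box, where the data partition is normally the third partition on each member (partition numbers are illustrative; verify with lsblk first):

    sudo apt-get install -y mdadm lvm2
    lsblk -o NAME,SIZE,TYPE,FSTYPE                 # identify the data partitions
    sudo mdadm --assemble --scan                   # try auto-assembly first
    # or assemble explicitly, e.g.:
    sudo mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3
    cat /proc/mdstat                               # confirm md2 and md3 are up
    sudo mount -o ro /dev/md2 /mnt

"wrong fs type, bad option, bad superblock" usually means a raw member disk (e.g., /dev/sda) was mounted instead of the assembled /dev/mdX device. And do not "initialize" the drives with mdadm --create: that writes new superblocks over the old array metadata and can destroy the data.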