XPEnology Community

DANGER : Raid crashed. Help me restore my data!



2 minutes ago, flyride said:

@supermounter if your mdstats indicate healthy arrays, check out this thread, starting from post #9

https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability

 

Yess!

 

vgchange -ay now gives success, and the volume group is active.

 

but only mount -o recovery,ro /dev/vg1000/lv /volume2, the one without the file system, worked.

I just ran a test rsync from one of my folders to another basic NAS mounted via NFS into volume1, and apparently it is going well.
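For reference, the sequence that worked here appears to be roughly the following; the folder paths and the rsync flags below are only placeholders, since the exact command was not posted:

# Activate the LVM volume group so /dev/vg1000/lv appears
vgchange -ay

# Mount the volume read-only in recovery mode (no writes to the damaged file system)
mount -o recovery,ro /dev/vg1000/lv /volume2

# Test copy to the NFS-mounted NAS under volume1 (paths and flags are placeholders)
rsync -av --progress /volume2/somefolder/ /volume1/nfs-nas/somefolder/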

 

Do you think there is a way to fix the file system on this volume, and then get it back into my XPEnology?

 

Thank you, Flyride, for taking the time to come back to me; it is really appreciated.


Yep! I don't want to miss the gate; first I will try to copy everything off, but this will take a while (5.5 TB here) and I need to purchase a spare 6 TB disk (not a good time for the expense, but if we need to...).

I can already tell you that btrfs check --init-extent-tree /dev/vg1000/lv returned:

parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
Ignoring transid failure
Couldn't setup extent tree
extent buffer leak: start 394340171776 len 16384
Couldn't open file system

 

Is it lost, doctor?


14 hours ago, flyride said:

 

Honestly, I don't know. btrfs repair is a bit of a void even if you search online. My preference is still to copy the data off and rebuild the volume.

Hi Flyride. I got a new 8TB drive like you recommended.

Is it OK to attach it via SATA to my mobo, set it up as a single-drive volume2, and then copy my data from volume1 to this new volume2?
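The copy itself would presumably be something along these lines, assuming the broken volume is still reachable read-only at /volume1 and the new single-drive volume is mounted at /volume2; the share name is just a placeholder:

# Copy one share at a time from the read-only volume1 to the new volume2
# -a preserves permissions and timestamps, -H keeps hard links, --progress shows progress
rsync -aH --progress /volume1/myshare/ /volume2/myshare/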

I plan to do a full NAS reinstall - easier. I will detach volume2 beforehand... but since it is a single drive, I can easily reattach it to the future new NAS, right?

Thanks


Sure, you can make a new volume and copy, as long as there are enough slots to build your new array for volume1.

 

I'm not sure why you think reinstalling DSM is easier, though. There is nothing unstable or corrupted about your DSM installation, so there is nothing to be gained by doing a reinstall, and it adds the extra risk of having to make sure your 8TB volume stays accessible and undamaged.


5 minutes ago, flyride said:

Sure, you can make a new volume and copy, as long as there are enough slots to build your new array for volume1.

 

I'm not sure why you think reinstalling DSM is easier, though. There is nothing unstable or corrupted about your DSM installation, so there is nothing to be gained by doing a reinstall, and it adds the extra risk of having to make sure your 8TB volume stays accessible and undamaged.

Thanks! My DSM version needs to be updated anyway. I will probably get a new server too! Thanks again!


I understand. I still advise you to recover in place and get two copies of your data (one on a healthy RAID5, the other on the 8TB) before making changes to DSM.  This situation exists because excessive risk was accepted (no backups), and now your only copy of your data is barely accessible.  A simultaneous data recovery operation and a DSM upgrade is a bad combination.

 

I'm not lecturing, just stating facts.  But I'll stop advising on this matter as it's entirely your choice.


2 hours ago, flyride said:

I understand. I still advise you to recover in place and get two copies of your data (one on a healthy RAID5, the other on the 8TB) before making changes to DSM.  This situation exists because excessive risk was accepted (no backups), and now your only copy of your data is barely accessible.  A simultaneous data recovery operation and a DSM upgrade is a bad combination.

 

I'm not lecturing, just stating facts.  But I'll stop advising on this matter as it's entirely your choice.

Flyride, I am sorry, but I restarted the Syno to install the drive. I added the new drive as a new volume, but now it seems I cannot see the data anymore. Probably because of the restart.

 


root@DiskStation:/volume1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid1 sde3[0]
      7809204544 blocks super 1.2 [1/1] [U]

md2 : active raid5 sdc5[1] sdd5[3] sdb5[2]
      8776595520 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]

md1 : active raid1 sde2[3] sdb2[0] sdc2[1] sdd2[2]
      2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sde1[0] sdb1[2] sdd1[3]
      2490176 blocks [12/3] [U_UU________]

unused devices: <none>
root@DiskStation:/volume1# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs  0 0
root@DiskStation:/volume1# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/md0         2385528 1085424   1181320  48% /
none             1022500       0   1022500   0% /dev
/tmp             1027768    1224   1026544   1% /tmp
/run             1027768    3012   1024756   1% /run
/dev/shm         1027768       4   1027764   1% /dev/shm
none                   4       0         4   0% /sys/fs/cgroup
cgmfs                100       0       100   0% /run/cgmanager/fs


Here is some info. I really hope it helps!


When you added the new volume through the GUI, DSM probably rewrote the /etc/fstab file which we customized to get your broken volume to mount.

 

Go back to post #93 and edit it again.  Note that you might have a new line in the file for your volume2/md3 that you should leave alone.
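Post #93 itself is not quoted in this excerpt; judging from the older fstab quoted a bit further down the thread, the customized file presumably looked like the lines below (treat this as an assumption and use whatever post #93 actually contains), with any new volume2/md3 line that DSM added left untouched:

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0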


I don't understand how you went from

root@DiskStation:/volume1# cat /etc/fstab

 none /proc proc defaults 0 0

 /dev/root / ext4 defaults 1 1

 /dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0

 

to now

root@DiskStation:/volume1# cat /etc/fstab

 none /proc proc defaults 0 0

 /dev/root / ext4 defaults 1 1

 /dev/vg1000/lv /volume1 btrfs  0 0

 

It seems your newly mounted volume has taken the place of your previous volume... but here btrfs was chosen instead of ext4 for the newly added volume disk.

Maybe you will need to mount your previous volume at /volume2.

I may be wrong, but @flyride will check my supposition better, as he said: "you might have a new line in the file for your volume2/md3 that you should leave alone".


Why didn't you just connect the new drive over USB and leave the previous work in peace?

With a reboot you are always at risk of losing something if you are already in trouble with a bad disk or a corrupted array.

I don't understand the choice of btrfs for your temporary spare disk if it's just to recover your data and rebuild your crashed volume; in the end the data will go back into your array, won't it?


5 hours ago, flyride said:

When you added the new volume through the GUI, DSM probably rewrote the /etc/fstab file which we customized to get your broken volume to mount.

 

Go back to post #93 and edit it again.  Note that you might have a new line in the file for your volume2/md3 that you should leave alone.

 

Yep, it makes sense then!

I only have 3 lines in /etc/fstab. Why did the other line totally disappear? Can I reinstate it manually?

 

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs  0 0


 


Ugh, my last post was from my phone and I didn't see that you had posted the contents of your fstab, which DSM has crosslinked and confused. Probably not a good deal; hopefully no damage has been done, I think it is rather unlikely, but it's still too bad. It would have been better to take the initial advice to copy everything off while it was up and running and not change anything.

 

At this point, please post DSM Storage Manager screenshots of RAID groups and volumes.

 

Also, run this set of commands again and post.

# vgdisplay
# lvs
# lvm vgscan
# lvm pvscan
# lvm lvmdiskscan

 


 

root@DiskStation:/# vgdisplay
  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               8.17 TiB
  PE Size               4.00 MiB
  Total PE              2142723
  Alloc PE / Size       2142723 / 8.17 TiB
  Free  PE / Size       0 / 0
  VG UUID               YQVlVb-else-xKqP-OVtH-kU9e-WJPm-7ZWuWt

 

root@DiskStation:/# lvs
  LV   VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg1000 -wi-a----- 8.17t

 

root@DiskStation:/# lvm vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1000" using metadata type lvm2

 

root@DiskStation:/# lvm pvscan
  PV /dev/md2   VG vg1000   lvm2 [8.17 TiB / 0    free]
  Total: 1 [8.17 TiB] / in use: 1 [8.17 TiB] / in no VG: 0 [0   ]

 

root@DiskStation:/# lvm lvmdiskscan
  /dev/md2 [       8.17 TiB] LVM physical volume
  /dev/md3 [       7.27 TiB]
  0 disks
  1 partition
  0 LVM physical volume whole disks
  1 LVM physical volume

Here it is... another long list :-)
 

 


8 hours ago, flyride said:

Ugh, my last post was from my phone and I didn't see that you had posted the contents of your fstab, which DSM has crosslinked and confused. Probably not a good deal; hopefully no damage has been done, I think it is rather unlikely, but it's still too bad. It would have been better to take the initial advice to copy everything off while it was up and running and not change anything.

 

At this point, please post DSM Storage Manager screenshots of RAID groups and volumes.

 

Also, run this set of commands again and post.


# vgdisplay
# lvs
# lvm vgscan
# lvm pvscan
# lvm lvmdiskscan

 

So, what do you think of my answers above?


Just now, flyride said:

Ok, just edit your /etc/fstab to look exactly like post #93.  Shut down the NAS.  Remove the 8TB drive altogether.  Power back up and hopefully you should be able to get to your data again.

 

 

 

root@DiskStation:~# vi /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs  0 0
login as: admin
admin@192.168.1.34's password:
Could not chdir to home directory /var/services/homes/admin: No such file or directory
admin@DiskStation:/$ sudo -i

Do you see the message above? It is weird. Also, when I save the fstab it gets overwritten at reboot.
 

 

 


Just now, flyride said:

Is the 8TB drive out of the system?  If so, edit fstab again and reboot.

The 8TB drive is out of the system. I edited the fstab, did the reboot, and the fstab went back to its previous state without our work.

Did you also notice that when I log on with PuTTY I get this error: Could not chdir to home directory /var/services/homes/admin: No such file or directory
Any idea on what is happening?
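A note on the fstab reverting: DSM regenerates /etc/fstab during boot, so rather than relying on fstab, a workaround consistent with what was done earlier in the thread is to mount the volume manually after each reboot, reusing the options from the customized entry (the sb= offset below is copied from that entry and assumed to still be valid):

# Make sure the logical volume is active after the reboot
vgchange -ay

# Mount read-only with the same options as the customized fstab entry from post #93
mount -o ro,noload,sb=1934917632 /dev/vg1000/lv /volume1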

