flyride · April 10, 2020 · #101
@supermounter, if your mdstat output indicates healthy arrays, check out this thread, starting from post #9: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability
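Whether the arrays are "healthy" can be checked mechanically: a member shown as "_" in the status block of /proc/mdstat means that array is degraded. A minimal sketch of that check, run here against a sample capture (on a live box you would read /proc/mdstat directly; the device names are illustrative):

```shell
# Flag md arrays whose status block (e.g. [_UUU]) shows a missing member.
# Parses a sample capture; substitute /proc/mdstat on a real system.
cat > /tmp/mdstat.sample <<'EOF'
md2 : active raid5 sdc5[1] sdd5[3] sdb5[2]
      8776595520 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
md0 : active raid1 sdb1[0] sdd1[1]
      2490176 blocks [2/2] [UU]
EOF

awk '
/^md/        { name = $1 }                # remember the current array name
/\[[U_]+\]$/ {                            # a [U_] status block ends the line
    if ($NF ~ /_/) print name " DEGRADED " $NF
    else           print name " healthy " $NF
}' /tmp/mdstat.sample
# prints: md2 DEGRADED [_UUU]
#         md0 healthy [UU]
```

Any array reported DEGRADED still has redundancy questions to settle before attempting file-system recovery on top of it.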
supermounter · April 10, 2020 · #102
2 minutes ago, flyride said: "@supermounter, if your mdstat output indicates healthy arrays, check out this thread, starting from post #9..."

Yes! vgchange -ay reported success and the volume group is active now, but only the read-only recovery mount, mount -o recovery,ro /dev/vg1000/lv /volume2, worked; the plain mount did not. I just ran a test rsync from one of my folders to a basic NAS share mounted over NFS into volume1, and apparently it went well. Do you think there is a way to repair the file system on this volume and then bring it back into my XPEnology box? Thank you, flyride, for taking the time to get back to me; really appreciated.
flyride · April 10, 2020 · #103
You can try the repair options (post #14 in that thread). But really, btrfs is supposed to self-heal. If it were me, I would probably copy everything off and rebuild the btrfs volume.
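For the "copy everything off" step, an archive copy followed by a verification pass is the safe pattern. A sketch, with throwaway directories standing in for the real volumes (on the NAS the source would be the read-only recovery mount and the target the spare disk; those paths are system-specific):

```shell
# Archive-copy a source tree to a rescue target, then verify the copy.
# Demo uses temporary directories so it is safe to run anywhere.
SRC=$(mktemp -d)   # stands in for the recovery-mounted volume
DST=$(mktemp -d)   # stands in for the spare rescue disk

mkdir -p "$SRC/photos"
echo "sample" > "$SRC/photos/a.txt"

# With rsync available: rsync -aH "$SRC"/ "$DST"/
cp -a "$SRC"/. "$DST"/             # portable equivalent used for this demo

diff -r "$SRC" "$DST" && echo "copy verified"
# prints: copy verified
```

Running the verification (diff -r, or rsync -c on a second pass) matters here because the source file system is suspect; a silent partial copy would defeat the purpose.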
supermounter · April 10, 2020 · #104
Yep! I don't want to miss the gate; first I will try to copy everything off, but this will take a while (5.5 TB here) and I need to purchase a spare 6 TB disk (not a good time for the expense, but if we need it...). I can already tell you that btrfs check --init-extent-tree /dev/vg1000/lv returned:

parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
parent transid verify failed on 394340270080 wanted 940895 found 940897
Ignoring transid failure
Couldn't setup extent tree
extent buffer leak: start 394340171776 len 16384
Couldn't open file system

Is it lost, doctor?
flyride · April 11, 2020 · #105
3 hours ago, flyride said: "If it were me I would probably copy everything off and rebuild the btrfs volume."

Honestly, I don't know. btrfs repair is a bit of a void, even if you search online. My preference is still to copy everything off and rebuild the volume.
jbesclapez (author) · April 11, 2020 · #106
14 hours ago, flyride said: "Honestly I don't know. btrfs repair is a bit of a void even if you search online..."

Hi flyride. I got a new 8 TB drive like you recommended. Is it OK to attach it via SATA to my motherboard and mount it as a single-drive volume2, and then copy my data from volume1 to this new volume2? I plan to do a full NAS reinstall afterwards; easier. I will detach volume2 before doing that... but since it is a single drive, I can easily reattach it to the future new NAS? Thanks
flyride · April 11, 2020 · #107
Sure, you can make a new volume and copy, as long as there are enough slots left to rebuild your array for volume1. I'm not sure why you think reinstalling DSM is easier, though. There is nothing unstable or corrupted about your DSM installation, so nothing is gained by a reinstall, and it adds the risk of keeping your 8TB volume accessible and undamaged through the process.
jbesclapez (author) · April 11, 2020 · #108
5 minutes ago, flyride said: "Sure you can make a new volume and copy as long as there are enough slots..."

Thanks! My DSM version should be updated. I will probably get a new server too! Thanks again!
flyride · April 11, 2020 · #109
I understand. I still advise recovering in place and getting two copies of your data (one on a healthy RAID5, the other on the 8TB drive) before making changes to DSM. This situation exists because excessive risk was accepted (no backups), and now the only copy of your data is barely accessible. A simultaneous data recovery operation and a DSM upgrade is a bad combination. I'm not lecturing, just stating facts. But I'll stop advising on this matter, as it's entirely your choice.
jbesclapez (author) · April 11, 2020 · #110
2 hours ago, flyride said: "I understand. I still advise to recover in place, get two copies of your data..."

flyride, I am sorry, but I restarted the Syno to install the drive. I added the new drive as a new volume, but now it seems I do not see the data anymore. Probably because of the restart.
jbesclapez (author) · April 11, 2020 · #111
After trying to log in with PuTTY I got this:

login as: admin
admin@192.168.1.34's password:
Could not chdir to home directory /var/services/homes/admin: No such file or directory

Just when you think everything is over!!!!!
jbesclapez Posted April 11, 2020 Author Share #112 Posted April 11, 2020 root@DiskStation:/volume1# cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF 1] md3 : active raid1 sde3[0] 7809204544 blocks super 1.2 [1/1] [U] md2 : active raid5 sdc5[1] sdd5[3] sdb5[2] 8776595520 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU] md1 : active raid1 sde2[3] sdb2[0] sdc2[1] sdd2[2] 2097088 blocks [12/4] [UUUU________] md0 : active raid1 sde1[0] sdb1[2] sdd1[3] 2490176 blocks [12/3] [U_UU________] unused devices: <none> root@DiskStation:/volume1# cat /etc/fstab none /proc proc defaults 0 0 /dev/root / ext4 defaults 1 1 /dev/vg1000/lv /volume1 btrfs 0 0 root@DiskStation:/volume1# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/md0 2385528 1085424 1181320 48% / none 1022500 0 1022500 0% /dev /tmp 1027768 1224 1026544 1% /tmp /run 1027768 3012 1024756 1% /run /dev/shm 1027768 4 1027764 1% /dev/shm none 4 0 4 0% /sys/fs/cgroup cgmfs 100 0 100 0% /run/cgmanager/fs Here are some info. Really hope it helps! Quote Link to comment Share on other sites More sharing options...
flyride · April 12, 2020 · #113
When you added the new volume through the GUI, DSM probably rewrote the /etc/fstab file that we customized to get your broken volume to mount. Go back to post #93 and edit it again. Note that you might have a new line in the file for your volume2/md3 that you should leave alone.
supermounter · April 12, 2020 · #114
I don't understand how you went from

root@DiskStation:/volume1# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0

to now

root@DiskStation:/volume1# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs 0 0

It seems your newly mounted volume took the place of your previous volume... but here you chose btrfs instead of ext4 for the newly added volume disk. Maybe you will need to mount your previous volume into /volume2. I may be wrong, but @flyride will check my supposition better than I can, as he said: "you might have a new line in the file for your volume2/md3 that you should leave alone".
supermounter · April 12, 2020 · #115
Why didn't you just connect the new drive over USB and leave the previous work in peace? With a reboot you are always at risk of losing something when you are already dealing with a bad disk or a corrupted array. I also don't understand the choice of btrfs for your temporary rescue disk if it's just to recover your data and rebuild your crashed volume; in the end the data will go back into your array, won't it?
jbesclapez (author) · April 12, 2020 · #116
5 hours ago, flyride said: "When you added the new volume through the GUI, DSM probably rewrote the /etc/fstab file which we customized..."

Yep, that makes sense then! I only have 3 lines in /etc/fstab. Why did the other line totally disappear? Can I reinstall it manually?

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs 0 0
flyride · April 12, 2020 · #117
Ugh, my last post was from my phone and I didn't see that you had posted the contents of your fstab, which shows DSM has crosslinked and confused the volumes. Probably not a good deal; hopefully no damage has been done. I think that is rather unlikely, but it is still too bad. It would have been better to take the initial advice to copy everything off while it was up and running, and not change anything. At this point, please post DSM Storage Manager screenshots of RAID groups and volumes. Also, run this set of commands again and post the output:

# vgdisplay
# lvs
# lvm vgscan
# lvm pvscan
# lvm lvmdiskscan
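A diagnostic set like this is easier to post if it is collected into a single file in one pass. A sketch (the report path is illustrative; each tool is guarded so the script also runs, with placeholders, on a system where the LVM tools are absent):

```shell
# Run a fixed list of LVM diagnostics and gather the output in one report.
# command -v guards each tool so a missing binary is noted, not fatal.
OUT=/tmp/lvm-report.txt
: > "$OUT"
for cmd in "vgdisplay" "lvs" "lvm vgscan" "lvm pvscan" "lvm lvmdiskscan"; do
    printf '### %s\n' "$cmd" >> "$OUT"
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
        $cmd >> "$OUT" 2>&1 || true    # keep going even if one command fails
    else
        echo "(tool not available on this system)" >> "$OUT"
    fi
done
cat "$OUT"
```

Each section of the report is headed by the command that produced it, which keeps a long forum paste unambiguous.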
jbesclapez (author) · April 12, 2020 · #118

root@DiskStation:/# vgdisplay
  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               8.17 TiB
  PE Size               4.00 MiB
  Total PE              2142723
  Alloc PE / Size       2142723 / 8.17 TiB
  Free  PE / Size       0 / 0
  VG UUID               YQVlVb-else-xKqP-OVtH-kU9e-WJPm-7ZWuWt

root@DiskStation:/# lvs
  LV   VG     Attr       LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  lv   vg1000 -wi-a----- 8.17t

root@DiskStation:/# lvm vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg1000" using metadata type lvm2

root@DiskStation:/# lvm pvscan
  PV /dev/md2   VG vg1000   lvm2 [8.17 TiB / 0 free]
  Total: 1 [8.17 TiB] / in use: 1 [8.17 TiB] / in no VG: 0 [0]

root@DiskStation:/# lvm lvmdiskscan
  /dev/md2 [ 8.17 TiB] LVM physical volume
  /dev/md3 [ 7.27 TiB]
  0 disks
  1 partition
  0 LVM physical volume whole disks
  1 LVM physical volume

Here it is... another long list.
jbesclapez (author) · April 12, 2020 · #119
8 hours ago, flyride said: "At this point, please post DSM Storage Manager screenshots of RAID groups and volumes. Also, run this set of commands again and post."

So, what do you think of my answers above?
flyride · April 12, 2020 · #120
9 hours ago, flyride said: "At this point, please post DSM Storage Manager screenshots of RAID groups and volumes."
jbesclapez (author) · April 12, 2020 · #121
14 minutes ago, jbesclapez said: "So, what do you think of my answers above?"

Sorry flyride, I did them but forgot to send them.
flyride · April 12, 2020 · #122
OK, just edit your /etc/fstab to look exactly like post #93. Shut down the NAS. Remove the 8TB drive altogether. Power back up, and hopefully you should be able to get to your data again.
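That fstab edit can also be scripted. A sketch, run here against a sample copy of the file (on the NAS you would operate on /etc/fstab itself, and leave any volume2/md3 line untouched); the recovery entry used is the one supermounter quoted in post #114:

```shell
# Restore the read-only recovery mount entry for /volume1 in fstab.
# Demo works on a sample copy; only the /volume1 line is rewritten,
# so any other entry (e.g. a volume2/md3 line) is left alone.
FSTAB=/tmp/fstab.demo
cat > "$FSTAB" <<'EOF'
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs 0 0
EOF

RECOVERY='/dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0'

# Replace whatever /volume1 entry is present with the recovery entry.
sed -i "s|^/dev/vg1000/lv /volume1 .*|$RECOVERY|" "$FSTAB"
cat "$FSTAB"
```

The ro,noload options matter: they mount the damaged volume read-only without replaying the journal, which is what keeps the recovery copy-off safe.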
jbesclapez (author) · April 12, 2020 · #123
Just now, flyride said: "Ok, just edit your /etc/fstab to look exactly like post #93..."

root@DiskStation:~# vi /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs 0 0

login as: admin
admin@192.168.1.34's password:
Could not chdir to home directory /var/services/homes/admin: No such file or directory
admin@DiskStation:/$ sudo -i

Do you see the message above? It is weird. Also, when I save the fstab, it gets overwritten at reboot.
flyride · April 12, 2020 · #124
Is the 8TB drive out of the system? If so, edit fstab again and reboot.
jbesclapez (author) · April 12, 2020 · #125
Just now, flyride said: "Is the 8TB drive out of the system? If so, edit fstab again and reboot."

The 8TB drive is out of the system. I edited the fstab, did the reboot, and the fstab went back to its previous state, without our work. Did you also notice that when I log on with PuTTY I get this error?

Could not chdir to home directory /var/services/homes/admin: No such file or directory

Any idea what is happening?