supermounter

Members
  • Content Count

    28
  • Joined

  • Last visited

Community Reputation

1 Neutral

About supermounter

  • Rank
    Junior Member


  1. Hello. Can anyone advise me whether I can copy (rsync) an entire data volume (not the one where the packages are installed) from xpenology nas1 volume2 to nas2 volume2? I see a lot of @eaDir and @DS_Store folders on my nas1 volume2 and I don't know what problems copying those over to the other nas could cause. Any advice here? (See the rsync exclude sketch after this list.)
  2. After a new reboot, DSM saw the third disk as free and, once initialized, I was able to repair the RAID. The disk's SMART info shows this in its history; some searching on the web suggests it can come from a bad SATA cable... can a SATA cable go bad after 4 years of working fine? While my array repairs, since I don't have an answer from anybody, I'm now working on deleting all @EADIR & @DS_STORE folders from my backup data, in case I need to rsync everything back to my DSM (see the cleanup sketch after this list).
  3. I followed the good advice from @flyride found there: and did a # vgchange -ay followed by # mount -o clear_cache /dev/vg1000/lv /volume2 and all my data was accessible again over SSH. This gave me the way to do a # rsync -avzh /volume2/ /volume1/MOUNTNFS/volume2 and transfer everything to a folder of another nas mounted on the first one (the full sequence is sketched after this list). This took a while, but in the end I now have a copy of all my data. Then I removed my volume 2 completely from DSM and recreated it. Again it took a while for the first consistency check, and afterwards I did a # rsync -avzh /v
  4. Well, here I am suffering a lot. After days of rsync to back up my data to another nas and then rsync again back into the recreated volume 2, I now discover other issues (it seems it was not a good procedure to rsync ALL with the @EADIR and @DS_STORE folders included). All my data is back in my volume 2, but DSM hangs a lot when I use File Station and some folders now have root as owner (a possible ownership fix is sketched after this list). At one point the system hung completely; I requested a reboot over ssh and DSM didn't come back. I did a complete shutdown, removed my 4-disk volume from the nas, restarted the nas with only volume1 and
  5. Like Flyride said, your data is still there, but here it's your DSM file system that is now messy, so starting fresh from another DSM install will maybe give you a better chance to get your data back. Happy Easter to you.
  6. Yes, you are probably right. I was thinking about repairing your array with the spare 8TB disk... as I said, you have nothing to lose now by trying a different approach. If fstab is now messy after adding the new disk, this may be a way to fix your file system with disk 3 once the resync has completed.
  7. May I make a suggestion to you here. If your plan is to upgrade your xpenology, I suggest you shut down your current version, remove your 4-disk array, put in only your 8TB disk and install the newest DSM, but don't create a volume with the 8TB disk. Then shut down the new DSM, put your 4-disk array back, and start your DSM. Let's see if you end up with an array (still degraded) but without filesystem corruption. As you said, 99% of your data is already saved in another backup, so there is only a 1% chance of losing anything here.
  8. Why didn't you just connect the new drive over USB and leave the previous work in peace? With a reboot you are always at risk of losing something if you are already in trouble with a bad disk or a corrupted array. I also don't understand the choice of btrfs for your spare working disk if it's just to recover your data and rebuild your crashed volume; in the end the files will go back into your array, won't they?
  9. I don't understand how you went from
     root@DiskStation:/volume1# cat /etc/fstab
     none /proc proc defaults 0 0
     /dev/root / ext4 defaults 1 1
     /dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
     to now
     root@DiskStation:/volume1# cat /etc/fstab
     none /proc proc defaults 0 0
     /dev/root / ext4 defaults 1 1
     /dev/vg1000/lv /volume1 btrfs 0 0
     It seems your newly mounted volume took the place of your previous volume... but here you chose btrfs instead of ext4 f
     (See the fstab note after this list.)
  10. Yep! I don't want to miss the boat. First I will try to copy everything out, but that will take a while (5.5 TB here) and I need to purchase a spare 6 TB disk (not a good time for the expense, but if we need it...). I can already tell you that btrfs check --init-extent-tree /dev/vg1000/lv returned:
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
  11. Yess! vgchange -ay reports success and the volume is active now, but only mount -o recovery,ro /dev/vg1000/lv /volume2, the variant without specifying the file system, worked. I just ran a test rsync of one of my folders to another basic nas mounted with NFS under volume1, and apparently it goes well (the test is sketched after this list). Do you think there is a way to fix the file system on this volume and then get it back into my xpenology? Thank you Flyride for taking the time to come back to me, really appreciated.
  12. Hello @harmakhis, in my case when I do the check --repair, it can't set up the extent tree or the device tree, and can't open the file system.
      /# btrfs check --repair /dev/vg1000/lv
      enabling repair mode
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      Ignoring transid failure
      Couldn't setup extent tree
      parent transid verify failed o
  13. Hi @flyride, do you think you will have some time to help me too? My volume has crashed, but this one is not ext4, it is btrfs. The RAID array looks good, but lvm shows me:
      # lvs
      LV  VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      lv  vg1000 -wi-a----- 7.26t
      # lvm pvscan
      PV /dev/md3   VG vg1000   lvm2 [5.44 TiB / 0 free]
      PV /dev/md4   VG vg1000   lvm2 [1.82 TiB / 0 free]
      Total: 2 [7.26 TiB] / in use: 2 [7.26 TiB] / in no VG: 0 [0 ]
      # lvm lvmdiskscan
      /dev/md2 [ 163.08 GiB]
      /dev/md3 [   5.44 TiB] LVM physica
  14. Hello, I'm reading this post with great hope as I'm in the same situation but with an SHR-1 RAID, which means I have an lv. Unfortunately, btrfs check --init-extent-tree /dev/vg1000/lv won't fix my issue. Here is my result:
      # btrfs check --init-extent-tree /dev/vg1000/lv
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      parent transid verify failed on 394340270080 wanted 940895 found 940897
      Ignor
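
Sketch for post 1: a minimal rsync that copies a whole data volume while skipping Synology's @eaDir index folders and macOS metadata files (the poster's "@DS_Store" most likely refers to Finder's .DS_Store files). The destination mount point /mnt/nas2_volume2 is a hypothetical path for nas2's volume2.

     # Copy the contents of volume2 to the other nas, excluding index/metadata folders.
     # /mnt/nas2_volume2 is an assumed NFS or CIFS mount of nas2's volume2.
     rsync -avh --progress \
       --exclude='@eaDir/' \
       --exclude='.DS_Store' \
       /volume2/ /mnt/nas2_volume2/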
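
Sketch for post 2: one way to strip @eaDir folders and .DS_Store files out of a backup tree before rsyncing it back. The backup path is a placeholder; list first, delete only once the output looks right.

     BACKUP=/volumeUSB1/usbshare   # assumed backup location, adjust to yours
     # Dry run: show what would be removed.
     find "$BACKUP" -type d -name '@eaDir' -print
     find "$BACKUP" -type f -name '.DS_Store' -print
     # Actual removal.
     find "$BACKUP" -type d -name '@eaDir' -prune -exec rm -rf {} +
     find "$BACKUP" -type f -name '.DS_Store' -exec rm -f {} +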
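
Sketch for post 3: the commands described there, gathered in order with comments. /volume1/MOUNTNFS/volume2 is the poster's NFS-mounted folder from the second nas; the clear_cache option comes from the recovery advice being followed in that thread.

     # 1. Activate the LVM volume group so /dev/vg1000/lv appears.
     vgchange -ay
     # 2. Mount the logical volume; clear_cache makes btrfs rebuild its free-space cache.
     mount -o clear_cache /dev/vg1000/lv /volume2
     # 3. Copy everything out to the second nas, mounted over NFS under volume1.
     rsync -avzh /volume2/ /volume1/MOUNTNFS/volume2
     # 4. After removing and recreating volume2 in DSM, the post ends with another rsync
     #    back into the new volume (that command is truncated in the listing above).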
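
Sketch for post 4: folders that ended up owned by root after an rsync run as root can be handed back to the normal share owner. This is only a sketch; user1, users and /volume2/share1 are hypothetical names, and DSM layers its own synoacl permissions on top, so re-applying share permissions from Control Panel afterwards is the safer finishing step.

     # Return a share to its intended owner after an rsync performed as root
     # (user1, users and /volume2/share1 are placeholders).
     chown -R user1:users /volume2/share1
     # Then re-apply the share's permissions/ACLs from DSM Control Panel,
     # since DSM manages synoacl settings on top of plain Unix ownership.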
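
Note for post 9: what stands out in the second fstab is that the /dev/vg1000/lv line has no mount-options field at all. A syntactically complete btrfs entry would look roughly like the line below; the options shown are generic placeholders, not the ones DSM itself would write.

     # A well-formed fstab line has six fields: device, mount point, type, options, dump, pass.
     /dev/vg1000/lv /volume1 btrfs defaults 0 0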
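
Sketch for post 11: the read-only recovery mount plus a test rsync of a single folder to another nas over NFS, along the lines described there. The NFS server address, export path and folder name are hypothetical.

     # Activate LVM and mount the damaged volume read-only in btrfs recovery mode.
     vgchange -ay
     mount -o recovery,ro /dev/vg1000/lv /volume2
     # Mount a share from the other nas over NFS inside volume1 (address and path are placeholders).
     mkdir -p /volume1/MOUNTNFS
     mount -t nfs 192.168.1.50:/volume1/backup /volume1/MOUNTNFS
     # Test with a single folder before committing to copying everything.
     rsync -avh /volume2/somefolder/ /volume1/MOUNTNFS/somefolder/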