XPEnology Community

supermounter

Member
  • Posts

    28
  • Joined

  • Last visited

Posts posted by supermounter

  1. 3 hours ago, supermounter said:

     

    In your case, what result do you get from the command  # vgchange -ay ?

     

    Sorry, forget the previous post I made; I was wrong there.

     

    I think you may not be working on the correct device here, since you have an SHR volume where md2 is only the underlying disk device.

    You need to work on the logical volume created on top of the RAID array; that's why I asked you to run # vgdisplay -v to see it.

     

    Have you tried yet to mount the volume read-only, without its file system cache? # mount -o clear_cache /dev/vg1000/lv /volume1

     

     

  2. Hello.

    Can anyone advise me whether I can copy (rsync) an entire volume's data (not the one where packages are installed) from xpenology nas1 volume2 to nas2 volume2?

    I see a lot of @eaDir and @DS_Store folders in my nas1 volume2, and I don't know what problems might arise if I copy those to the other NAS as well.

     

    Any advice here?

     

  3. After a new reboot, DSM saw the third disk as free and initialized, and

    I was able to repair the raid.

    The disk's SMART info shows this in its history:

    [Screenshot: SMART history for disk 7]

     

    Some searching on the web suggests that it can come from a bad SATA cable... can a SATA cable really go bad after 4 years of working fine?

    This happened during the repair of my array.

    As I haven't had an answer from anybody, I'm now working on deleting all @eaDir & @DS_Store folders from my backup data, in case I need to rsync everything back to my DSM (roughly as sketched below).

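    For reference, the cleanup looks roughly like this on my side; the path is the NFS-mounted backup folder I used for the rsync, so adjust it to wherever your backup actually lives, and the warnings find prints when it descends into a directory it has just removed can be ignored:

    # find /volume1/MOUNTNFS/volume2 -type d -name "@eaDir" -exec rm -rf {} +
    # find /volume1/MOUNTNFS/volume2 -type d -name "@DS_Store" -exec rm -rf {} +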

  4. I followed good advice from @flyride found there:

    and ran

    # vgchange -ay

    # mount -o clear_cache /dev/vg1000/lv /volume2

    All my data was accessible again over SSH.

    This allowed me to run

    # rsync -avzh /volume2/ /volume1/MOUNTNFS/volume2    and transfer everything to a folder of another NAS, mounted over NFS on the first one.

    This took a while, but at the end I had a copy of all my data.

    Then I completely removed my volume 2 from DSM and recreated it; the first consistency check again took a while,

    and afterwards I ran # rsync -avzh /volume1/MOUNTNFS/volume2 /volume2/ (the transfer again took days to complete).

    I was happy, but too soon: my DSM now hangs a lot and I have permission/owner issues on some folders.

    I realise now that it was apparently a big mistake to let rsync handle the @eaDir and @DS_Store folders, as I didn't add exclusions for them (see the sketch just below).
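
    In hindsight, the copy back should probably have looked something like this (same source and destination as above, just with exclusions added; the exact patterns are my guess at what would skip those folders):

    # rsync -avzh --exclude='@eaDir' --exclude='@DS_Store' --exclude='.DS_Store' /volume1/MOUNTNFS/volume2 /volume2/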

    My DSM often became unresponsive in File Station during some copy jobs or when changing owner settings on some folders, so I did a # reboot from SSH,

    and there, big scare: on the first restart DSM didn't come back. OK, is this the end? Do I need to reinstall all of xpenology?

    After 2 restarts, still facing a DSM that wouldn't show up, I decided to start without my second volume, pulling the 4 disks out once it was shut down.

    This did the trick and DSM came back correctly with only volume1.

    I shut down again, put the 4 disks of volume2 back, and yes!! DSM came back with my 2 volumes.

    DSM again ran a consistency check on volume 2 and then offered to fix the file system on the third disk. I clicked on the proposed repair link,

    but again the system hung and there was no access to DSM at all.

     

    @flyride does it mean that my third disk is really dead?

     

    Can anybody help me here? I'm lost.

    After a crashed volume, I'm now facing a degraded one that refuses to be fixed.

     

  5. Well, I'm suffering a lot here.

    After days of rsync to back up my data to another NAS,

    and rsync again back into the recreated volume 2,

    I'm now discovering other issues (it seems it was not a good idea to rsync EVERYTHING with the @eaDir and @DS_Store folders included).

    All my data is back on my volume 2, but DSM hangs a lot when I use File Station,

    and some folders now have root as owner.

    Once, when the system hung completely, I requested a reboot over SSH and DSM didn't come back.

    I shut down completely, removed the 4 disks of the volume from the NAS, restarted the NAS with only volume1, and DSM came back.

    I shut down again, put the 4 disks back and restarted; DSM came back with the 2 volumes and all my data.

    But it is now running a consistency check on my volume 2, and my disk 3 reports a file system error, with Storage Manager offering to fix it.

    I'm now waiting for the consistency check to finish before doing anything else.

    But I'm afraid I will need to delete my volume again and redo the rsync, WITHOUT the @eaDir and @DS_Store folders this time.

    What a painful situation and waste of time for me.

     

    In your case, what result do you get from the command  # vgchange -ay ?

     

    I think you may not be working on the correct device here, since you have an SHR volume where md2 is only the underlying disk device.

    You need to work on the logical volume created on top of the RAID array; that's why I asked you to run # vgdisplay -v to see it.

     

    Have you tried yet to mount the volume read-only, without its file system cache? # mount -o clear_cache /dev/vg1000/lv /volume1

     

  6. May I make a suggestion to you here?

    If your plan is to upgrade your xpenology,

    I suggest you shut down your current version,

    remove your 4-disk array,

    put in only your 8TB disk and install the newest DSM, but DON'T create a volume on the 8TB disk,

    shut down your new DSM,

    then put back your 4-disk array and start your DSM.

    Let's see if you then end up with an array (still degraded) but without file system corruption.

    As you said you already have 99% of your data saved in another backup, you only risk losing 1% of your stuff here.

  7. Why didn't you just connect the new drive over USB and leave the previous work in peace?

    With a reboot you are always at risk of losing something if you are already in trouble with a bad disk or a corrupted array.

    I don't understand the choice of btrfs for your temporary spare disk if it's just to recover your data and rebuild your crashed volume; in the end the data will go back into your array, won't it?

  8. I don't understand how you went from

    root@DiskStation:/volume1# cat /etc/fstab

     none /proc proc defaults 0 0

     /dev/root / ext4 defaults 1 1

     /dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0

     

    to now

    root@DiskStation:/volume1# cat /etc/fstab

     none /proc proc defaults 0 0

     /dev/root / ext4 defaults 1 1

     /dev/vg1000/lv /volume1 btrfs  0 0

     

    It seems your newly mounted volume has taken the place of your previous volume... but here you chose btrfs instead of ext4 for the newly added volume disk.

    Maybe you will need to mount your previous volume at /volume2.

    I may be wrong, but @flyride can check my supposition better, as he said: "you might have a new line in the file for your volume2/md3 that you should leave alone"

  9. Yep! I don't want to miss my chance. First I will try to copy everything out, but this will take a while (5.5 TB here) and I need to purchase a spare 6 TB disk (not a good time for the expense, but if we need it...).

    I can already tell you that btrfs check --init-extent-tree /dev/vg1000/lv returned:

    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup extent tree
    extent buffer leak: start 394340171776 len 16384
    Couldn't open file system

     

    Is it lost, doctor?

  10. 2 minutes ago, flyride said:

    @supermounter if your mdstats indicate healthy arrays, check out this thread, starting from post #9

    https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability

     

    Yess!

     

    vgchange -ay gave success and it's active now,

     

    but only the mount -o recovery,ro /dev/vg1000/lv /volume2 worked, the one without the file system cache option.

    I just ran a test rsync from one of my folders to another basic NAS mounted over NFS into volume 1, and apparently it went well (roughly as sketched below).
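
    To give an idea, the test looked roughly like this; the remote NAS name, export path and folder name are placeholders for my own setup, and MOUNTNFS is the shared folder on volume1 where the remote volume is mounted:

    # mount -t nfs othernas:/volume2 /volume1/MOUNTNFS/volume2
    # rsync -avzh /volume2/somefolder /volume1/MOUNTNFS/volume2/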

     

    Do you think there is a way to fix the file system on this volume, and then get back into my xpenology again?

     

    Thank you Flyride for taking the time to come back to me, I really appreciate it.

  11. Hello @harmakhis, in my case when I run the check --repair, it can't set up the extent tree, can't set up the device tree, and can't open the file system.

     

    /# btrfs check --repair /dev/vg1000/lv
    enabling repair mode
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup extent tree
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup device tree
    extent buffer leak: start 394340171776 len 16384
    extent buffer leak: start 394340171776 len 16384
    Couldn't open file system

     

    Any advice or any other possibility?

  12. Hi @flyride, would you mind taking some time to help me too?

    My volume is crashed, but this one is not ext4, it's btrfs.

    The RAID array looks good (see further below for how I checked it), but LVM shows me:

     # lvs
      LV   VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      lv   vg1000 -wi-a----- 7.26t

    # lvm pvscan
      PV /dev/md3   VG vg1000   lvm2 [5.44 TiB / 0    free]
      PV /dev/md4   VG vg1000   lvm2 [1.82 TiB / 0    free]
      Total: 2 [7.26 TiB] / in use: 2 [7.26 TiB] / in no VG: 0 [0   ]

    # lvm lvmdiskscan
      /dev/md2 [     163.08 GiB]
      /dev/md3 [       5.44 TiB] LVM physical volume
      /dev/md4 [       1.82 TiB] LVM physical volume
      0 disks
      1 partition
      0 LVM physical volume whole disks
      2 LVM physical volumes

     

    My issue appeared after a power cut, following an unresponsive system while I was running a folder search across the entire \

    All disks look OK, and at the first restart the system resynchronised my RAID, but the volume stays in crashed status.
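
    For reference, this is roughly what I looked at to conclude the arrays themselves are healthy; the md device names are the ones shown by pvscan above:

    # cat /proc/mdstat
    # mdadm --detail /dev/md3
    # mdadm --detail /dev/md4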

     

     

  13. Hello, I'm reading this post with great hope as I'm in the same situation, but with an SHR1 RAID, which means I have an lv.

    But unfortunately, btrfs check --init-extent-tree /dev/vg1000/lv won't fix my issue. 😥

     

    Here is my result:

    # btrfs check --init-extent-tree /dev/vg1000/lv
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup extent tree
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup device tree
    extent buffer leak: start 394340171776 len 16384
    extent buffer leak: start 394340171776 len 16384
    Couldn't open file system

     

    I'm lost again, like my data.

    @flyride do you have an idea why mine doesn't want to be fixed?

     

    Thanks for any advice, and sorry to jump into this thread started by @Mitt27, but his post matches 99% of my situation; his solution just apparently doesn't work for me.

  14. # btrfs restore -D -v  /dev/vg1000/lv /volume1/MOUNTNFS/volume2
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup extent tree
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup device tree
    extent buffer leak: start 394340171776 len 16384
    extent buffer leak: start 394340171776 len 16384
    Could not open root, trying backup super
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    Ignoring transid failure
    Couldn't setup extent tree
    Couldn't setup device tree
    Could not open root, trying backup super
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    Ignoring transid failure
    Couldn't setup extent tree
    Couldn't setup device tree
    Could not open root, trying backup super

  15. # btrfs rescue super-recover /dev/vg1000/lv
    Make sure this is a btrfs disk otherwise the tool will destroy other fs, Are you sure? [y/N]: y
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    Ignoring transid failure
    Couldn't setup extent tree
    Failed to recover bad superblocks
    *** Error in `btrfs': double free or corruption (fasttop): 0x0000000002324010 ***
    ======= Backtrace: =========
    /lib/libc.so.6(+0x72a8e)[0x7fbf00911a8e]
    /lib/libc.so.6(+0x77c5e)[0x7fbf00916c5e]
    /lib/libc.so.6(+0x78413)[0x7fbf00917413]
    btrfs(btrfs_close_devices+0x10f)[0x45832f]
    btrfs(btrfs_recover_superblocks+0x441)[0x439081]
    btrfs(main+0x82)[0x410452]
    /lib/libc.so.6(__libc_start_main+0xf0)[0x7fbf008bf030]
    btrfs[0x410558]
    ======= Memory map: ========
    00400000-0049b000 r-xp 00000000 09:00 5792                               /usr/sbin/btrfs
    0069b000-0069f000 r--p 0009b000 09:00 5792                               /usr/sbin/btrfs
    0069f000-006a1000 rw-p 0009f000 09:00 5792                               /usr/sbin/btrfs
    006a1000-006c7000 rw-p 00000000 00:00 0
    02324000-02366000 rw-p 00000000 00:00 0                                  [heap]
    7fbf0068e000-7fbf0069f000 r-xp 00000000 09:00 6975                       /usr/lib/libgcc_s.so.1
    7fbf0069f000-7fbf0089e000 ---p 00011000 09:00 6975                       /usr/lib/libgcc_s.so.1
    7fbf0089e000-7fbf0089f000 rw-p 00010000 09:00 6975                       /usr/lib/libgcc_s.so.1
    7fbf0089f000-7fbf00a3a000 r-xp 00000000 09:00 6422                       /usr/lib/libc-2.20-2014.11.so
    7fbf00a3a000-7fbf00c3a000 ---p 0019b000 09:00 6422                       /usr/lib/libc-2.20-2014.11.so
    7fbf00c3a000-7fbf00c3e000 r--p 0019b000 09:00 6422                       /usr/lib/libc-2.20-2014.11.so
    7fbf00c3e000-7fbf00c40000 rw-p 0019f000 09:00 6422                       /usr/lib/libc-2.20-2014.11.so
    7fbf00c40000-7fbf00c44000 rw-p 00000000 00:00 0
    7fbf00c44000-7fbf00c5b000 r-xp 00000000 09:00 8027                       /usr/lib/libpthread-2.20-2014.11.so
    7fbf00c5b000-7fbf00e5a000 ---p 00017000 09:00 8027                       /usr/lib/libpthread-2.20-2014.11.so
    7fbf00e5a000-7fbf00e5b000 r--p 00016000 09:00 8027                       /usr/lib/libpthread-2.20-2014.11.so
    7fbf00e5b000-7fbf00e5c000 rw-p 00017000 09:00 8027                       /usr/lib/libpthread-2.20-2014.11.so
    7fbf00e5c000-7fbf00e60000 rw-p 00000000 00:00 0
    7fbf00e60000-7fbf00e81000 r-xp 00000000 09:00 7374                       /usr/lib/liblzo2.so.2
    7fbf00e81000-7fbf01080000 ---p 00021000 09:00 7374                       /usr/lib/liblzo2.so.2
    7fbf01080000-7fbf01081000 rw-p 00020000 09:00 7374                       /usr/lib/liblzo2.so.2
    7fbf01081000-7fbf01096000 r-xp 00000000 09:00 9689                       /usr/lib/libz.so.1.2.8
    7fbf01096000-7fbf01295000 ---p 00015000 09:00 9689                       /usr/lib/libz.so.1.2.8
    7fbf01295000-7fbf01296000 r--p 00014000 09:00 9689                       /usr/lib/libz.so.1.2.8
    7fbf01296000-7fbf01297000 rw-p 00015000 09:00 9689                       /usr/lib/libz.so.1.2.8
    7fbf01297000-7fbf012d5000 r-xp 00000000 09:00 8475                       /usr/lib/libblkid.so.1.1.0
    7fbf012d5000-7fbf014d4000 ---p 0003e000 09:00 8475                       /usr/lib/libblkid.so.1.1.0
    7fbf014d4000-7fbf014d8000 r--p 0003d000 09:00 8475                       /usr/lib/libblkid.so.1.1.0
    7fbf014d8000-7fbf014d9000 rw-p 00041000 09:00 8475                       /usr/lib/libblkid.so.1.1.0
    7fbf014d9000-7fbf014da000 rw-p 00000000 00:00 0
    7fbf014da000-7fbf014de000 r-xp 00000000 09:00 6918                       /usr/lib/libuuid.so.1.3.0
    7fbf014de000-7fbf016dd000 ---p 00004000 09:00 6918                       /usr/lib/libuuid.so.1.3.0
    7fbf016dd000-7fbf016de000 r--p 00003000 09:00 6918                       /usr/lib/libuuid.so.1.3.0
    7fbf016de000-7fbf016df000 rw-p 00004000 09:00 6918                       /usr/lib/libuuid.so.1.3.0
    7fbf016df000-7fbf01700000 r-xp 00000000 09:00 9614                       /usr/lib/ld-2.20-2014.11.so
    7fbf018f6000-7fbf018ff000 rw-p 00000000 00:00 0
    7fbf018ff000-7fbf01900000 r--p 00020000 09:00 9614                       /usr/lib/ld-2.20-2014.11.so
    7fbf01900000-7fbf01902000 rw-p 00021000 09:00 9614                       /usr/lib/ld-2.20-2014.11.so
    7ffc87edb000-7ffc87efc000 rw-p 00000000 00:00 0                          [stack]
    7ffc87f78000-7ffc87f79000 r-xp 00000000 00:00 0                          [vdso]
    ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
    Aborted (core dumped)

  16. # btrfs check  /dev/vg1000/lv
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    parent transid verify failed on 394340270080 wanted 940895 found 940897
    Ignoring transid failure
    Couldn't setup extent tree
    extent buffer leak: start 394340171776 len 16384
    Couldn't open file system

  17. # pvdisplay
      --- Physical volume ---
      PV Name               /dev/md3
      VG Name               vg1000
      PV Size               5.44 TiB / not usable 1.56 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              1427258
      Free PE               0
      Allocated PE          1427258
      PV UUID               K64N9d-qCtH-drER-5SY9-9k8q-3dvy-3wcgD0

      --- Physical volume ---
      PV Name               /dev/md4
      VG Name               vg1000
      PV Size               1.82 TiB / not usable 1.88 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              476927
      Free PE               0
      Allocated PE          476927
      PV UUID               ROBzZy-BBO4-rkY9-NMlb-znJ3-YOhH-zUm3k9


      Block device           253:0

  18. Here are some of the tests already made and their commands:

     

    # lvdisplay
      --- Logical volume ---
      LV Path                /dev/vg1000/lv
      LV Name                lv
      VG Name                vg1000
      LV UUID                Iv9ZJ0-oN3I-K6jV-76Px-gIbp-Fqc4-8ohokw
      LV Write Access        read/write
      LV Creation host, time ,
      LV Status              available
      # open                 0
      LV Size                7.26 TiB
      Current LE             1904185
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     512

  19. Just now, supermounter said:

    Hello.

    I'm trying to restore my btrfs volume2, but it doesn't seem to be so easy, especially for a newbie like me.

    I followed some old posts from this forum that gave some paths to a solution, but after many tries I finally did

    a btrfs dry-run restore to check whether I'm able to restore, and I'm still running into errors.

     

    Am I obliged to pull the 4 disks out of my NAS and connect them to a spare computer under an Ubuntu live CD to get this restore/copy done?

    Before I purchase a new 6 TB drive to receive the 5.5 TB of data to restore, I want to be sure I can really restore my data from this crashed volume.

     

    Any help will be much appreciated, and a big Easter chocolate egg will be sent to my guardian angel who offers me his knowledge and time :-) and gives me a solution to copy my data to another secure place.

    My array and its logical volume are apparently OK, but I can't mount the volume anymore.

    Every btrfs command to repair or to mount the volume always gives me these responses:

    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    parent transid verify failed on 394346741760 wanted 940897 found 920148
    Ignoring transid failure
    Couldn't setup extent tree
    Failed to recover bad superblocks

  20. Hello.

    I'm trying to restore my btrfs volume2, but it doesn't seem to be so easy, especially for a newbie like me.

    I followed some old posts from this forum that gave some paths to a solution, but after many tries I finally did

    a btrfs dry-run restore to check whether I'm able to restore, and I'm still running into errors.

     

    Am I obliged to pull the 4 disks out of my NAS and connect them to a spare computer under an Ubuntu live CD to get this restore/copy done?

    Before I purchase a new 6 TB drive to receive the 5.5 TB of data to restore, I want to be sure I can really restore my data from this crashed volume.

     

    Any help will be much appreciated, and a big Easter chocolate egg will be sent to my guardian angel who offers me his knowledge and time :-) and gives me a solution to copy my data to another secure place.
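
    In case it helps to judge that option: my understanding is that, on an Ubuntu live CD, getting at the array would look roughly like this (untested on my side, and on newer kernels the btrfs mount option is called usebackuproot rather than recovery):

    # apt-get install mdadm lvm2
    # mdadm --assemble --scan
    # vgchange -ay
    # mkdir -p /mnt/recover
    # mount -o ro,recovery /dev/vg1000/lv /mnt/recover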

  21. Hello.

    I'm facing the same problem as you, and have been looking for a solution for days now.

    Apparently you have ext4 as your file system, so it will be easier to recover and mount your volume.

    Mine is btrfs and is much more complicated, as Synology uses its own btrfs repository, and that changes some behaviour and keeps the btrfs commands from working as expected.

     

    Can you run these commands and post back the result?

     

    pvdisplay

    vgdisplay

     
