Donkey545

Members
  • Content Count: 10
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Donkey545
  • Rank: Newbie
  1. I was able to recover my data with a Restore. None of the repair options worked, as you expected, and each yielded the incompatibility flag you spoke of. With the recovery, I configured my other NAS with a volume large enough for all of the data and mounted it as an NFS share. For any future googlers: the destination volume for a btrfs restore does not need to be the size of the source volume, but rather only the size of the data to be recovered. Furthermore, you do not need to worry too much about doing the recovery process in a single shot; I had several network changes during my recovery process.
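     A minimal sketch of the restore workflow described above, assuming the spare NAS exports an NFS share at 192.168.1.50:/volume1/recovery (the address and path are placeholders, not from the thread):

         # mount the NFS export from the spare NAS as the restore destination
         sudo mkdir -p /mnt/recovery
         sudo mount -t nfs 192.168.1.50:/volume1/recovery /mnt/recovery
         # pull files out of the unmountable btrfs volume without mounting it;
         # -v lists each file, -i skips individual read errors instead of aborting
         sudo btrfs restore -v -i /dev/vg1000/lv /mnt/recovery

     Because btrfs restore only copies files out, the destination needs space for the data, not for the whole source volume, which is why a smaller target works.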
  2. Hey flyride, I really appreciate the help. I have not gone through any of these last options just yet, but I will be attempting a restore to my other NAS soon. At this point, making a new OS drive wouldn't be too much more work if it yields better results than the Synology system, so I will investigate that option and follow up if I use it. Again, I really appreciate your help. Your method of investigating the issue has taught me a great deal about how issues like these can be addressed, and even more about how I can be more diligent.
  3. First, attempt one more mount: sudo mount -o recovery,ro /dev/vg1000/lv /volume1

     /$ sudo mount -o recovery,ro /dev/vg1000/lv /volume1
     mount: wrong fs type, bad option, bad superblock on /dev/vg1000/lv,
            missing codepage or helper program, or other error

            In some cases useful info is found in syslog - try
            dmesg | tail or so.

     Excerpt of dmesg:
     [249007.103992] BTRFS: error (device dm-0) in convert_free_space_to_extents:456: errno=-5 IO failure
     [249007.112998] BTRFS: error (device dm-0) in add_to_free_space_tree:1049: errno=-5 IO failure
     [2490
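     Both dmesg lines point at the free space tree rather than at file data, so a read-only mount that skips the cached free space information is one more thing worth trying; this is a sketch using the standard btrfs nospace_cache mount option, which may or may not behave the same on a Synology kernel:

         # ignore the (possibly corrupt) free space cache while mounting read-only
         sudo mount -o ro,nospace_cache /dev/vg1000/lv /volume1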
  4. I will be testing out these options later today. I have four more 6 TB drives that I had purchased to do a backup of this NAS, so I have the space available. Thanks for the help!
  5. It looks like I still have some issues. The mounting fails with each of these options.

     First I do the vgchange -ay:
     $ sudo vgchange -ay
       1 logical volume(s) in volume group "vg1000" now active

     Then a normal mount:
     $ sudo mount /dev/vg1000/lv /volume1
     mount: wrong fs type, bad option, bad superblock on /dev/vg1000/lv,
            missing codepage or helper program, or other error

            In some cases useful info is found in syslog - try
            dmesg | tail or so.

     Then a clear cache:
     $ sudo mount -o clear_cache /dev/vg1000/lv /volume1
     mount: wrong f
  6. Here is some check info on the volume:

     $ sudo btrfs check /dev/vg1000/lv
     Password:
     Syno caseless feature on.
     Checking filesystem on /dev/vg1000/lv
     UUID: b27120c9-a2af-45d6-8e3b-05e2f31ba568
     checking extents
     checking free space tree
     free space info recorded 619 extents, counted 620
     Wanted bytes 999424, found 303104 for off 5706027515904
     Wanted bytes 402767872, found 303104 for off 5706027515904
     cache appears valid but isnt 5705893412864
     found 5651223941120 bytes used err is -22
     total csum bytes: 5313860548
     total tree bytes: 6601883648
     total fs tree bytes: 703512576
     total extent tree b
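     The mismatch reported above ("recorded 619 extents, counted 620", err -22) is confined to the free space tree, which is rebuildable metadata. Recent btrfs-progs can drop it offline so it is regenerated on the next mount; a sketch, assuming the installed btrfs-progs is new enough to support --clear-space-cache (the version bundled with DSM may not be):

         # discard the v2 free space cache (the free space tree);
         # btrfs rebuilds it automatically on the next mount
         sudo btrfs check --clear-space-cache v2 /dev/vg1000/lv
         sudo mount -o ro /dev/vg1000/lv /volume1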
  7. That is interesting, and it explains why I could create the volume in the first place. I think that it was BTRFS, but honestly I am not sure. For some reason in my research I neglected to make sure I was looking for that detail. This suggests that btrfs is correct:

     /$ cat /etc/fstab
     none /proc proc defaults 0 0
     /dev/root / ext4 defaults 1 1
     /dev/vg1000/lv /volume1 btrfs auto_reclaim_space,synoacl,relatime 0 0

     $ sudo lvdisplay -v
         Using logical volume(s) on command line.
       --- Logical volume ---
       LV Path
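     Another quick way to confirm the filesystem type, reading the signature straight off the device instead of trusting fstab (blkid is a standard util-linux tool, shown here as a sketch):

         # print the detected filesystem signature on the logical volume
         sudo blkid /dev/vg1000/lv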
  8. Here are the results of the vgdisplay:

     /$ sudo vgdisplay
       --- Volume group ---
       VG Name               vg1000
       System ID
       Format                lvm2
       Metadata Areas        2
       Metadata Sequence No  5
       VG Access             read/write
       VG Status             resizable
       MAX LV                0
       Cur LV                1
       Open LV               0
       Max PV                0
       Cur PV                2
       Act PV                2
       VG Size               20.00 TiB
       PE Size               4.00 MiB
       Total PE              5242424
       Alloc PE / Size       5242424 / 20.00 TiB
       Free PE / Size
  9. Hi, thanks for the response; I appreciate the pointer on the direction to go with this. I am quite inexperienced with this type of troubleshooting. In my original post I had included the cat /proc/mdstat results, but there were numbers in there that the content filter flagged as phone numbers or something; I will try it again. All of my disks are healthy. Unfortunately, I went to back up the data while the volume was still showing its original mounted size, and after the reboot with my spare drive it populated an empty volume. I have since disconnected my spare.
  10. Hello, I am trying to recover about 5 TB of data off of a 20 TB volume on my XPEnology system. The volume crashed while the system was under normal operating conditions; the system is on battery backup, so power failure is not a likely cause. I have done a fair bit of research on how to accomplish the recovery, but I am running into a roadblock on what I expect is my potential solution. If I can get some pointers in the right direction I would really appreciate it. So I'll just drop some details here:

      fdisk -l
      cat /proc/mdstat
      mdadm --assemble --scan -v
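     Since this feed runs newest-first, this is the original question. The thread converges on the usual Synology SHR stack of mdadm RAID → LVM → btrfs, so the generic read-only recovery sequence the later posts walk through looks like this sketch:

         # reassemble the RAID arrays from their on-disk superblocks
         sudo mdadm --assemble --scan -v
         # activate the LVM volume group sitting on top of the arrays
         sudo vgchange -ay
         # mount the btrfs volume read-only so nothing gets written to it
         sudo mount -o ro /dev/vg1000/lv /volume1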