XPEnology Community

drcrow_

Transition Member
  1. So I ran btrfs restore /dev/vg1000/lv /root/hope, but it got about 250 GB in and hit this error. I tried googling it and nothing really came up. Do you have any idea what's going on? Maybe the --path-regex option would help by skipping this section. It looks like it's just the recycle bin anyway, not something I care about. Any chance you know how to fix the regex?
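[Editor's note: btrfs restore's --path-regex is, as far as I know, a POSIX extended regex, which has no negative lookahead, so "everything except the recycle bin" is usually expressed by whitelisting the trees you do want instead. A minimal sketch in Python's re module, which accepts the same alternation style; the recycle-bin directory name #recycle is an assumption based on Synology's usual convention:]

```python
import re

# Whitelist pattern: restore only the Media tree. Everything else,
# including the "#recycle" directory (a Synology naming convention,
# assumed here), simply fails to match and is skipped.
# The empty alternations let the ancestor directories themselves match,
# which the restore needs in order to descend into them.
pattern = re.compile(r'^/(|Media(|/.*))$')

assert pattern.fullmatch('/')                    # root matches
assert pattern.fullmatch('/Media')               # ancestor matches
assert pattern.fullmatch('/Media/Movies/a.mkv')  # wanted file matches
assert not pattern.fullmatch('/#recycle/a.mkv')  # recycle bin skipped
```

This only demonstrates the matching logic; whether btrfs restore's regex engine behaves identically on a given system should be confirmed with a -D dry run.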
  2. @flyride Thanks for all the help. I think I was able to get the rough size of my volume, ~21.61 TB. You can see in the output below, bytes_used 21619950809088. Hopefully that is the right amount. I have mounted my other NAS and I am running btrfs restore /dev/vg1000/lv /root/hope/, where /root/hope/ is the NFS mount of my other NAS. One question I did have: the btrfs restore documentation mentions you can use the following flag: Now I am not a regex wizard, but the file paths on my NAS went something like /Volume1/Media/TV, /Volume1/Media/Movies, /Volume1/Media/Home Videos. Let's say I just wanted to restore my Movies folder; I think the regex should be ^/(|Media(|/Movies(|/.*)))$. But when I tried a dry run with that, btrfs restore -D --path-regex '^/(|Media(|/Movies(|/.*)))$' /dev/vg1000/lv /root/hope/, it did not seem to work. Do you know if there is something wrong with my syntax?
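[Editor's note: for what it's worth, the pattern itself looks syntactically sound. A quick check in Python, whose re module accepts the same nested-alternation style, shows it matches the intended paths and their ancestors. This does not prove btrfs restore will behave identically, but one common pitfall it does highlight: the paths are matched relative to the filesystem root, with no /Volume1 prefix.]

```python
import re

# The pattern from the post: each nested empty alternation lets the
# ancestor directories (/, /Media, /Media/Movies) match on their own,
# so the restore can walk down to the files inside Movies.
pattern = re.compile(r'^/(|Media(|/Movies(|/.*)))$')

for path in ('/', '/Media', '/Media/Movies', '/Media/Movies/film.mkv'):
    assert pattern.fullmatch(path), path

# Sibling trees outside Movies do not match and would be skipped.
assert not pattern.fullmatch('/Media/TV')
assert not pattern.fullmatch('/Media/TV/show.mkv')
```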
  3. Is there a way I can see how much data I have on the volume, to make sure I have enough space on my other array?
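[Editor's note: the bytes_used figure quoted above converts to human-readable units like this. Note the TB vs TiB distinction: drive vendors and DSM typically use decimal terabytes, while many command-line tools report binary tebibytes, which is why the same volume can appear under two different sizes.]

```python
bytes_used = 21619950809088  # value reported for the filesystem above

tb = bytes_used / 10**12   # decimal terabytes (vendor/DSM style)
tib = bytes_used / 2**40   # binary tebibytes (many CLI tools)

print(f"{tb:.2f} TB / {tib:.2f} TiB")  # → 21.62 TB / 19.66 TiB
```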
  4. I might have misspoken when I was looking at the size of my file; I am not sure. But when I do mdadm -D /dev/md2 I get: Do you know if the Used Dev Size means the size used? That would mean I have only used roughly 9.99 TB, which would work in terms of using my NAS with 20 TB. I just want to confirm your thoughts before I go ahead and run the command.
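[Editor's note: in mdadm's output, Used Dev Size is the portion of each member device the array uses, not the amount of data stored. A rough sanity check of what that implies for total capacity, under the assumption that this SHR volume behaves like a single 12-disk RAID-5 (SHR can layer multiple RAIDs, so this is only approximate):]

```python
# mdadm's "Used Dev Size" is per member device, not total data used.
# For a RAID-5-style layout, usable capacity = (n_disks - 1) * dev_size,
# since one disk's worth of space goes to parity.
n_disks = 12
used_dev_size_tb = 9.99  # per-device figure from the mdadm -D output

usable_tb = (n_disks - 1) * used_dev_size_tb
print(f"~{usable_tb:.1f} TB usable")  # → ~109.9 TB usable
```

A result in the same ballpark as the volume size reported elsewhere in the thread (~90-100 TB depending on TB vs TiB) would support reading Used Dev Size as per-device geometry rather than data consumed.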
  5. I see what you were talking about in the other post; you mean this, right? That is my biggest problem. I have another NAS with roughly 20 TB of storage and a friend's NAS with 16 TB. Is there a way to restore just the data, not the entire array? Meaning, does the destination for btrfs restore /dev/vg1000/lv /volume2 need to be as big as the entire volume, ~90 TB, or just as big as the data I stored on it, ~35 TB? Additionally, is this link, https://btrfs.wiki.kernel.org/index.php/Restore, all the info there is on btrfs restore? I was hoping for some more information. Ideally, I could use part of my 20 TB NAS and part of my friend's NAS. BTW, thanks for your help so far. Seems kind of grim.
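[Editor's note: since btrfs restore copies files out rather than cloning the block device, my reading is that the destination only needs to hold the data, not the raw volume size; hedged, as the wiki does not spell this out. The arithmetic for the two available targets, using the rough figures from the post:]

```python
data_tb = 35.0          # rough data on the crashed volume
volume_tb = 90.0        # raw volume size: not what the target must hold
targets_tb = [20.0, 16.0]  # my NAS + friend's NAS

# restore copies files, so only the data needs to fit somewhere:
assert data_tb <= sum(targets_tb)

# but no single destination is large enough on its own:
assert all(t < data_tb for t in targets_tb)
# so the restore would have to be split across both targets,
# e.g. by running it per top-level folder with --path-regex.
```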
  6. Thanks for responding @flyride. I was actually using that post to help troubleshoot my issues, but I ran into problems with your recovering-files comment. When I try to run: I get the following errors: I can't seem to interact with the LV. Got any other steps/commands I should try? I was really hoping you would respond, since you seemed to help out the other guy in the thread you linked. Having the checksum-error emails, and then getting GPT PMBR size mismatch (102399 != 60062499) will be corrected by w(rite). when I run fdisk -l, makes me think the issue stems from there. But I am willing to give anything you suggest a try!
  7. I have been trying to troubleshoot my volume crash myself, but I am at my wit's end. I am hoping someone can shed some light on what my issue is and how to fix it. A couple of weeks ago I started to receive email alerts stating, “Checksum mismatch on NAS. Please check Log Center for more details.” I hopped on my NAS WebUI but did not really see much in the logs. After checking that my systems were still functioning properly and I could access my files, I figured something was wrong but not a major issue... how wrong I was. That brings us up to today, when I noticed my NAS was in read-only mode, which I thought was really odd. I tried logging into the WebUI, but after I entered my username and password I never got the NAS's dashboard. I figured I would reboot the NAS, thinking it would fix the issue; I had problems with the WebUI being buggy in the past, and a reboot always seemed to take care of it. But after the reboot I received the dreaded email, “Volume 1 (SHR, btrfs) on NAS has crashed”. I am unable to access the WebUI, but luckily I have SSH enabled and logged on to the server, and that's where we are now. Some info about my system: 12 x 10TB drives, Synology DSM 6.1.x as a DS3617xs, 1 SSD cache, 24 GB of RAM, 1 x Xeon CPU. Here is the output of some of the commands I tried already (I had to edit some of the outputs due to spam detection): The RAID comes up as md2 and seems to have all 12 drives active, though I am not 100% sure. I received an error when running this command: GPT PMBR size mismatch (102399 != 60062499) will be corrected by w(rite). I think this might have something to do with the checksum errors I was getting before. When I try to interact with the LV, it says it couldn't open the file system. When I try to unmount and/or remount the LV, it gives me errors saying it's not mounted, already mounted, or busy. Can anyone comment on whether it is possible to recover the data? Am I going in the right direction?
Any help would be greatly appreciated!
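[Editor's note: for scale, the two sector counts in that fdisk warning convert as below, assuming 512-byte sectors and treating the numbers as last-LBA values (both assumptions about how fdisk prints this message). The protective MBR describes a ~50 MiB device, which happens to match the size of a typical XPEnology synoboot loader image, while the device itself is ~30 GB, so the warning may well come from the boot stick rather than the data drives; that reading is a guess, not a confirmed diagnosis.]

```python
SECTOR = 512  # bytes; assuming fdisk reports 512-byte sectors

pmbr_last_lba = 102399     # size recorded in the protective MBR
device_last_lba = 60062499 # actual device size fdisk sees

# +1 because a last-LBA index of N means N+1 sectors (an assumption
# about the message format).
pmbr_mib = (pmbr_last_lba + 1) * SECTOR / 2**20
device_gb = (device_last_lba + 1) * SECTOR / 10**9

print(f"PMBR claims ~{pmbr_mib:.0f} MiB; device is ~{device_gb:.1f} GB")
# → PMBR claims ~50 MiB; device is ~30.8 GB
```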