Showing results for tags 'crashed volume'.

Found 3 results

  1. I have been trying to troubleshoot my volume crash myself, but I am at my wits' end. I am hoping someone can shed some light on what my issue is and how to fix it. A couple of weeks ago I started receiving email alerts stating, "Checksum mismatch on NAS. Please check Log Center for more details." I hopped onto my NAS WebUI but did not really see much in the logs. After checking that my systems were still functioning properly and that I could access my files, I figured something was wrong but that it was not a major issue... how wrong I was.

     That brings us up to today, when I noticed my NAS was in read-only mode, which I thought was really odd. I tried logging into the WebUI, but after I entered my username and password I never got the NAS's dashboard. I figured I would reboot the NAS, thinking that would fix the issue; the WebUI had been buggy in the past, and a reboot always seemed to take care of it. But after the reboot I received the dreaded email: "Volume 1 (SHR, btrfs) on NAS has crashed". I can no longer access the WebUI, but luckily I have SSH enabled, so I logged on to the server, and that is where we are now.

     Some info about my system:
       • 12 x 10TB drives
       • DSM 6.1.x as a DS3617xs
       • 1 SSD cache
       • 24 GB of RAM
       • 1 x Xeon CPU

     Here is what I have found from the commands I tried so far (I had to edit some of the outputs due to spam detection):
       • The RAID comes up as md2 and appears to have all 12 drives active, though I am not 100% sure.
       • One command returned the error "GPT PMBR size mismatch (102399 != 60062499) will be corrected by w(rite)." I think this might have something to do with the checksum errors I was getting before.
       • When I try to interact with the LV, it says it couldn't open the file system.
       • When I try to unmount and/or remount the LV, I get errors saying it is not mounted, already mounted, or busy.

     Can anyone tell me whether it is possible to recover the data, and whether I am going in the right direction? Any help would be greatly appreciated! (The commands I ran are sketched below.)
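     For reference, I am reconstructing the commands from memory, so treat the exact names as approximate; /dev/vg1000/lv and /volume1 are the usual SHR defaults on DSM 6.x, not verbatim from my session:

         # State of the md arrays - md2 is the big data array here
         cat /proc/mdstat
         mdadm --detail /dev/md2

         # Partition tables - this is where the GPT PMBR size mismatch showed up
         fdisk -l

         # The LVM layer that SHR puts on top of the md array
         vgdisplay
         lvdisplay

         # Trying to unmount and (re)mount the logical volume read-only fails for me
         umount /volume1
         mount -o ro /dev/vg1000/lv /volume1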
  2. Hi, I am using Synology Hybrid RAID (SHR) with 5 drives. Four are working fine, but one is failing, and my volume has crashed. I know that SHR has a one-disk fault tolerance, so it should be able to recover if only one drive failed, but I cannot recover it. Please help me. If it would help, I can post the output of commands like the ones below.
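     These are the checks I understand people usually ask for first (sdX below is a placeholder for the failing disk, and md2 is the usual first data array on DSM, so the names are assumptions on my part):

         # Overall array state - a missing SHR member shows as "_" in the [UUUU_] map
         cat /proc/mdstat

         # Per-array detail, including which member is faulty or removed
         mdadm --detail /dev/md2

         # SMART health of the suspect drive
         smartctl -a /dev/sdX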
  3. I wanted to post this here, as I have spent three days fixing my volume. I am running XPEnology on a JBOD NAS with 11 drives (DS3615xs, DSM 5.2-5644).

     A few months back I had a drive go bad and the volume went into degraded mode; I failed to replace the bad drive at the time because the volume still worked. A few days ago I had a power outage and the NAS came back up as crashed. I searched many Google pages on what to do to fix it and nothing worked; the bad drive was not recoverable at all. I am no Linux guru, but I had had similar issues on this NAS with other drives before, so I focused on mdadm commands. The problem was that I could not copy any data over from the old drive. I found a post here https://forum.synology.com/enu/viewtopic.php?f=39&t=102148#p387357 that talked about finding the last known configs of the md RAIDs, and from that I was able to determine that the bad drive was /dev/sdk.

     After trying fdisk and gparted, and realizing I could not use gdisk (it is not native in XPEnology, and my drive was 4TB with GPT), I plugged the drive into a USB hard-drive bay on a separate Linux machine. There I took another working 4TB drive and copied its partition table almost identically using gdisk. Don't try to do this on Windows; I did not find a tool worthy of partitioning it correctly. After validating my partition numbers, start/end sizes, and file-system type (FD00, Linux RAID), I stuck the drive back in my NAS.

     I was then able to run mdadm --manage /dev/md3 --add /dev/sdk6, and as soon as the partitions showed up under cat /proc/mdstat I could see the RAIDs rebuilding. I have 22TB of space and the bad drive was lost from md2, md3, and md5, so it will take a while. I am hoping my volume comes back up after they are done. (The whole sequence is sketched below.)
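     To pull the steps together: I copied the table by hand in interactive gdisk, but sgdisk (from the same gdisk package) can do it non-interactively, so that is what I am showing here; /dev/sdX, /dev/sdY, and the extra sdk partition numbers are placeholders, since I only noted down the md3 command verbatim:

         # On the separate Linux machine: replicate the GPT partition table
         # FROM a known-good 4TB member (sdX) TO the drive being rebuilt (sdY),
         # then give the copy fresh unique GUIDs so the two disks don't clash
         sgdisk -R=/dev/sdY /dev/sdX
         sgdisk -G /dev/sdY

         # Back on the NAS: re-add the drive's partitions to each degraded
         # array (the sdk partition number for each array has to match what
         # the last-known md configs say - only the md3 line is verbatim)
         mdadm --manage /dev/md3 --add /dev/sdk6
         mdadm --manage /dev/md2 --add /dev/sdkN
         mdadm --manage /dev/md5 --add /dev/sdkN

         # Watch the arrays rebuild
         cat /proc/mdstat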