XPEnology Community: search results for the tag 'degraded volume' (1 result found)
So I wanted to post this here, as I have spent three days trying to fix my volume. I am running XPEnology (DS3615xs, DSM 5.2-5644) on a JBOD NAS with 11 drives.

A few months ago a drive went bad and the volume dropped into degraded mode. I failed to replace the bad drive at the time because the volume still worked. A few days ago I had a power outage, and the NAS came back up with the volume crashed. I searched many Google pages for a fix and nothing worked; the bad drive was not recoverable at all. I am no Linux guru, but I had been through similar issues on this NAS with other drives, so I focused on mdadm commands. The problem was that I could not copy any data over from the old drive.

I found a post at https://forum.synology.com/enu/viewtopic.php?f=39&t=102148#p387357 that explained how to find the last known configs of the md RAIDs, and from that I was able to determine that the bad drive was /dev/sdk.

After trying fdisk and gparted, and realizing I could not use gdisk on the NAS itself (it is not native in XPEnology, and my drive was 4TB with a GPT partition table), I plugged the replacement drive into a USB hard drive bay on a separate Linux machine. Using gdisk there, I copied the partition table from another working 4TB drive almost identically. Don't try to do it on Windows; I did not find a tool that could partition it correctly. After validating the partition numbers, the start/end sizes, and the partition type code (FD00, Linux RAID), I put the drive back in my NAS.

I was then able to run mdadm --manage /dev/md3 --add /dev/sdk6, and as soon as the partitions showed up under cat /proc/mdstat I could see the RAIDs rebuilding. I have 22TB of space and the bad drive was a member of md2, md3 and md5, so it will take a while. I am hoping my volume comes back up after the rebuilds are done. The rough command sequence I used is sketched below.
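For anyone hitting the same thing, here is roughly how you can see which member a degraded array has lost (the post linked above goes further and recovers the last known configs for cases where the arrays will not assemble at all; the array and device names here are just examples from my setup):

    # Show the state of every md array; a degraded one reports
    # something like [11/10] with a hole in its [UUUU_UUUUUU] map.
    cat /proc/mdstat

    # Detailed view of one array: lists every member partition
    # and marks the faulty/removed slot.
    mdadm --detail /dev/md2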
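I did the partition copy interactively in gdisk, but the same package ships sgdisk, which can script it in a couple of commands. A rough sketch, assuming /dev/sdX is the healthy 4TB drive and /dev/sdY is the replacement; be careful, -R names the target and overwrites its partition table:

    # Replicate the GPT layout of /dev/sdX onto /dev/sdY
    # (target goes with -R, source is the positional argument).
    sgdisk -R /dev/sdY /dev/sdX

    # Randomize the disk and partition GUIDs on the copy so the
    # two drives don't clash.
    sgdisk -G /dev/sdY

    # Verify partition numbers, start/end sectors and type codes;
    # the RAID partitions should show type FD00.
    sgdisk -p /dev/sdY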
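And the re-add step, once the freshly partitioned drive was back in the NAS. md3 got /dev/sdk6 as described above; for the other two arrays, match the right sdk partition to each array first (the sdk5/sdk7 numbers below are only placeholders, check your own layout with mdadm --detail):

    # Add the matching partition back into each degraded array;
    # mdadm takes it as a fresh member and starts resyncing.
    mdadm --manage /dev/md2 --add /dev/sdk5
    mdadm --manage /dev/md3 --add /dev/sdk6
    mdadm --manage /dev/md5 --add /dev/sdk7

    # Re-run this to watch the rebuild progress.
    cat /proc/mdstat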