XPEnology Community

Showing results for tags 'bad sector'.

Found 1 result

  1. I've had a mixture of WD Red drives in a Syno DS410 and an Intel SSE4200 enclosure running XPEnology for years with very few drive issues. Recently I repurposed an Intel box I'd built a few years ago that was just sitting there (CPU/RAM/mobo) and successfully set it up with 4x 3TB WD Red drives running XPEnology. When given the choice, I chose to create a btrfs RAID 5 volume.

     But in the 5 or so months I've been running this NAS, three drives have crashed and started reporting a bunch of bad sectors. These drives have less than 1,000 hours on them, practically new. Fortunately they are under warranty at some level. Still, I'm wondering: could this be btrfs? I'm no filesystem expert. Light research suggests that while btrfs has been around for several years and is of course a supported option in Synology DSM, some feel it isn't ready for prime time. I'm at a loss to explain why three WD Red drives with less than 1,000 hours on them, manufactured on different dates, are failing so catastrophically.

     I understand btrfs and bad sectors aren't really in the same problem zone; software shouldn't be able to cause hardware faults. I considered heat, but these drives are rated to 65 °C and they aren't going above 38 or so. If it matters, when a drive fails it always reports problems at boot-up; in fact, since the volume is now degraded with the loss of yet another drive, I'm just not turning the system off until I get a replacement in there. One of the remaining drives reported a failure to start up properly in the past week.

     My final consideration is that this is a build-a-box using a Gigabyte motherboard with 4 drives on the SATA bus in AHCI mode. Could some random hardware issue in this system be causing bad sectors to be reported on the drives? Seems unlikely. Has anyone ever heard of Synology DSM reporting bad sectors when there weren't actually bad sectors? Anyone have any thoughts on this? Should I go back to ext4?

     This is mainly a Plex media server.
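One way to tell whether DSM is misreporting or the drives really are failing is to read the drives' own SMART counters with smartctl (part of smartmontools, available from an SSH session on the box). This is just a sketch; the /dev/sda..sdd device names are assumptions and may differ on your system:

```shell
# Sketch: print the SMART health verdict and the sector-related counters
# for each drive. /dev/sda..sdd are assumed device names -- adjust for
# your box (check with: ls /dev/sd?).
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo "=== $dev ==="
    smartctl -H "$dev"            # overall PASSED / FAILED verdict
    smartctl -A "$dev" | grep -E \
        'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
done
```

If the raw values of attributes 5 (Reallocated_Sector_Ct), 197 (Current_Pending_Sector), and 198 (Offline_Uncorrectable) are non-zero and climbing, the drives themselves are reporting bad sectors, and the filesystem choice (btrfs vs ext4) is not the cause. If they are all zero while DSM still flags the disk, suspect something upstream of the platters: cabling, the SATA controller, or power.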