XPEnology Community


Showing results for tags 'degraded'.

Found 2 results

  1. On some DSM updates my RAID ends up degraded after the post-update reboot. It does not happen on every update, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again. I know the root cause: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults. My setting of internalportcfg="0x3fc0" is reset back to internalportcfg="0xfff", and that degrades my 8-drive SHR-2 RAID because it loses 2 of the 8 disks. The reason I use internalportcfg="0x3fc0" is that I am not using the 6 SATA ports on my motherboard at all; all 8 disks are connected to my LSI 9211 controller.

     It is not a big problem in itself, since I just edit /etc.defaults/synoinfo.conf back to my setting and then run a scrub/repair of the RAID volume. That is very time consuming, though, because the volume is roughly 21 TB, and having 2 degraded disks during the repair is quite risky. It looks like this only happens when you have more than a certain number of disks and/or the attached disks start at a higher port number. In my case the first 6 motherboard SATA ports are unused, so the disk slots in use start at disk 7 and run up to disk 14 (8 disks in total). I have another server with only 4 disks, and it does not have this problem on DSM updates.

     Is there ANY way to keep a DSM update from overwriting the internalportcfg setting in /etc.defaults/synoinfo.conf with the Synology defaults, or can I put my edited file back in place BEFORE the post-update restart? (One possible approach is sketched after this list.)
  2. Hey, I'm running XPEnology DSM 6.0.2-8451 Update 11 on a self-built computer. I started out with 4 x 1 TB older Samsung drives (HD103UJ & HD103SJ) in an SHR-2/Btrfs array (SHR enabled for DS3615xs). This setup has had no issues, and I intended to expand the array with more 1 TB drives, but I decided to go with bigger drives since I had the chance. So I added a 3 TB WD Red and started expanding the volume; the plan was to replace the 1 TB drives one by one and end up with 5 x 3 TB WD Reds. The expansion went fine and so did the consistency check. Then, for some unknown reason, the newly added disk was restarted, degraded the swap system volume, then degraded the data volume, was "inserted" and "removed" (although I didn't do anything), and finally degraded the root system volume. I tried repairing the volume, but it didn't help. I shut the server down, and no new data has been written to the array since.

     Yesterday I finally had time to do something about it, so I removed the disk, wiped it completely, and re-inserted it into the XPEnology server. I also replaced the disk's SATA cable and power cable. The repair was successful, like the array expansion before it, and so was the consistency check. After that finished I started a RAID scrub. The scrub completed fine in 3:28:01, and then the same thing happened as during the expansion: the disk restarted due to an "unknown error", the volumes degraded, and disk 5 was inserted into and removed from the array. This is the situation now.

     The next step is of course to run diagnostics on that WD Red, but for some reason I don't think it's the disk that is causing this. I also have a few other WD Reds I could try, but I'd need to empty them first. If you have any inkling of what could be causing this, it'd be appreciated. (A small SMART-check sketch follows after this list.) Best regards, Darkened aka. Janne
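For context on the numbers in the first post: internalportcfg is a bitmask in which each bit marks a drive slot as internal, so 0xfff (binary 1111 1111 1111) covers slots 1-12, while 0x3fc0 (binary 0011 1111 1100 0000) covers slots 7-14; resetting it to 0xfff therefore drops slots 13 and 14, which matches the 2 disks that go missing. Below is a minimal sketch of a script that could be re-run (for example as a scheduled or boot-time task) to put the custom value back whenever an update has reset it. The file path and the two values come from the post; everything else (the script, its name, the backup step) is an assumption, not an official Synology mechanism.

    #!/usr/bin/env python
    # Hypothetical helper sketch: restore a custom internalportcfg value in
    # /etc.defaults/synoinfo.conf after a DSM update has reset it to the default.
    # Path and values are taken from the forum post above; the script itself
    # is an assumed example, not a Synology-provided tool.
    import re
    import shutil
    import sys

    SYNOINFO = "/etc.defaults/synoinfo.conf"   # file mentioned in the post
    WANTED = "0x3fc0"                          # bitmask for slots 7-14 (LSI 9211)

    def main():
        with open(SYNOINFO) as f:
            text = f.read()

        match = re.search(r'^internalportcfg="(0x[0-9a-fA-F]+)"', text, re.MULTILINE)
        if match is None:
            sys.exit("internalportcfg not found in %s" % SYNOINFO)

        current = match.group(1)
        if current.lower() == WANTED:
            print("internalportcfg already %s, nothing to do" % WANTED)
            return

        # Keep a backup before touching the file, then rewrite the single line.
        shutil.copy2(SYNOINFO, SYNOINFO + ".bak")
        patched = re.sub(r'^internalportcfg="0x[0-9a-fA-F]+"',
                         'internalportcfg="%s"' % WANTED,
                         text, count=1, flags=re.MULTILINE)
        with open(SYNOINFO, "w") as f:
            f.write(patched)
        print("internalportcfg changed from %s back to %s (backup in %s.bak)"
              % (current, WANTED, SYNOINFO))

    if __name__ == "__main__":
        main()

Since the sketch only rewrites the file when the value has actually changed, it would be harmless to run it on every boot or right before the post-update restart; it does not prevent the update from overwriting the file, it just puts the custom value back afterwards.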
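On the diagnostics step mentioned in the second post, a quick first read on the WD Red is its SMART data. The sketch below is an assumed helper, not part of DSM: the device node /dev/sde for disk 5 is a guess, and it assumes smartctl is available on the box. It prints the overall health verdict plus the attributes that most often point at a dying drive or a bad cable.

    #!/usr/bin/env python
    # Hypothetical sketch: print the SMART health summary and a few attributes
    # that commonly flag a failing drive or a cabling problem.
    # /dev/sde is an assumed device node for disk 5 -- adjust for your system.
    import subprocess

    DISK = "/dev/sde"
    ATTRS = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
             "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

    def main():
        try:
            out = subprocess.check_output(["smartctl", "-H", "-A", DISK])
        except subprocess.CalledProcessError as err:
            # smartctl uses non-zero exit bits for warnings; keep its output anyway
            out = err.output

        for line in out.decode("utf-8", "replace").splitlines():
            if "overall-health" in line or any(a in line for a in ATTRS):
                print(line.strip())

    if __name__ == "__main__":
        main()

Roughly speaking, non-zero Reallocated_Sector_Ct or Current_Pending_Sector values point at the disk itself, while a growing UDMA_CRC_Error_Count points more at cabling, which is worth distinguishing here since the SATA and power cables were already swapped.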