Search the Community

Showing results for tags 'degraded'.

Found 2 results

  1. On some DSM updates my RAID becomes degraded after the system update reboot. It does not happen on every system update, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again. I know the root cause of the problem: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults. My setting of internalportcfg="0x3fc0" is reset back to internalportcfg="0xfff", and this causes my 8-drive SHR-2 RAID to become degraded since it loses 2 of the 8 disks. The reason why I have internalportcf
  2. Hey, I'm running XPEnology DSM 6.0.2-8451 Update 11 on a self-built computer. I started out with 4 x 1 TB older Samsung drives (HD103UJ & HD103SJ). These are in an SHR2/BTRFS array (SHR enabled for DS3615xs). This setup hasn't had any issues, and I intended to expand the array with other 1 TB drives, but I decided to go with bigger drives since I had the chance to do so. So I added a 3 TB WD Red and started expanding the volume. The main goal was to replace the 1 TB drives one by one with 3 TB drives and have 5 x 3 TB WD Reds in the end. The expansion went ok and s
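
The first result above points at internalportcfg in /etc.defaults/synoinfo.conf. As far as I know, internalportcfg is a bitmask of which disk ports DSM treats as internal: 0x3fc0 sets 8 bits (ports 7-14), while the default 0xfff sets only the first 12 bits, which would explain why the two highest-numbered of the 8 disks drop out of the array. Below is a minimal sketch for re-applying such a custom value after an update; the path and the 0x3fc0 value are taken from that post, so adjust the bitmask to your own hardware before trying anything like this.

    #!/bin/sh
    # Sketch only: re-apply a custom internalportcfg after a DSM update.
    # 0x3fc0 is the poster's 8-port bitmask, not a universal default.
    CONF=/etc.defaults/synoinfo.conf
    WANTED='internalportcfg="0x3fc0"'

    # Show what the update left behind
    grep '^internalportcfg=' "$CONF"

    # Back up the file, then restore the custom bitmask if it was reset
    cp "$CONF" "$CONF.bak"
    if ! grep -q "^${WANTED}\$" "$CONF"; then
        sed -i 's/^internalportcfg=.*/internalportcfg="0x3fc0"/' "$CONF"
    fi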
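
For the second result, when swapping drives one by one in an SHR2 array it helps to confirm each rebuild has finished before pulling the next disk. A short sketch using standard Linux mdraid tools; the /dev/md2 device name is an assumption, so check /proc/mdstat for the actual data volume on your box.

    # Sketch only: watch the array state during a one-by-one drive replacement.
    cat /proc/mdstat          # e.g. [UUUU_] means one member is missing or rebuilding
    mdadm --detail /dev/md2   # shows State (clean / degraded / recovering) and rebuild progress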