XPEnology Community

Showing results for tags 'synoinfo'.

Found 2 results

  1. On some DSM updates my RAID comes up degraded after the post-update reboot. It does not happen on every update, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again. I know the root cause: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults. My setting of internalportcfg="0x3fc0" is reset back to internalportcfg="0xfff", which degrades my 8-drive SHR-2 RAID because it loses 2 of the 8 disks. The reason I use internalportcfg="0x3fc0" is that I am not using the 6 SATA ports on my motherboard at all; all 8 disks are connected to my LSI 9211 controller. It is not a big problem in itself, since I just edit /etc.defaults/synoinfo.conf back to my setting and run a scrub/repair of the RAID volume, but that is very time consuming because the volume is approx. 21TB, and having 2 degraded disks during the repair is quite risky. It looks like this only affects systems with more than a certain number of disks and/or with disks attached starting at a higher port number: in my case the first 6 motherboard SATA ports are unused, so my disks occupy slots 7 through 14 (8 disks in total). I have another server with only 4 disks, and it does not have this problem on DSM updates. Is there ANY way to keep a Synology update from overwriting the internalportcfg setting in /etc.defaults/synoinfo.conf with the Synology default values, or can I replace this file BEFORE the post-update restart?
  2. I have a DS1517+; it has 5 internal SATA ports and 2 eSATA ports. I have installed 5 hard disks and now want to add an SSD as a read cache, connected to an eSATA port, so I want to modify synoinfo.conf to turn the eSATA ports into internal SATA ports. Two files need to be changed: /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. The content is changed from internalportcfg="0x1f", esataportcfg="0x60", maxdisks="5" to internalportcfg="0x7f", esataportcfg="0x80", maxdisks="7". I have confirmed that I logged in as root, and I verified the file contents with WinSCP after making the change. But after I restart DSM, the values I changed in /etc.defaults/synoinfo.conf are all changed back. I redid it, and it changed back again. So my question is: has anyone met the same issue, and how do you fix it?
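One possible answer to the first post's closing question is a small script that re-applies the custom mask whenever DSM has reset it, run as a boot-time or scheduled task. The following is only a sketch under assumptions, not a DSM-verified solution: the demo path, the simulated default file, and the idea of hooking it into the Task Scheduler are illustrative, and the 0x3fc0/0xfff values are taken from the post.

```shell
#!/bin/sh
# Sketch (assumption, not DSM-verified): restore a custom internalportcfg
# after a DSM update has rewritten /etc.defaults/synoinfo.conf with defaults.
# Demonstrated on a throwaway copy; point CONF at the real file on a live box.
CONF="${CONF:-/tmp/synoinfo.conf.demo}"
WANTED='internalportcfg="0x3fc0"'

# Simulate the file as DSM writes it back after an update (default mask 0xfff).
printf 'internalportcfg="0xfff"\nmaxdisks="14"\n' > "$CONF"

# Re-apply the custom mask only if it is not already present.
grep -qx "$WANTED" "$CONF" || sed -i 's/^internalportcfg=.*/'"$WANTED"'/' "$CONF"

grep internalportcfg "$CONF"   # prints the restored custom mask line
```

Run early enough after boot and it would shorten the window in which DSM sees the wrong mask, though it cannot prevent the overwrite itself, so a degraded state on the first post-update boot may still need a repair pass.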
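The arithmetic behind the second post's edit can be sanity-checked in a few lines of shell. This is a sketch: the bit-per-port reading of internalportcfg/esataportcfg is the commonly described interpretation, and the poster's esataportcfg="0x80" appears to point the eSATA mask at the next unused port rather than leaving it empty.

```shell
#!/bin/sh
# internalportcfg/esataportcfg are bitmasks: bit N set means SATA port N
# belongs to that pool. Values below are the "before" values from the post.
internal=0x1f   # binary 0011111: ports 1-5 internal
esata=0x60      # binary 1100000: ports 6-7 eSATA

# Moving both eSATA ports into the internal pool is a bitwise OR:
new_internal=$(( internal | esata ))
printf 'internalportcfg="0x%x"\n' "$new_internal"   # -> internalportcfg="0x7f"

# maxdisks should match the number of set bits in the new internal mask:
maxdisks=0; m=$new_internal
while [ "$m" -gt 0 ]; do maxdisks=$(( maxdisks + (m & 1) )); m=$(( m >> 1 )); done
printf 'maxdisks="%d"\n' "$maxdisks"                # -> maxdisks="7"
```

So 0x1f | 0x60 = 0x7f and a popcount of 7 reproduce exactly the "after" values in the post, which suggests the values themselves are correct and the remaining problem is DSM reverting the file, as in the first post.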