On some DSM updates my RAID ends up degraded after the post-update reboot. It does not happen on every update, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again.
I know the root cause of the problem: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults during the update.
My setting of
internalportcfg="0x3fc0"
is reset back to
internalportcfg="0xfff"
and this causes my 8-drive SHR-2 RAID to become degraded since it loses 2 of the 8 disks.
The reason why I have internalportcfg="0x3fc0" is that I am not using the 6 SATA ports on my motherboard at all. All my 8 disks are connected to my LSI 9211 controller.
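For reference, internalportcfg is (as far as I understand it on these builds) a bitmask in which each set bit marks one drive slot as internal, with bit 0 corresponding to slot 1. Here is a quick sketch that decodes both values under that assumption:

```python
# Sketch: decode internalportcfg bitmasks, assuming bit 0 = drive slot 1.
def internal_slots(mask: int) -> list[int]:
    """Return the 1-based drive slots whose bits are set in the mask."""
    return [bit + 1 for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(internal_slots(0xFFF))   # Synology default -> slots 1..12
print(internal_slots(0x3FC0))  # my value         -> slots 7..14 (the LSI ports)
```

That would also explain why exactly two disks disappear: the default mask only reaches slot 12, so the disks sitting in slots 13 and 14 fall outside the internal range and the array comes up degraded.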
Not a big problem as such, since I just edit /etc.defaults/synoinfo.conf back to my own setting and then run a scrub/repair of my RAID volume. That repair is very time consuming, though, because the volume is approx. 21 TB, and since SHR-2 only has two-disk redundancy, rebuilding with 2 of the 8 disks missing is quite risky: one more failure during the repair would take the whole array down.
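The manual fix really is just restoring that one line. As a purely hypothetical sketch (in practice I do this by hand in vi), scripted it would look something like:

```python
# Hypothetical sketch: restore my internalportcfg value in
# /etc.defaults/synoinfo.conf after an update has reset it to the default.
import re
from pathlib import Path

conf = Path("/etc.defaults/synoinfo.conf")
text = conf.read_text()
fixed = re.sub(r'internalportcfg="0x[0-9a-fA-F]+"',
               'internalportcfg="0x3fc0"', text)
if fixed != text:
    conf.write_text(fixed)  # then reboot and repair the volume
```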
It looks like this is only a problem if one has more than a certain number of disks and/or the attached disks start at a higher port number. In my case I am not using the first 6 SATA ports on the motherboard, so my occupied disk slots run from disk 7 up to disk 14 (8 disks in total).
I have another server with only 4 disks, and that one has no such problems on Synology DSM updates.
Is there ANY way to stop the Synology update from overwriting the internalportcfg setting in /etc.defaults/synoinfo.conf with the new Synology default values, or can I replace this file BEFORE the post-update Synology restart?