XPEnology Community
  • 0

synoinfo.conf gets overwritten on system updates / RAID with 8+ disks degraded


CrazyFin

Question

Some DSM updates leave my RAID degraded after the post-update reboot. It does not happen on every update, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again.

I know the root cause of the problem: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults.

 

My setting of

internalportcfg="0x3fc0"

is reset back to

internalportcfg="0xfff"

and this causes my 8-drive SHR-2 RAID to become degraded, since it loses 2 of the 8 disks.

The reason I have internalportcfg="0x3fc0" is that I am not using the motherboard's 6 SATA ports at all; all 8 disks are connected to my LSI 9211 controller.
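For anyone who wants to sanity-check the hex value: internalportcfg is a bitmask in which bit N marks disk slot N+1 as internal. A quick way to decode it on any Linux shell (just a convenience check, bc assumed available):

printf 'obase=2; ibase=16; 3FC0\n' | bc
# -> 11111111000000  (bits 6-13 set = disk slots 7-14 internal)
# The Synology default 0xfff = 111111111111 covers only slots 1-12,
# which is why slots 13 and 14 drop out after an update.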

 

Not a big problem in principle, since I just edit /etc.defaults/synoinfo.conf back to my setting and then run a scrub/repair of my RAID volume. But that repair is very time consuming (the volume is approx. 21TB), and having 2 degraded disks during it is quite risky.
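For reference, the manual fix boils down to a one-liner; this is a sketch assuming a root SSH shell, and note that /etc/synoinfo.conf may need the same edit depending on the DSM version:

# Restore the custom bitmask after an update (path from this thread)
sed -i 's/^internalportcfg=.*/internalportcfg="0x3fc0"/' /etc.defaults/synoinfo.conf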

 

It looks like this is a problem only if one has more than a certain number of disks and/or the attached disks start at a higher port number. In my case I am not using the motherboard's first 6 SATA ports, so my occupied disk slots start at number 7 and go up to number 14 (8 disks in total).

I have another server with only 4 disks, and that one does not have these problems on DSM updates.

 

Is there ANY way to stop a Synology update from overwriting the internalportcfg setting in /etc.defaults/synoinfo.conf with new Synology default values, or can I replace the file BEFORE the post-update restart?
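One idea, strictly a sketch: a small script that re-applies the value, run by hand right after installing an update, or wired into a DSM Task Scheduler boot-up task. Whether a boot-up task runs early enough to stop the array assembling degraded is untested:

#!/bin/sh
# Re-apply the custom internalportcfg if an update has reset it.
# Paths and value are the ones from this thread.
WANT='internalportcfg="0x3fc0"'
for f in /etc.defaults/synoinfo.conf /etc/synoinfo.conf; do
    grep -q "^${WANT}\$" "$f" || sed -i "s/^internalportcfg=.*/${WANT}/" "$f"
done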

 

 


12 answers to this question


  • 1
3 hours ago, smokers said:

But basically there is no solution to get updates working without crashing the whole RAID (if you use more than 12 disks), right?

 

In theory you could change the patch file inside the extra.lzma to set the values you need. The 3615/3617 patch files do not contain sections with these values (they have the "default" of 12); the 916+ patch does patch the values from 4 to 12, so doing it with the 916+ would be easier than for 3615/3617. For 3615/3617 you would have to create a diff for this and make it part of the patch (if you know how a diff works, it's not so difficult). I was thinking of doing this last year, but it would (kind of) collide with the driver work I do.
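As a rough illustration of that diff step (file names hypothetical, not the actual extra.lzma layout):

cp synoinfo.conf synoinfo.conf.orig
sed -i 's/^internalportcfg=.*/internalportcfg="0x3fc0"/' synoinfo.conf
diff -u synoinfo.conf.orig synoinfo.conf > synoinfo.patch
# synoinfo.patch would then be bundled into the patch inside extra.lzma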

Taking it further, it would be possible to define a value in grub.conf, use it as the number of disks, and adjust the patch according to this value, but that's beyond what I will do for now. AFAIK quicknick did this already in his loader (should he not release it, then maybe I will do something in this direction).


  • 1

Wohoooo! After almost giving up on disabling the motherboard SATA ports so that the 8 ports on my LSI card would start numbering at 1, I finally found it!!

 

Stupid BIOS setup: one has to FIRST set the SATA ports to IDE and then, voilà, a submenu where the SATA ports can be disabled suddenly shows up! This is NOT visible if you have SATA configured as RAID or AHCI!?

See this screenshot: [ASUS P7F-X BIOS: disable onboard SATA ports]

 

After changing my internalportcfg setting from 0x3fc0 to 0x00ff everything works fine: all 8 disks connected to my add-on LSI card are now detected as drives numbered 1-8, and I will no longer lose disks on an XPEnology/Synology system update.... Aaaahhh, finally! :-)
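Same quick decode for the new value:

printf 'obase=2; ibase=16; FF\n' | bc
# -> 11111111  (bits 0-7 set = disk slots 1-8 internal,
#    safely inside the 12-slot range the Synology defaults assume)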

 

Edited by CrazyFin

  • 0

I'm running 20- and 24-HDD systems. In my experience synoinfo.conf gets overwritten (and set back to 12 drives) on a change of boot loader, and the only thing to do is edit the file after the first boot (degraded/crashed volume), then allow RAID reassembly/file system check. My systems have always recovered OK, but I've backed up just in case.

Why don't you disable the onboard SATA ports so that the LSI is the only disk controller? That way you will always be within the standard 12-drive setup.


  • 0
On 2017-07-15 at 3:19 PM, sbv3000 said:

Why don't you disable the onboard SATA ports so that the LSI is the only disk controller? That way you will always be within the standard 12-drive setup.

 

Yepp, that's my next step to test. In fact I thought I had already disabled the SATA controller in the BIOS setup, and on a reboot I can see that it is indeed disabled, but I found another BIOS setting that I will try on the next reboot (as soon as my RAID repair is done).

My motherboard, an ASUS P7F-X server board, has 6 SATA ports in total; the SATA chip is an Intel® 3420 with 6 SATA2 300 MB/s ports.

 

I have tried searching the forum for posts that could help, but none of them discusses the SataPortMap setting in the GRUB file together with the internalportcfg setting in /etc.defaults/synoinfo.conf when one disables the onboard SATA ports completely and only uses, for example, an LSI 9211 controller attached to the motherboard.

 

As soon as my RAID repair is done (approx. 60% at the moment) I'll do some new testing with different BIOS, SataPortMap and internalportcfg settings.


  • 0

From the threads I've seen, the SataPortMap setting should only need editing with multiple controllers, and only if the default does not work. So if you only have the LSI and the onboard ports are disabled, you should be able to leave that setting at its default (1).
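For illustration, the relevant fragment of a Jun-style grub.cfg with SataPortMap at that default; treat this as a sketch, since the exact surrounding arguments vary by loader version:

set sata_args='DiskIdxMap=0C SataPortMap=1'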

 

There have been a few posts about editing synoinfo.conf to go beyond 12 drives, and that's where the binary/hex workings come in. But again, I'd have thought that with under 12 drives and one controller with no more channels than that, everything should be stable.
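For anyone digging up those posts, the usual beyond-12-drive edits to synoinfo.conf look roughly like this (hedged example values for a 24-bay build, not specific to the hardware in this thread):

maxdisks="24"
internalportcfg="0xffffff"    # 24 bits set = slots 1-24 internal
esataportcfg="0x0"            # often zeroed so no slots are reserved for eSATA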

 

Dumb thought, but looking at your board: some variants have an additional Marvell controller, best check that's disabled too if you have that model :smile:

 


  • 0
On 2017-07-16 at 11:36 PM, sbv3000 said:

From the threads I've seen, the SataPortMap setting should only need editing with multiple controllers, and only if the default does not work. So if you only have the LSI and the onboard ports are disabled, you should be able to leave that setting at its default (1).

 

Dumb thought, but looking at your board: some variants have an additional Marvell controller, best check that's disabled too if you have that model :smile:

 

Alright, I have now tested various settings, and the only SATA chip I can completely disable in the BIOS is the Marvell one (for 4 drives); the main Intel SATA chip cannot be disabled... :-(

So this means the XPEnology loader will always find the first 6 SATA ports and then my LSI 9211 card, i.e. my RAID group will always start at disk no. 7 and go up to no. 14, no matter what I try.

Well, I can live with this now that I know why and when it happens. My only solution would be to try another motherboard where I can actually turn off the onboard SATA controllers.


  • 0

I found a copy of your mobo manual and couldn't see an option to disable it either; that's weird.

 

This might be another dumb thought, but before an upgrade, why not connect drives 13-14 to two of the onboard ports, do the upgrade/edit synoinfo.conf, then swap back?

That might save you time on rebuilds and reduce risk, even though it's a pain to mess with the hardware each time.


  • 0
23 minutes ago, sbv3000 said:

This might be another dumb thought, but before an upgrade, why not connect drives 13-14 to two of the onboard ports, do the upgrade/edit synoinfo.conf, then swap back?

That might save you time on rebuilds and reduce risk, even though it's a pain to mess with the hardware each time.

 

Ah, good idea! It hadn't occurred to me that the RAID group would still work if I moved drives 13 and 14 away from the LSI 9211 controller and connected them to onboard SATA ports.

 

If it works, it is a much better workaround than doing a 16-20 hour rebuild/repair after every update. Thanks for the tip, will try it out.


  • 0

So nice you found a solution :)

 

 

But basically there is no solution to get updates working without crashing the whole RAID (if you use more than 12 disks), right?

 

Isn't there an option to use file permissions to deny the overwrite, or to change the .pat file so that the included synoinfo.conf is modified?! :-/

