Xpenology not saving config for expanded drives-1.0.3b_6.2.3-25426



I've been using Xpenology 3617xs on a Norco 4224 chassis for almost 2 weeks now, learning as I go.

Originally I did the drive-expansion use case, editing the max disks, esata, usb and internal device settings, with no problems.

(I edited synoinfo.conf based on this thread:

It's been working as well as I can expect it to the entire time.

I powered it off last night for some maintenance, and when I powered it on this morning it was no longer using the settings I had originally set. It's also not keeping the settings I re-enter, so it keeps defaulting to 12 total usable drives.

I have 24 drives installed, plus an SSD connected to the motherboard as an SSD cache.

Nothing I do seems to keep the config. Upon reboot the system is unresponsive, ends up rebooting again, and requires reinstallation over and over.

Is there any point in trying to resolve this? (I have all my original data on other NAS devices and had only started copying data to this box.)

 

Thank you

 

SP_crash2.png

SP_crash1.png

synoinfo1.conf

Edited by primant

You are probably editing files in /etc

 

The files that the system uses for run-time behavior are in /etc

 

At boot time, files are overwritten in /etc from /etc.defaults

 

So:

  • if you want changes to take place on reboot, edit /etc.defaults
  • if you want changes to take place now, edit /etc
  • if you want changes now and reboot persistence, edit both
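The three cases above can be sketched as a single edit applied to both copies. This is a demonstration on mock files in a temp directory, not a command to paste onto a live box; on a real DSM system the targets would be /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, edited as root.

```shell
# Demonstration on mock files; on a real DSM box the targets are
# /etc/synoinfo.conf (run-time) and /etc.defaults/synoinfo.conf (boot-time).
tmp=$(mktemp -d)
mkdir -p "$tmp/etc" "$tmp/etc.defaults"
printf 'maxdisks="12"\n' > "$tmp/etc/synoinfo.conf"
printf 'maxdisks="12"\n' > "$tmp/etc.defaults/synoinfo.conf"

# Edit BOTH copies so the change takes effect now AND survives a reboot.
for f in "$tmp/etc/synoinfo.conf" "$tmp/etc.defaults/synoinfo.conf"; do
    sed -i 's/^maxdisks=.*/maxdisks="24"/' "$f"
done

grep -H maxdisks "$tmp/etc/synoinfo.conf" "$tmp/etc.defaults/synoinfo.conf"
```

Editing only /etc gives you a change that vanishes at the next boot; editing only /etc.defaults gives you a change that appears only after the next boot.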

All bets are off at upgrade; many items are reset to original system values in /etc.defaults during an upgrade.

Edited by flyride

I definitely have been editing the /etc.defaults synoinfo.conf file.

What I see happening after editing that and rebooting is the Jun bootloader screen; I cannot connect to the Xpenology box, and I cannot discover it through the app or find.synology.com. It then reboots and resets itself to a default config.

dmesg.txt

I'm including a copy of the DMESG file.

The system sees all the drives; it just ends up resetting itself after the initial configuration and rebooting.

 

I'll add to this:

I edit the synoinfo.conf files in both /etc and /etc.defaults, save, and reboot.

The system boots to the Jun loader. It sits there doing something for a few minutes (during which I cannot access the system through the app or find.syno*.com). It then reboots and falls back to a default config. I can then discover it, reinstall it, and we're back at square one again.

Edited by primant
18 hours ago, primant said:

I have 24 drives installed plus a ssd connected to the motherboard as a SSD cache.

 

your synoinfo.conf is not set for 24 disks at all

maxdisks="45"

internalportcfg="0x1FFFFFFFFFFF"
usbportcfg="0x60000000000000"
esataportcfg="0x1FE00000000000"
 

that would be 24 disks

maxdisks="24"

internalportcfg="0xFFFFFF"

 

from dmesg

...

[   20.544967] md: sdz2 has different UUID to sdg1
[   20.544969] md: sdaa2 has different UUID to sdg1
[   20.544972] md: sdab2 has different UUID to sdg1

 

That's usually where the problems are, as these devices are above 24 disks. AFAIR it's OK as long as the names are still sd[a-z]; that's why 24 disks is the safe value.
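For reference, once Linux runs past sdz it keeps naming disks by doubling letters (sdaa, sdab, ...), which is exactly what those dmesg lines show. A small illustration of the naming scheme:

```python
import string

def sd_name(index):
    """0-based disk index -> Linux sd device name (sda ... sdz, sdaa, sdab, ...)."""
    letters = string.ascii_lowercase
    if index < 26:
        return "sd" + letters[index]
    return "sd" + letters[index // 26 - 1] + letters[index % 26]

# Disks 25 through 28 (0-based indices 24-27):
print([sd_name(i) for i in range(24, 28)])  # ['sdy', 'sdz', 'sdaa', 'sdab']
```

So seeing sdaa2 and sdab2 in dmesg means the system enumerated more than 26 sd devices, i.e. well past the safe 24-disk range.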

 

[   54.639321] Get flashcache access md error, return error code
[   54.639330] device-mapper: flashcache: flashcache_io_callback: io error -5 block 62358161280 action 8
[   54.648548] device-mapper: flashcache: flashcache_io_callback: switching /dev/md3 to BYPASS mode
[   54.684575] Get flashcache access md error, return error code
[   54.684591] Get flashcache access md error, return error code

...

I guess that's also not supposed to happen.

 

 

I'd suggest counting your disks and ports and connecting everything so that it "lines up from below" and stays below 26 (sda to sdz).

If it was just one SSD, then it's only a read cache and it's not needed; connect it as the last drive in the row.

First come the 6 internal ports, and then the LSI SAS ports.

Then set the config to 24 as seen above and reboot; you should be able to get back all 24 drives of your RAID. You might "lose" the SSD cache, but it should give you back access to your data.

If the cache is the 25th drive and does not work, then you can experiment if you want to squeeze it in.
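If you do experiment with squeezing extra slots in, the one hard rule is that the three masks must cover distinct bits. A sketch of that sanity check (the esata/usb slot counts here are made-up placeholders, not real DS3617xs values):

```python
maxdisks = 24
internalportcfg = (1 << 24) - 1        # slots 0-23 -> 0xFFFFFF
esataportcfg = ((1 << 4) - 1) << 24    # 4 hypothetical esata slots above them
usbportcfg = ((1 << 2) - 1) << 28      # 2 hypothetical usb slots above those

# Sanity check: the three masks must never overlap.
assert internalportcfg & esataportcfg == 0
assert internalportcfg & usbportcfg == 0
assert esataportcfg & usbportcfg == 0

print(hex(internalportcfg), hex(esataportcfg), hex(usbportcfg))
```

Overlapping masks (or a maxdisks value that disagrees with the internal mask, as in the attached config) are exactly the kind of inconsistency that can make DSM fall back to defaults.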

 

 

edit:

maybe read a little in this thread about it

https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=77932

 

 

 

Edited by IG-88
  • 2 weeks later...
On 11/10/2020 at 3:50 PM, IG-88 said:

Thank you for the help!

