Schnapps

DSM 5.0-4493 Update 4


Hi guys,

Just wanted to tell you that after applying Update 4 for 4493 I have some strange things going on.

Here are some messages I see in the Notifications screen:

[screenshot: Notifications]

Here is the System Info overview:

[screenshot: System Info]

 

Everything seems to work, but that's not quite the case.

 

Here is the tail of the messages log:

Aug 26 21:21:23 NAS dnsdsm: Check failed. H
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_get.c:115 Failed to open /dev/sda, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_enum.c:42 Failed to get disk information sda
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_get.c:115 Failed to open /dev/sdb, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_enum.c:42 Failed to get disk information sdb
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_get.c:115 Failed to open /dev/sdc, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_enum.c:42 Failed to get disk information sdc
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_get.c:115 Failed to open /dev/sde, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_enum.c:42 Failed to get disk information sde
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_get.c:115 Failed to open /dev/sdf, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_info_enum.c:42 Failed to get disk information sdf
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sdf5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sde5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sdb5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sda5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: space_minimal_disk_size_get.c:155 Failed to get partition info of [/dev/sdf5]
Aug 26 21:21:23 NAS storagehandler.cgi: space_minimal_disk_size_get.c:155 Failed to get partition info of [/dev/sdf5]
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sdf5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sde5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sdb5, errno=No such file or directory
Aug 26 21:21:23 NAS storagehandler.cgi: disk_size_get.c:42 Failed to open /dev/sda5, errno=No such file or directory
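Since every failure above is "No such file or directory" on a /dev/sdX node, a quick shell-side sanity check is to test whether those block-device nodes actually exist. A minimal sketch (the disk letters are taken from the log above; adjust the list for your own layout):

```shell
# Check whether the block-device nodes the log complains about exist.
# sda..sdf are the disks named in the log above; edit the list for your box.
for dev in sda sdb sdc sde sdf; do
    if [ -b "/dev/$dev" ]; then
        echo "/dev/$dev: present"
    else
        echo "/dev/$dev: MISSING"
    fi
done
```

If the nodes are missing, the kernel never (re)created them after the update, which would match the disks vanishing from Storage Manager until a reboot.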

 

Some other things could be missing, but I haven't finished testing.

 

Any advice?


Everything seems to be working fine after 3 reboots :smile:

Yay!

 

Here is the tail of the messages log:

 

NAS> tail -f /var/log/messages
Aug 27 00:23:50 NAS LogViewer.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:50 NAS CurConn.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:50 NAS task.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:50 NAS synosyslog.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:54 NAS storagehandler.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:58 NAS entry.cgi_SYNO.Backup.Task[1].list[14853]: login.c (1453) Token Authentication Fail.
Aug 27 00:24:01 NAS entry.cgi_SYNO.Core.System.Utilization[1].get[14859]: login.c (1453) Token Authentication Fail.
Aug 27 00:24:03 NAS dsmnotify.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:35:03 NAS entry.cgi_SYNO.Backup.Task[1].list[25649]: login.c (1453) Token Authentication Fail.
Aug 27 00:35:04 NAS entry.cgi_SYNO.Core.System.Utilization[1].get[25655]: login.c (1453) Token Authentication Fail.
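Those "Token Authentication Fail" lines come from several different CGI services, so it can help to count them rather than watch them scroll by. The sketch below greps a small sample file so it is self-contained; on the NAS you would point grep at /var/log/messages itself:

```shell
# Count token-authentication failures in a log.
# The here-doc stands in for /var/log/messages so the example is runnable.
cat <<'EOF' > /tmp/messages.sample
Aug 27 00:23:50 NAS LogViewer.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:50 NAS task.cgi: login.c (1453) Token Authentication Fail.
Aug 27 00:23:54 NAS storagehandler.cgi: login.c (1453) Token Authentication Fail.
EOF
grep -c "Token Authentication Fail" /tmp/messages.sample   # prints 3
```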


Hi all,

I got the same problem as Schnapps.

After 3 reboots everything came back.

What happened?!


Just read about Schnapps's issue, so I checked my server: the volume was showing OK, but my 4 2TB drives weren't showing in Storage Manager. Rebooted a couple of times and everything appears to be OK now.


I had the same issues: no more disks after Update 4.

So I went and reinstalled from scratch, applied Update 3, and left it that way.

I'll just wait until they sort that bug out...


Similar issue here... all of a sudden I got a notice that an abnormality was detected and the volume(s) were unmounted. I searched around, but the UI was not showing my disks, so I rebooted. When it came back, it looked like it had 5 fresh disks, but no volume.

 

This is a VM on my ESXi machine that I'm playing with so no harm done, but it is concerning. I have just now created a new volume to see if it happens again.


Same issue here on VM-based DSM. It is running fine now, but I am holding off on upgrading my bare-metal DSM. Did you download the update from Synology, or the bromolow build from elsewhere?

 

...time rolls on....

 

Updated my bare-metal DSM system:

- Downloaded from Synology
- Ran the script
- Applied the patch

Waited a painfully long time (almost another 10 minutes beyond the 10-minute countdown circle of hope) for the system to come ready. I was almost going to do a hard power-off, but I could hear disk activity, so I let it go.

Eventually I could connect; all volumes and data were alive and well.



I downloaded from Synology and ran the sed and mv commands before installing. My volume is likewise running fine right now. I only rebooted once when it came back up and told me there were no volumes. Some folks I've read said their volumes came back after 2-3 reboots. No idea if that would have fixed my issue or not.


Same "problem" here...

 

On the first reboot (the automatic one after the update was applied) everything was OK.

After that I did a manual reboot and my volumes were "gone".

Rebooted again and now they are back again...

 

Also Webstation was disabled and I needed to enable it manually.

> Also Webstation was disabled and I needed to enable it manually.

 

Thanks for the hint!

Mine was also disabled :razz:


No probs Schnapps! :grin:

 

Another problem I have after the update...

All rights have been reset on my Photo Station folders.

I use Photo Station accounts (instead of DSM accounts) in Photo Station.

All photo folders are reset and do not contain any user rights. So I have to go through all photo folders and set the user rights again...

 

133 folders!! :shock::shock:

> Similar issue here... all of a sudden I get a notice that an abnormality was detected and the volume(s) were unmounted. I searched around but it was not showing my disks, so I rebooted. When it came back, it looked like it had 5 fresh disks, but no volume.
>
> This is a VM on my ESXi machine that I'm playing with so no harm done, but it is concerning. I have just now created a new volume to see if it happens again.

 

I just finally got around to updating mine and had the exact same warning on my first boot. Rebooting now... I hope my volume comes back. Worst case, I'd need to redo everything from a backup.

 

For a few minutes I thought they were gone on the 2nd reboot, but it seems it hung on the reboot and was still stuck in the previous state. I had to force a reboot from the ESXi VM console and saw my volume as soon as it booted. Hope it stays that way :smile:

> Similar issue here... all of a sudden I get a notice that an abnormality was detected and the volume(s) were unmounted. I searched around but it was not showing my disks, so I rebooted. When it came back, it looked like it had 5 fresh disks, but no volume.
>
> This is a VM on my ESXi machine that I'm playing with so no harm done, but it is concerning. I have just now created a new volume to see if it happens again.
>
> I just finally got around to updating mine and had the same exact warning on my first boot.. rebooting now... i hope my volume comes back. worst case I'd need to redo everything from a backup.
>
> For a few minutes I thought they were gone on 2nd reboot, but it seems it hung on the reboot and was still stuck in previous state. had to force a reboot from esxi VM console and saw my volume soon as it booted.. hope it stays that way :smile:

 

Good luck. Please let me know if it doesn't :smile:. Thx


Strange:

- Downloaded update

- sed'd

- Installed update

- System won't come back. Power cycle.

- System works fine, although 1 drive is giving a SMART error.

 

No problems besides the SMART error.

I may have been lucky? :mrgreen:
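For anyone chasing a SMART error like that one: if smartmontools is available on the box, it can show which attribute actually tripped. A sketch under assumptions (the device name /dev/sda is a placeholder; substitute whichever disk DSM flagged):

```shell
# Dump the flagged drive's SMART health summary and key attributes.
# NOTE: /dev/sda is an assumed device name -- substitute the disk that
# DSM's Storage Manager marked as failing.
if command -v smartctl >/dev/null 2>&1; then
    smartctl -H -A /dev/sda 2>&1 | grep -i -E "overall-health|Reallocated_Sector|Current_Pending" || true
else
    echo "smartmontools not installed on this box"
fi
```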

 

EDIT: Can't remember if it was u4 or u5 though...
