HDDs fail to hibernate after upgrade from 6.2.2 to 6.2.3



Unfortunately I am using extra.lzma (0.11) and hibernation does not work. I think I'm going to mount my disks under a Linux distribution and do:

dd if=/dev/zero of=/dev/sda1

dd if=/dev/zero of=/dev/sdb1

dd if=/dev/zero of=/dev/sdc1

...

You get the idea.

Basically I'm going to wipe the DSM partition and make a clean install. This won't make you lose data (I already did this in the past), just all the configs... But anyhow, backing up data first wouldn't be a bad idea.
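Before zeroing anything, it's worth double-checking which partitions actually hold the DSM system. The sketch below is my own illustration, not part of the original plan; the ~2.4 GB size and md0 membership are what I'd expect on a stock install, so verify against your own layout first.

```shell
# Sanity check before running dd: on DSM the system typically lives on the
# small (~2.4 GB) sdX1 partitions, which are members of the md0 RAID1.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || fdisk -l 2>/dev/null
cat /proc/mdstat 2>/dev/null || echo "no mdstat on this machine"
CHECKED="layout check done"
echo "$CHECKED"
```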

If anyone has a better approach, please let me know!

Thanks!

 

 

3 hours ago, powerplyer said:

Anyone having HDD hibernation issues find a workaround/fix? Downgrading seems to be the only solution, but I think going back to 6.2.2 would wipe all my data. Thanks again to the people who have contributed. This weekend I plan on updating my 6.2.3 to IG-88's extra.lzma (.11), but am not very hopeful.

On 8/13/2020 at 1:48 PM, flyride said:

 

If continuously writing spurious errors to log files is in fact the reason hibernation can't occur, there are two fairly feasible solutions: 1) repoint scemd.log to a ramdisk, or 2) adapt the log filter that I posted for SMART error suppression. Anyway, take a look and see if it can help.

 

https://xpenology.com/forum/topic/29581-suppress-virtual-disk-smart-errors-from-varlogmessages/
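Option 1 (repointing scemd.log to a ramdisk) could look something like the sketch below. This is an illustration, not flyride's exact method: it uses a scratch directory as a stand-in for /var/log so it can be tried safely, and assumes /dev/shm is a tmpfs mount (anything logged there is lost on every reboot).

```shell
# Sketch: make scemd.log a symlink into tmpfs so the spurious writes land
# in RAM instead of spinning up the disks.
LOGDIR=$(mktemp -d)            # stand-in for /var/log on a real NAS
RAMLOG=/dev/shm/scemd.log      # /dev/shm is tmpfs on stock Linux
touch "$RAMLOG"
ln -s "$RAMLOG" "$LOGDIR/scemd.log"
echo "spurious error" >> "$LOGDIR/scemd.log"   # write passes through to RAM
readlink "$LOGDIR/scemd.log"                   # -> /dev/shm/scemd.log
```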

 

 

Dear flyride,

 

Thanks for your info; I need your further help. In order to suppress this log pattern:

2020-08-12T10:30:57 nas scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T10:30:57 nas scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T10:30:57 nas scemd: polling_fan_speed_rpm.c:35 ioctl device failed

I created two files:

1) /usr/local/etc/syslog-ng/patterndb.d/scemd.conf

# /usr/local/etc/syslog-ng/patterndb.d/scemd.conf
# scemd.log to suppress ioctl device failed since DSM 6.2.3

filter fs_pollingsysthermal { match("scemd: polling_sys_thermal\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_pollingsysvoltage { match("scemd: polling_sys_voltage\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_pollingfanspeedrpm { match("scemd: polling_fan_speed_rpm\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_ioctldevicefailed { match("ioctl device failed" value("MESSAGE")); };

filter f_allioctlfailmsgs { filter(fs_ioctldevicefailed) or filter(fs_pollingsysthermal) or filter(fs_pollingsysvoltage) or filter(fs_pollingfanspeedrpm); };
log { source(src); filter(f_allioctlfailmsgs); };
2) /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/scemd

and not filter(f_allioctlfailmsgs)

Then I restarted the service with your command:

# synoservice --restart syslog-ng

But it doesn't work, and I don't know what I got wrong. Can you point out what I did wrong and how to fix it? Thanks a lot!

Edited by e-ghost
11 hours ago, e-ghost said:

But it doesn't work, and I don't know what I got wrong. Can you point out what I did wrong and how to fix it? Thanks a lot!

 

There's quite a lot going on here.

filter fs_pollingsysthermal { match("scemd: polling_sys_thermal\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_pollingsysvoltage { match("scemd: polling_sys_voltage\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_pollingfanspeedrpm { match("scemd: polling_fan_speed_rpm\.c:35 ioctl device failed" value("MESSAGE")); };

The matches won't work, as "scemd:" isn't present in the source data; it gets added by syslog-ng, along with the timestamp, when the log file is written out. At a minimum, that prefix needs to be removed from the match statements. Better yet, all of these log instances can be met with one regular expression: "polling_.*ioctl\ device\ failed$"
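A quick way to sanity-check that expression before touching any config is to run the raw messages (without the "scemd:" prefix) through grep's extended regex mode. This is only an approximation of syslog-ng's match() semantics, but it catches obvious mistakes; note that the backslash-escaped spaces from the config are plain spaces here.

```shell
# Feed the three raw MESSAGE strings through the proposed pattern;
# all three should match.
MATCHES=$(printf '%s\n' \
  'polling_sys_thermal.c:35 ioctl device failed' \
  'polling_sys_voltage.c:35 ioctl device failed' \
  'polling_fan_speed_rpm.c:35 ioctl device failed' \
  | grep -cE 'polling_.*ioctl device failed$')
echo "$MATCHES"    # -> 3
```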

 

11 hours ago, e-ghost said:

I created two files:

1) /usr/local/etc/syslog-ng/patterndb.d/scemd.conf

2) /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/scemd

 

These configuration files affect the syslog messages that Synology delivers to /var/log/messages. The log you are trying to affect is /var/log/scemd.log, and unfortunately there is no extensible user configuration for it. The file that controls /var/log/scemd.log is /etc.defaults/syslog-ng/patterndb.d/scemd.conf, which must be edited directly in order to make the change you want.

 

You will be inclined to make a backup of this file in the same directory, but don't; store a backup elsewhere. Regardless of how it's named, if it's in the directory, the parser will grab it and try to execute it, which will either not do what you intend or crash syslog-ng.

 

The other consequence of modifying a file in this location is that it may be overwritten by a DSM upgrade.  If that happens, you'll need to verify that the new file's function is the same, and then reapply the edits below, making any required adjustments:

 

Original /etc.defaults/syslog-ng/patterndb.d/scemd.conf from DS3617xs DSM 6.2.3

filter f_scemd { program(scemd);  };
filter f_scemd_sev { level(err..emerg) };
destination d_scemd { file("/var/log/scemd.log"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };

Example modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf to suppress the desired error messages

# begin scemd.conf patch to suppress ioctl messages
filter fs_scemd { program(scemd);  };
filter fs_scemd_ioctl { match("polling_.*ioctl\ device\ failed$" value("MESSAGE")); };
filter f_scemd { filter(fs_scemd) and not filter(fs_scemd_ioctl); };
# end scemd.conf patch

filter f_scemd_sev { level(err..emerg) };
destination d_scemd { file("/var/log/scemd.log"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };

synoservice --restart syslog-ng does not always implement changes to the files in /etc.defaults/syslog-ng (this may be because they are copied to /etc before execution; I've found that sometimes it works and sometimes it doesn't). To be safe, restart the NAS so that syslog-ng is completely reloaded and the file structure is reinitialized. You should see some initialization events in the log post-boot if everything is working okay.
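After the reboot, one way to confirm the filter took effect is to watch whether scemd.log keeps growing (it was gaining entries every few seconds before). A rough sketch, comparing the file size across a short idle window; a missing file counts as size 0 here:

```shell
# If the filter works, scemd.log's size should be unchanged over the window.
LOG=/var/log/scemd.log
before=$(stat -c %s "$LOG" 2>/dev/null || echo 0)
sleep 10          # the spurious errors used to arrive every ~6 seconds
after=$(stat -c %s "$LOG" 2>/dev/null || echo 0)
if [ "$after" -eq "$before" ]; then
  echo "no new writes to scemd.log"
else
  echo "scemd.log grew by $((after - before)) bytes"
fi
```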

Edited by flyride
12 hours ago, flyride said:

Example modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf to suppress the desired error messages

Hi @flyride, thanks a lot for your guide! I got these 3 logs suppressed!

Unfortunately, the HDDs are still not hibernating. 😬

On 8/12/2020 at 3:54 AM, richv31 said:

 

Not to that particular version, but to the two immediately prior. You are welcome to look at the errors that log every 6 seconds in /var/log/scemd.log... Most folks on the forum don't care about HDD hibernation, which is why it has not been reported more widely. But yes, it is also busted in u2. It still works correctly in 1.04b/918+ 6.2.3 u2; however, you need to switch to AHCI-based controllers like the JMB585-based ones, as SAS/LSI does not work correctly on 918+ for hibernation either.

 

Hi @richv31, I have got the ioctl errors suppressed from scemd.log with great help from flyride. However, I found that HDD hibernation is still not working as before. Would you have further advice on this? Thanks a lot!

7 hours ago, e-ghost said:

 

Hi @richv31, I have got the ioctl errors suppressed from scemd.log with great help from flyride. However, I found that HDD hibernation is still not working as before. Would you have further advice on this? Thanks a lot!

Sorry man, no idea where else to go from here. I either reverted to 6.2.2 or moved to 1.04b. Maybe see what is generating load on the system, but I have no idea how....


Hi @richv31, will any WRITE action to /var/log stop HDD hibernation? And may I ask where the hibernation log is stored in DSM?

 

I took some time to test it. I turned the NAS on at 23:00 last night and let it idle, then made an SSH login at 04:34 and found log activity like this:

drwxr-xr-x  2 root   root   4096 Aug 20 20:20 disk-latency
drwxr-xr-x 17 root   root   4096 Aug 20 23:01 ..
-rw-rw----  1 system log    1917 Aug 20 23:01 apparmor.log
-rw-rw-rw-  1 root   root   2269 Aug 20 23:01 space_operation_error.log
-rw-rw----  1 system log    2391 Aug 20 23:02 datascrubbing.log
-rw-rw----  1 system log  171881 Aug 20 23:02 kern.log
-rw-rw----  1 system log    9811 Aug 20 23:02 postgresql.log
-rw-rw----  1 system log  202434 Aug 20 23:02 disk.log
-rw-r--r--  1 root   root   1218 Aug 20 23:02 synocmsclient.log
-rw-r--r--  1 root   root 509695 Aug 20 23:02 dmesg
-rw-rw----  1 system log   29554 Aug 20 23:02 syslog.log
-rw-rw----  1 system log  146347 Aug 20 23:02 synocrond.log
-rw-r--r--  1 root   root   2124 Aug 20 23:02 esynoscheduler.log
-rw-r--r--  1 root   root   4227 Aug 20 23:02 disk_overview.xml
-rw-rw----  1 system log     757 Aug 20 23:02 sysnotify.log
-rw-rw----  1 system log  176371 Aug 20 23:02 scemd.log
-rw-r--r--  1 root   root  19650 Aug 20 23:02 synopkg.log
drwxr-xr-x  2 root   root   4096 Aug 21 00:01 diskprediction
-rw-rw----  1 system log   33933 Aug 21 03:13 iscsi.log
-rw-rw----  1 system log  109788 Aug 21 04:05 rm.log
-rw-rw----  1 system log  614237 Aug 21 04:05 synoservice.log
-rw-rw----  1 system log   82380 Aug 21 04:05 scsi_plugin.log
-rw-r--r--  1 root   root   4104 Aug 21 04:09 synocrond-execute.log
-rw-rw----  1 system log  627685 Aug 21 04:34 messages
drwx------  2 system log    4096 Aug 21 04:35 synolog
-rw-r-----  1 root   log   15463 Aug 21 04:35 auth.log
-rw-rw----  1 system log   22591 Aug 21 04:35 bash_history.log

I think all the log writes on Aug 20 are normal, due to system startup. The last 4, from 04:34 on, were caused by my SSH login. Those in between are unknown....
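For tracking down the in-between writers, one technique worth trying is the kernel's block_dump facility, which logs the PID and file behind every block write. This is my suggestion rather than anything from the thread, and it assumes the DSM kernel still has block_dump (it was only removed from mainline Linux in 5.12, so DSM 6.2's 3.10/4.4-era kernels should qualify); it also needs root.

```shell
# Turn on write logging for a short window, then see what dirtied the disks.
BD=/proc/sys/vm/block_dump
if [ -w "$BD" ]; then
  echo 1 > "$BD"                    # kernel logs each block write with its PID
  sleep 10                          # leave the box idle for the window
  echo 0 > "$BD"                    # turn it off again; it is chatty
  dmesg | grep "dirtied inode" | tail -n 20
else
  echo "block_dump not writable here (need root and an older kernel)"
fi
```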

 

Thanks a lot!

 

 


I need to get HDD hibernation working again, as the HDD temperature is high here. Is downgrading the only method at this point? If so, may I ask how I can downgrade back to DSM 6.2.2 with all my data preserved? Thanks a lot!


Search the forum; you will need to delete certain partitions from each HDD. Or you can migrate to 918+/1.04b and then back to 1.03b/6.2.2. You will likely lose all your settings, but data should be intact.

  • 2 weeks later...
  • 1 month later...

A simple and dirty fix based on what was written above. Modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf:

 

filter f_scemd { program(scemd);  };
filter f_scemd_sev { level(err..emerg) };
# destination d_scemd { file("/var/log/scemd.log"); };
destination d_scemd { file("/dev/null"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };

 

Edit the file and reboot. Result: HDD hibernation works fine now...

 

My config: DS3617xs, DSM 6.2.3-25426-U2, extra.lzma v0.11.2_test.

  • 2 weeks later...
On 10/15/2020 at 6:12 AM, vasiliy_gr said:

A simple and dirty fix based on what was written above. Modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf:

 


filter f_scemd { program(scemd);  };
filter f_scemd_sev { level(err..emerg) };
# destination d_scemd { file("/var/log/scemd.log"); };
destination d_scemd { file("/dev/null"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };

 

Edit the file and reboot. Result: HDD hibernation works fine now...

 

My config: DS3617xs, DSM 6.2.3-25426-U2, extra.lzma v0.11.2_test.

Thanks for the findings; it's also working for me.

 

DS3615xs, DSM 6.2.3 Update 2, no extra.lzma, just the one coming within the loader img file.

  • 3 weeks later...

I read that here in the thread and it worked on my 3617 test system.

Modify the following file:

/etc.defaults/syslog-ng/patterndb.d/scemd.conf


 

destination d_scemd { file("/var/log/scemd.log"); };

->

#destination d_scemd { file("/var/log/scemd.log"); };

destination d_scemd { file("/dev/null"); };

 

So you comment out the original line with a "#" and add a new one that sends scemd's log data to /dev/null instead of writing it to disk.
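The edit above can also be scripted. The sketch below works on a scratch copy of the stock file so it can be tried safely first; on the NAS, CONF would be /etc.defaults/syslog-ng/patterndb.d/scemd.conf, and per flyride's earlier warning, any backup must live outside patterndb.d.

```shell
# Demonstrate the comment-out-and-redirect edit on a throwaway copy.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
filter f_scemd { program(scemd);  };
filter f_scemd_sev { level(err..emerg) };
destination d_scemd { file("/var/log/scemd.log"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };
EOF
# comment out the disk destination, then append a /dev/null one after it
sed -i -e 's|^destination d_scemd|#&|' \
       -e '/^#destination d_scemd/a destination d_scemd { file("/dev/null"); };' \
       "$CONF"
grep 'd_scemd {' "$CONF"
```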

 

Edited by IG-88

My variant of the fix for /etc.defaults/syslog-ng/patterndb.d/scemd.conf.

Works fine on a Microserver N54L, DSM 6.2.3-25426 Update 2.

# begin scemd.conf patch to suppress ioctl messages
filter f_scemd_ioctl { program(scemd) and match("polling_.*ioctl\ device\ failed$" value("MESSAGE")); };
#log with no destination and final flag = discard
log { source(src); filter(f_scemd_ioctl); flags(final); };
# end scemd.conf patch
filter f_scemd { program(scemd);  };
filter f_scemd_sev { level(err..emerg) };
destination d_scemd { file("/var/log/scemd.log"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };
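Before restarting anything, it may be worth confirming that the edited file still parses. Mainline syslog-ng 3.x has a --syntax-only dry-run flag; whether DSM's bundled build accepts it is an assumption to check first (e.g. with syslog-ng --help).

```shell
# Dry-run parse of the active syslog-ng configuration; no logging is disturbed.
if command -v syslog-ng >/dev/null 2>&1; then
  syslog-ng --syntax-only && RESULT="config parses OK" || RESULT="config has errors"
else
  RESULT="syslog-ng binary not found on this machine"
fi
echo "$RESULT"
```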

 

 

On 11/15/2020 at 3:27 AM, azhur said:

My variant of the fix for /etc.defaults/syslog-ng/patterndb.d/scemd.conf.

Works fine on a Microserver N54L, DSM 6.2.3-25426 Update 2.


# begin scemd.conf patch to suppress ioctl messages
filter f_scemd_ioctl { program(scemd) and match("polling_.*ioctl\ device\ failed$" value("MESSAGE")); };
#log with no destination and final flag = discard
log { source(src); filter(f_scemd_ioctl); flags(final); };
# end scemd.conf patch
filter f_scemd { program(scemd);  };
filter f_scemd_sev { level(err..emerg) };
destination d_scemd { file("/var/log/scemd.log"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };

 

 

 

Many thanks, it worked absolutely perfectly and the hard drives are sleeping once again 😴

  • 2 weeks later...
On 11/18/2020 at 3:48 AM, Lexizilla said:

How can I check if HDD hibernation is working?
I enabled the function and the log, but I don't find any entries about it in the protocol center.

Did you find out if hibernation is working?

How can I find out if HDD hibernation is working or not?

4 hours ago, jithuraj said:

How can I find out if HDD hibernation is working or not?

You might hear the disks spin down and up again, and it also takes 5-10 seconds longer to access the NAS when the disks are down.
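Besides listening for the spin-down, a more direct check is to ask each drive for its power state with hdparm. This is my addition, and it assumes hdparm is present on the box; querying with -C reads the drive's power mode and should not itself wake a sleeping disk.

```shell
# Query each SATA disk's power state: "active/idle" means spinning,
# "standby" means spun down (hibernating). Skips cleanly if nothing matches.
for d in /dev/sd[a-z]; do
  [ -b "$d" ] || continue                  # no such block device, skip
  printf '%s: ' "$d"
  hdparm -C "$d" 2>/dev/null | grep "drive state" || echo "hdparm unavailable"
done
DONE="scan complete"
echo "$DONE"
```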

 

9 hours ago, e-ghost said:

I saw DSM 6.2.3-25426 Update 3 is out. Is this issue fixed in the official DSM? Thanks a lot!

AFAIR the messages are about some missing power management that's present in the original unit, so it might not change with a minor update like u3.

As it's only a minor update, the config file is not replaced, so the change will still work with u3 (I don't see any log entries on my 3617 system).

