e-ghost
Posts posted by e-ghost
-
On 6/1/2020 at 6:03 AM, richv31 said:
Interesting, I have a jmb585 5 port card on order. My j4105 MB has only 2 onboard ports and a single pcie 2-lane slot that gives me 7 drives (my u-nas chassis can handle 8). Regarding my current system, on wake-up, the system seems to do a staggered (one drive at a time) wake-up. I looked up the error codes (ASC=0x4 ASCQ=0x2) - "Logical Unit Not Ready, Init. Cmd Required Indicates that the drive is ready to go, but it is not spinning. Generally, the drive needs to be spun up." So it looks like DSM is not waiting for the disks to be spun up before attempting to access them.
[ 51.663675] FAT-fs (synoboot2): fat_set_state: set=1 process=synopkicompatsy pid=11143
[ 51.679804] FAT-fs (synoboot2): fat_set_state: set=0 process=synopkicompatsy pid=11143
[ 53.741388] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
[ 53.741680] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
[ 53.741744] ata1.00: configured for UDMA/133
[ 53.743542] ata1: EH complete
[ 53.743563] ata1.00: Enabling discard_zeroes_data
[ 58.460556] iSCSI:target_core_rodsp_server.c:1027:rodsp_server_init RODSP server started, login_key(001132123456).
[ 58.492479] iSCSI:extent_pool.c:766:ep_init syno_extent_pool successfully initialized
[ 58.517653] iSCSI:target_core_device.c:617:se_dev_align_max_sectors Rounding down aligned max_sectors from 4294967295 to 4294967288
[ 58.517780] iSCSI:target_core_lunbackup.c:361:init_io_buffer_head 512 buffers allocated, total 2097152 bytes successfully
[ 58.593806] iSCSI:target_core_file.c:146:fd_attach_hba RODSP plugin for fileio is enabled.
[ 58.593813] iSCSI:target_core_file.c:153:fd_attach_hba ODX Token Manager is enabled.
[ 58.593827] iSCSI:target_core_multi_file.c:91:fd_attach_hba RODSP plugin for multifile is enabled.
[ 58.593839] iSCSI:target_core_ep.c:786:ep_attach_hba RODSP plugin for epio is enabled.
[ 58.593840] iSCSI:target_core_ep.c:793:ep_attach_hba ODX Token Manager is enabled.
[ 58.687936] capability: warning: `nginx' uses 32-bit capabilities (legacy support in use)
[ 59.079129] loop: module loaded
[ 4851.309911] sd 2:0:0:0: [sdc] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4851.309926] sd 2:0:0:0: [sdc] tag#0 Sense Key : 0x2 [current]
[ 4851.309934] sd 2:0:0:0: [sdc] tag#0 ASC=0x4 ASCQ=0x2
[ 4851.309942] sd 2:0:0:0: [sdc] tag#0 CDB: opcode=0x28 28 00 00 4c 06 80 00 00 08 00
[ 4851.309949] blk_update_request: I/O error, dev sdc, sector in range 4980736 + 0-2(12)
[ 4856.022445] sd 2:0:0:0: [sdc] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.022459] sd 2:0:0:0: [sdc] tag#0 Sense Key : 0x2 [current]
[ 4856.022466] sd 2:0:0:0: [sdc] tag#0 ASC=0x4 ASCQ=0x2
[ 4856.022474] sd 2:0:0:0: [sdc] tag#0 CDB: opcode=0x28 28 00 00 90 00 08 00 00 08 00
[ 4856.022480] blk_update_request: I/O error, dev sdc, sector in range 9437184 + 0-2(12)
[ 4856.023055] sd 2:0:2:0: [sde] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.023062] sd 2:0:2:0: [sde] tag#1 Sense Key : 0x2 [current]
[ 4856.023069] sd 2:0:2:0: [sde] tag#1 ASC=0x4 ASCQ=0x2
[ 4856.023075] sd 2:0:2:0: [sde] tag#1 CDB: opcode=0x28 28 00 00 4c 06 80 00 00 08 00
[ 4856.023080] blk_update_request: I/O error, dev sde, sector in range 4980736 + 0-2(12)
[ 4856.023624] sd 2:0:2:0: [sde] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.023630] sd 2:0:2:0: [sde] tag#2 Sense Key : 0x2 [current]
[ 4856.023636] sd 2:0:2:0: [sde] tag#2 ASC=0x4 ASCQ=0x2
[ 4856.023642] sd 2:0:2:0: [sde] tag#2 CDB: opcode=0x28 28 00 00 90 00 08 00 00 08 00
[ 4856.023646] blk_update_request: I/O error, dev sde, sector in range 9437184 + 0-2(12)
[ 4856.024265] sd 2:0:3:0: [sdf] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.024272] sd 2:0:3:0: [sdf] tag#3 Sense Key : 0x2 [current]
[ 4856.024281] sd 2:0:3:0: [sdf] tag#3 ASC=0x4 ASCQ=0x2
[ 4856.024291] sd 2:0:3:0: [sdf] tag#3 CDB: opcode=0x28 28 00 00 4c 06 80 00 00 08 00
[ 4856.024296] blk_update_request: I/O error, dev sdf, sector in range 4980736 + 0-2(12)
[ 4856.024849] sd 2:0:3:0: [sdf] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.024855] sd 2:0:3:0: [sdf] tag#0 Sense Key : 0x2 [current]
[ 4856.024860] sd 2:0:3:0: [sdf] tag#0 ASC=0x4 ASCQ=0x2
[ 4856.024866] sd 2:0:3:0: [sdf] tag#0 CDB: opcode=0x28 28 00 00 90 00 08 00 00 08 00
[ 4856.024871] blk_update_request: I/O error, dev sdf, sector in range 9437184 + 0-2(12)
[ 4856.025431] sd 2:0:4:0: [sdg] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.025446] sd 2:0:4:0: [sdg] tag#1 Sense Key : 0x2 [current]
[ 4856.025451] sd 2:0:4:0: [sdg] tag#1 ASC=0x4 ASCQ=0x2
[ 4856.025457] sd 2:0:4:0: [sdg] tag#1 CDB: opcode=0x28 28 00 00 4c 06 80 00 00 08 00
[ 4856.025462] blk_update_request: I/O error, dev sdg, sector in range 4980736 + 0-2(12)
[ 4856.026010] sd 2:0:4:0: [sdg] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.026016] sd 2:0:4:0: [sdg] tag#2 Sense Key : 0x2 [current]
[ 4856.026021] sd 2:0:4:0: [sdg] tag#2 ASC=0x4 ASCQ=0x2
[ 4856.026027] sd 2:0:4:0: [sdg] tag#2 CDB: opcode=0x28 28 00 00 90 00 08 00 00 08 00
[ 4856.026032] blk_update_request: I/O error, dev sdg, sector in range 9437184 + 0-2(12)
[ 4856.865160] sd 2:0:2:0: [sde] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.865182] sd 2:0:2:0: [sde] tag#1 Sense Key : 0x2 [current]
[ 4856.865190] sd 2:0:2:0: [sde] tag#1 ASC=0x4 ASCQ=0x2
[ 4856.865200] sd 2:0:2:0: [sde] tag#1 CDB: opcode=0x35 35 00 00 00 00 00 00 00 00 00
[ 4856.865214] blk_update_request: I/O error, dev sde, sector in range 4980736 + 0-2(12)
[ 4856.865746] md: super_written gets error=-5
[ 4856.865758] md_error: sde1 is being to be set faulty
[ 4856.865763] raid1: Disk failure on sde1, disabling device.
Operation continuing on 4 devices
[ 4856.866456] sd 2:0:3:0: [sdf] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.866465] sd 2:0:3:0: [sdf] tag#2 Sense Key : 0x2 [current]
[ 4856.866472] sd 2:0:3:0: [sdf] tag#2 ASC=0x4 ASCQ=0x2
[ 4856.866480] sd 2:0:3:0: [sdf] tag#2 CDB: opcode=0x35 35 00 00 00 00 00 00 00 00 00
[ 4856.866489] blk_update_request: I/O error, dev sdf, sector 4982400
[ 4856.866918] md: super_written gets error=-5
[ 4856.866938] md_error: sdf1 is being to be set faulty
[ 4856.866952] raid1: Disk failure on sdf1, disabling device.
Operation continuing on 3 devices
[ 4856.867687] sd 2:0:4:0: [sdg] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4856.867693] sd 2:0:4:0: [sdg] tag#3 Sense Key : 0x2 [current]
[ 4856.867698] sd 2:0:4:0: [sdg] tag#3 ASC=0x4 ASCQ=0x2
[ 4856.867705] sd 2:0:4:0: [sdg] tag#3 CDB: opcode=0x35 35 00 00 00 00 00 00 00 00 00
[ 4856.867711] blk_update_request: I/O error, dev sdg, sector 4982400
[ 4856.868134] md: super_written gets error=-5
[ 4856.868140] md_error: sdg1 is being to be set faulty
[ 4856.868143] raid1: Disk failure on sdg1, disabling device.
Operation continuing on 2 devices
[ 4867.014224] sd 2:0:0:0: [sdc] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[ 4867.014238] sd 2:0:0:0: [sdc] tag#1 Sense Key : 0x2 [current]
[ 4867.014245] sd 2:0:0:0: [sdc] tag#1 ASC=0x4 ASCQ=0x2
[ 4867.014253] sd 2:0:0:0: [sdc] tag#1 CDB: opcode=0x35 35 00 00 00 00 00 00 00 00 00
[ 4867.014262] blk_update_request: I/O error, dev sdc, sector 4982400
[ 4867.014693] md: super_written gets error=-5
[ 4867.014705] md_error: sdc1 is being to be set faulty
[ 4867.014710] raid1: Disk failure on sdc1, disabling device.
Operation continuing on 1 devices
[ 4867.035989] RAID1 conf printout:
[ 4867.035997] --- wd:1 rd:16
[ 4867.036003] disk 0, wo:1, o:0, dev:sdc1
[ 4867.036008] disk 1, wo:0, o:1, dev:sdd1
[ 4867.036012] disk 2, wo:1, o:0, dev:sdg1
[ 4867.036016] disk 3, wo:1, o:0, dev:sde1
[ 4867.036019] disk 4, wo:1, o:0, dev:sdf1
[ 4867.040944] RAID1 conf printout:
[ 4867.040952] --- wd:1 rd:16
[ 4867.040958] disk 1, wo:0, o:1, dev:sdd1
[ 4867.040962] disk 2, wo:1, o:0, dev:sdg1
[ 4867.040966] disk 3, wo:1, o:0, dev:sde1
[ 4867.040970] disk 4, wo:1, o:0, dev:sdf1
[ 4867.040982] RAID1 conf printout:
[ 4867.040985] --- wd:1 rd:16
[ 4867.040988] disk 1, wo:0, o:1, dev:sdd1
[ 4867.040992] disk 2, wo:1, o:0, dev:sdg1
[ 4867.040995] disk 3, wo:1, o:0, dev:sde1
[ 4867.040999] disk 4, wo:1, o:0, dev:sdf1
[ 4867.049937] RAID1 conf printout:
[ 4867.049945] --- wd:1 rd:16
[ 4867.049951] disk 1, wo:0, o:1, dev:sdd1
[ 4867.049956] disk 2, wo:1, o:0, dev:sdg1
[ 4867.049960] disk 4, wo:1, o:0, dev:sdf1
[ 4867.049968] RAID1 conf printout:
[ 4867.049971] --- wd:1 rd:16
[ 4867.049975] disk 1, wo:0, o:1, dev:sdd1
[ 4867.049978] disk 2, wo:1, o:0, dev:sdg1
[ 4867.049982] disk 4, wo:1, o:0, dev:sdf1
[ 4867.056935] RAID1 conf printout:
[ 4867.056942] --- wd:1 rd:16
[ 4867.056948] disk 1, wo:0, o:1, dev:sdd1
[ 4867.056952] disk 2, wo:1, o:0, dev:sdg1
[ 4867.056960] RAID1 conf printout:
[ 4867.056963] --- wd:1 rd:16
[ 4867.056966] disk 1, wo:0, o:1, dev:sdd1
[ 4867.056970] disk 2, wo:1, o:0, dev:sdg1
[ 4867.063954] RAID1 conf printout:
[ 4867.063962] --- wd:1 rd:16
[ 4867.063968] disk 1, wo:0, o:1, dev:sdd1
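The repeated sense-data lines in the dmesg dump above all carry ASC=0x4 / ASCQ=0x2 ("Logical Unit Not Ready"). A quick grep over a saved dmesg capture counts how many requests failed that way; this is only a sketch — the here-doc below stands in for a real `dmesg > capture.txt`, mimicking two lines from the log:

```shell
# Count "Logical Unit Not Ready" sense codes (ASC=0x4 ASCQ=0x2) in a
# dmesg capture; the here-doc is sample input, not live kernel output.
grep -c 'ASC=0x4 ASCQ=0x2' <<'EOF'
[ 4851.309934] sd 2:0:0:0: [sdc] tag#0 ASC=0x4 ASCQ=0x2
[ 4856.023069] sd 2:0:2:0: [sde] tag#1 ASC=0x4 ASCQ=0x2
EOF
```

This prints 2; a burst of such counts right after wake-up supports the staggered spin-up theory.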
I understand, and I don't need temps or fan speeds. The interesting part is that something changed for ds3615 between 6.2.2 and 6.2.3, with these errors now showing up regularly in scemd.log. At the same time, HDD hibernation stopped working in 3615/6.2.3 (drives will not hibernate), and I assume this is due to these errors and the log writing. I don't know if this issue is LSI controller specific or whether it also applies to ahci drives; I will test. Also, these errors are not present in the 918+/6.2.3 logs.
This is how it looks for me using the LSI 9211-8i controller:
Loader  OS/ver/driver        HDD go into hibernate  HDD come out of hibernate
1.03b   3615/6.2.2/mpt2sas   OK                     OK
1.03b   3615/6.2.3/mpt2sas   NO                     n/a
1.04b   918+/6.2.2/mpt3sas   OK                     OK for basic volume, Crash for RAID volumes
1.04b   918+/6.2.3/mpt3sas   OK                     OK for basic volume, Crash for RAID volumes

Hi, may I ask if the v1.04b loader for 918+ with mpt2sas under 6.2.3 still has no issue with HDD hibernation? Is there any solution to regain HDD hibernation under this combination? Thanks a lot!
-
1 hour ago, vasiliy_gr said:
My scemd.log:
root@XPEnologyX:~# cat /var/log/scemd.log | tail -10
2020-10-15T03:54:36+03:00 XPEnologyX scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-10-15T03:55:36+03:00 XPEnologyX scemd: polling_sys_thermal.c:35 ioctl device failed
2020-10-15T03:55:36+03:00 XPEnologyX scemd: polling_sys_voltage.c:35 ioctl device failed
2020-10-15T03:55:36+03:00 XPEnologyX scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-10-15T03:55:40+03:00 XPEnologyX scemd: manage_services.c:464 hw polling thread exit
2020-10-15T03:55:41+03:00 XPEnologyX scemd: manage_services.c:464 scemd connector thread exit
2020-10-15T03:55:43+03:00 XPEnologyX scemd: manage_services.c:464 led ctrl thread exit
2020-10-15T03:55:43+03:00 XPEnologyX scemd: manage_services.c:464 disk led ctrl thread exit
2020-10-15T03:55:43+03:00 XPEnologyX scemd: event_handler.c:318 event handler claimed
2020-10-15T03:55:43+03:00 XPEnologyX scemd: scemd.c:327 ************************SCEMD End**************************
I have no idea what the problem is with your setup...
Sorry, I found the problem and resolved it! I had copied /etc.defaults/syslog-ng/patterndb.d/scemd.conf to /etc.defaults/syslog-ng/patterndb.d/scemd.conf_orig as a backup. The system loaded both scemd.conf and scemd.conf_orig. After I removed scemd.conf_orig, it worked!
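For anyone hitting the same trap: on this system the include glob evidently matched the _orig copy too, so backups of these configs must live outside the directory the daemon scans. A minimal sketch (the /tmp paths are illustrative only, standing in for /etc.defaults/syslog-ng/patterndb.d):

```shell
# Keep config backups OUTSIDE the directory the daemon globs, so a
# stale *_orig copy is never loaded alongside the real one.
incdir=/tmp/patterndb_demo.d     # stands in for the real patterndb.d
bakdir=/tmp/config-backups
mkdir -p "$incdir" "$bakdir"
echo 'destination d_scemd { file("/dev/null"); };' > "$incdir/scemd.conf"
cp "$incdir/scemd.conf" "$bakdir/scemd.conf.bak"
ls "$incdir"    # prints only scemd.conf; the backup is elsewhere
```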
Sorry for the stupid question, but you still kept on helping! ^_^
-
9 hours ago, vasiliy_gr said:
Check access rights and ownership:
root@XPEnologyX:~# ls -l /etc.defaults/syslog-ng/patterndb.d/scemd.conf
-rw-r--r-- 1 root root 260 Oct 15 04:22 /etc.defaults/syslog-ng/patterndb.d/scemd.conf
Recheck again the contents:
root@XPEnologyX:~# cat /etc.defaults/syslog-ng/patterndb.d/scemd.conf
filter f_scemd { program(scemd); };
filter f_scemd_sev { level(err..emerg) };
# destination d_scemd { file("/var/log/scemd.log"); };
destination d_scemd { file("/dev/null"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };
Thanks a lot! I got these:
bash-4.3# ls -l /etc.defaults/syslog-ng/patterndb.d/scemd.conf
-rw-r--r-- 1 root root 261 Dec 27 01:46 /etc.defaults/syslog-ng/patterndb.d/scemd.conf
bash-4.3# cat /etc.defaults/syslog-ng/patterndb.d/scemd.conf
filter f_scemd { program(scemd); };
filter f_scemd_sev { level(err..emerg) };
# destination d_scemd { file("/var/log/scemd.log"); };
destination d_scemd { file("/dev/null"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };
bash-4.3# ls -l /etc/syslog-ng/patterndb.d/scemd.conf
-rw-r--r-- 1 system log 261 Dec 27 16:33 /etc/syslog-ng/patterndb.d/scemd.conf
The file content and permissions seem to be the same as yours, but it is still unable to send the logs to the null device. Really strange... Not sure where the system picks up the scemd log settings from...
-
21 hours ago, vasiliy_gr said:
Yes - you only need to redirect your scemd messages to /dev/null. If your log file keeps growing after that - you missed something... As it should not.
Can you help take a look at what I did wrong? I copied your file in full to replace my /etc.defaults/syslog-ng/patterndb.d/scemd.conf. After rebooting, /etc/syslog-ng/patterndb.d/scemd.conf is identical to it. However, /var/log/scemd.log still keeps growing by 8 lines per minute:
2020-12-27T02:10:49+08:00 fevernas03 scemd: SYSTEM: Last message 'polling_fan_speed_rp' repeated 1 times, suppressed by syslog-ng on fevernas03
2020-12-27T02:10:49+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-12-27T02:10:49+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-12-27T02:10:49+08:00 fevernas03 scemd: SYSTEM: Last message 'polling_sys_thermal.' repeated 1 times, suppressed by syslog-ng on fevernas03
2020-12-27T02:10:49+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-12-27T02:10:49+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-12-27T02:10:49+08:00 fevernas03 scemd: SYSTEM: Last message 'polling_sys_voltage.' repeated 1 times, suppressed by syslog-ng on fevernas03
2020-12-27T02:10:49+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-12-27T02:10:49+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
Is there any other factor that I can re-check? Thanks a lot!
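To confirm that a log really is still growing (and how fast), two line-count samples are enough. The demo below simulates the growth with a temp file so the arithmetic is visible; on the NAS you would run `wc -l < /var/log/scemd.log` twice, a minute apart:

```shell
# Sample a log's line count twice and report the delta; the writes in
# between stand in for what scemd does on the real system.
log=/tmp/growth_demo.log
printf 'one\ntwo\nthree\n' > "$log"
before=$(wc -l < "$log")
printf 'four\nfive\n' >> "$log"    # simulated growth between samples
after=$(wc -l < "$log")
echo "$((after - before)) lines added"   # prints: 2 lines added
```

A steady nonzero delta while the box is idle means something is still writing and will keep the disks awake.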
-
On 10/15/2020 at 12:12 PM, vasiliy_gr said:
Simple and dirty fix based on written above. Modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf:
filter f_scemd { program(scemd); };
filter f_scemd_sev { level(err..emerg) };
# destination d_scemd { file("/var/log/scemd.log"); };
destination d_scemd { file("/dev/null"); };
log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };
Edit the file and reboot. Result: HDD hibernation works fine now...
My config: DS3617xs, DSM 6.2.3-25426-U2, extra.lzma v0.11.2_test.
Hi, I cannot get my HDDs to hibernate. May I confirm that the only change to the system is amending the 3rd line of that file? (You commented out the original 3rd line with # and then added a new line pointing the logs to /dev/null?)
My config: DS3615xs, DSM 6.2.3-25426-U2 & U3, extra.lzma v0.11_test.
I have amended that line and rebooted. I see that /var/log/scemd.log keeps growing. The HDDs are not willing to sleep at all; they keep spinning...
Thank you in advance!
-
4 hours ago, e-ghost said:
Hi IG-88, thanks for your advice! Still trying to figure out how to enable CSM mode on my Biostar A68N-5455, as the manual doesn't have such an option.
Also, may I ask if Jun's loader v1.04b for 918+ supports MPT HBAs? Mine is an LSI_SAS2308_LSI-9207-8i (HP220). Thanks a lot!
Sorry IG-88, please ignore the above post. Just got CSM enabled, and now the RTL8111H can get an IP. Thanks a lot!
Also, may I ask if Jun's loader v1.04b for 918+ supports MPT HBAs? Mine is an LSI_SAS2308_LSI-9207-8i (HP220). Thanks a lot!
-
2 hours ago, IG-88 said:
its a uefi bios but 1.03b does not support this, it needs csm mode active and you need to boot from the non-uefi usb boot device (that might only be seen after rebooting once with csm active)
maybe read this
https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
i guess the cpu might also work with 918+ loader (that can boot uefi), try 1.04b 918+ 1st
not the nic driver for sure
also the extra.lzma for 6.2.2 will not work properly, with dsm 6.2.3 its jun's original or the one made for 6.2.3
Hi IG-88, thanks for your advice! Still trying to figure out how to enable CSM mode on my Biostar A68N-5455, as the manual doesn't have such an option.
Also, may I ask if Jun's loader v1.04b for 918+ supports MPT HBAs? Mine is an LSI_SAS2308_LSI-9207-8i (HP220). Thanks a lot!
-
Hi, I am using the 3615xs v1.03b loader with my motherboard, but the on-board 8111H cannot get an IP. Can you help advise what is wrong?
1) My motherboard is a Biostar A68N-5545. It uses an AMD CPU, so I chose the 3615xs v1.03b loader.
2) I tried the 3 extra.lzma files for loader 1.03b ds3615 (original, v0.5, v0.11_test).
With none of the 3 extra.lzma can it get an IP from the router. The router cannot see this motherboard on the network. Please help. Thanks a lot!
-
I saw DSM 6.2.3-25426 Update 3 was out. Is this issue fixed in the official DSM? Thanks a lot!
-
Hi @richv31, thanks a lot! Do you know which partition needs to be deleted? Is there any guide I can follow? I worry about data loss so much. I only found another guide, which is quite different:
Thanks a lot!
-
I need to get HDD hibernation working again, as the HDD temperature is high here. Is downgrading the only method at this point? If so, may I ask how I can downgrade back to DSM 6.2 with all my data preserved? Thanks a lot!
-
After suppressing scemd.log, the 3 error messages are logged into /var/log/messages instead.
-
Hi @richv31, is it the case that any WRITE action to /var/log will stop HDD hibernation? May I ask where the hibernation log is stored in DSM?
It took me some time to test it. I turned the box on at 23:00 last night and then let it idle. I logged in over SSH at 04:34 and found log activity like this:
drwxr-xr-x  2 root   root   4096 Aug 20 20:20 disk-latency
drwxr-xr-x 17 root   root   4096 Aug 20 23:01 ..
-rw-rw----  1 system log    1917 Aug 20 23:01 apparmor.log
-rw-rw-rw-  1 root   root   2269 Aug 20 23:01 space_operation_error.log
-rw-rw----  1 system log    2391 Aug 20 23:02 datascrubbing.log
-rw-rw----  1 system log  171881 Aug 20 23:02 kern.log
-rw-rw----  1 system log    9811 Aug 20 23:02 postgresql.log
-rw-rw----  1 system log  202434 Aug 20 23:02 disk.log
-rw-r--r--  1 root   root   1218 Aug 20 23:02 synocmsclient.log
-rw-r--r--  1 root   root 509695 Aug 20 23:02 dmesg
-rw-rw----  1 system log   29554 Aug 20 23:02 syslog.log
-rw-rw----  1 system log  146347 Aug 20 23:02 synocrond.log
-rw-r--r--  1 root   root   2124 Aug 20 23:02 esynoscheduler.log
-rw-r--r--  1 root   root   4227 Aug 20 23:02 disk_overview.xml
-rw-rw----  1 system log     757 Aug 20 23:02 sysnotify.log
-rw-rw----  1 system log  176371 Aug 20 23:02 scemd.log
-rw-r--r--  1 root   root  19650 Aug 20 23:02 synopkg.log
drwxr-xr-x  2 root   root   4096 Aug 21 00:01 diskprediction
-rw-rw----  1 system log   33933 Aug 21 03:13 iscsi.log
-rw-rw----  1 system log  109788 Aug 21 04:05 rm.log
-rw-rw----  1 system log  614237 Aug 21 04:05 synoservice.log
-rw-rw----  1 system log   82380 Aug 21 04:05 scsi_plugin.log
-rw-r--r--  1 root   root   4104 Aug 21 04:09 synocrond-execute.log
-rw-rw----  1 system log  627685 Aug 21 04:34 messages
drwx------  2 system log    4096 Aug 21 04:35 synolog
-rw-r-----  1 root   log   15463 Aug 21 04:35 auth.log
-rw-rw----  1 system log   22591 Aug 21 04:35 bash_history.log
I think all log writes on Aug 20 are normal, due to system start-up. The last 4, since 04:34, were caused by my SSH login. Those in between are unknown...
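One way to narrow down the unknown writers is to ask which files changed inside a given window; find's -mmin option does exactly that. The demo below runs on a throwaway directory so it is self-contained; on the NAS you would point it at /var/log shortly after the period in question:

```shell
# List files modified within the last 10 minutes; older files drop out.
logdir=/tmp/logdemo
mkdir -p "$logdir"
touch "$logdir/scemd.log"                 # freshly written: matches
touch -d '1 hour ago' "$logdir/old.log"   # stale: filtered out
find "$logdir" -type f -mmin -10          # prints only the fresh file
```

Run against /var/log right after the idle window, this separates files that woke the disks from ones untouched since boot.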
Thanks a lot!
-
On 8/12/2020 at 3:54 AM, richv31 said:
Not to that particular version, but to the two immediately prior. You are welcome to look at the errors that log every 6 secs in /var/log/scemd.log... Most folks on the forum don't care about HDD hibernation, which is why it has not been reported more widely. But yes, it is also busted in u2. It still works correctly in 1.04b/918+ 6.2.3 u2, however you need to switch to ahci-based controllers like the JMB585-based ones; SAS/LSI does not work correctly on 918+ for hibernation either.
Hi @richv31, I have got the ioctl errors suppressed from scemd.log with great help from flyride. However, I found that HDD hibernation still does not work as before. Would you have any further advice on this? Thanks a lot!
-
12 hours ago, flyride said:
Example modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf to suppress the desired error messages
Hi @flyride, thanks a lot for your guide! I got these 3 logs suppressed!
Unfortunately, the HDD are still not hibernated. 😬
-
Hi @flyride, can you advise me how to use syslog-ng to suppress those 3 error messages from being written into scemd.log? Thanks a lot!
-
On 8/13/2020 at 1:48 PM, flyride said:
If continuously writing spurious errors to log files is in fact the reason hibernation can't occur, there are two fairly feasible solutions... 1: repoint scemd.log to a ramdisk, or 2: adapt the log filter that I posted for SMART error suppression... anyway, take a look and see if it can help.
https://xpenology.com/forum/topic/29581-suppress-virtual-disk-smart-errors-from-varlogmessages/
Dear flyride,
Thanks for your info; I need your further help. In order to suppress this log pattern:
2020-08-12T10:30:57 nas scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T10:30:57 nas scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T10:30:57 nas scemd: polling_fan_speed_rpm.c:35 ioctl device failed
I created two files:
1) /usr/local/etc/syslog-ng/patterndb.d/scemd.conf

# /usr/local/etc/syslog-ng/patterndb.d/scemd.conf
# scemd.log to suppress ioctl device failed since DSM 6.2.3
filter fs_pollingsysthermal { match("scemd: polling_sys_thermal\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_pollingsysvoltage { match("scemd: polling_sys_voltage\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_pollingfanspeedrpm { match("scemd: polling_fan_speed_rpm\.c:35 ioctl device failed" value("MESSAGE")); };
filter fs_ioctldevicefailed { match("ioctl device failed" value("MESSAGE")); };
filter f_allioctlfailmsgs { filter(fs_ioctldevicefailed) or filter(fs_pollingsysthermal) or filter(fs_pollingsysvoltage) or filter(fs_pollingfanspeedrpm); };
log { source(src); filter(f_allioctlfailmsgs); };
2) /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/scemd

and not filter(f_allioctlfailmsgs)
Then I restarted the service by your command:
# synoservice --restart syslog-ng
But it doesn't work, and I don't know what I got wrong. Can you comment on what I did wrong and how to fix it? Thanks a lot!
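One thing worth ruling out in cases like this: DSM keeps a template copy under /etc.defaults and, as I understand it, the active copy the daemon reads under /etc, so the two can drift apart after an edit. Diffing them shows whether the change actually propagated; the demo below uses throwaway copies so the command is runnable anywhere:

```shell
# Compare template and active copies of a config; any difference means
# the daemon may still be running with the old rules.
mkdir -p /tmp/defaults_demo /tmp/active_demo     # stand-ins for /etc.defaults and /etc
echo 'destination d_scemd { file("/dev/null"); };' > /tmp/defaults_demo/scemd.conf
cp /tmp/defaults_demo/scemd.conf /tmp/active_demo/scemd.conf
diff -q /tmp/defaults_demo/scemd.conf /tmp/active_demo/scemd.conf && echo "copies match"
```

On the real system the same check would be `diff /etc.defaults/syslog-ng/patterndb.d/scemd.conf /etc/syslog-ng/patterndb.d/scemd.conf`.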
-
I also want to know. My place is very hot and I will not use the HDDs very often, so I really need HDD hibernation to keep the HDDs cooler. Is there any way to resolve this? Or will I need to downgrade? If downgrading is the only solution, how can I do it?
Thanks a lot!
-
12 hours ago, richv31 said:
1.03b/3615/6.2.3+/sas_controller will never enter HDD hibernation, most likely due to the errors logged in scemd.log that started with 6.2.3. The 918+ issue with sas controller is completely different as you say.
Hi @richv31, is this the ioctl error that is causing the HDD hibernation failure? (I have changed the 3615xs's extra.lzma to v0.11_test, which is for DSM 6.2.3, and I can see these:)
bash-4.3# tail -f /var/log/scemd.log
2020-08-12T21:24:57+08:00 fevernas03 scemd: sysnotify_get_title_key.c:65 Can get category description from /var/cache/texts/enu/notification_category
2020-08-12T21:24:58+08:00 fevernas03 scemd: sysnotify_send_notification.c:615 SYSNOTIFY: [DataVolumeFull] was sent to desktop
2020-08-12T21:24:58+08:00 fevernas03 scemd: data_volume_check.c:464 Volume1 Full
2020-08-12T21:24:58+08:00 fevernas03 scemd: system_status.c:278 Deep sleep timer:-1 min(s)
2020-08-12T21:24:58+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:24:58+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:24:58+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:25:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:25:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:25:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:26:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:26:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:26:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:27:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:27:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:27:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:28:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:28:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:28:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
Thanks a lot!
-
1 hour ago, richv31 said:
HDD hibernation is broken from 6.2.3 onwards for 1.03b/3615/3617 due to new ioctl errors in hardware monitoring (most likely).
Hi @richv31, I saw IG-88 mention that there is a serious problem with 918+ and scsi/sas drivers, at least with mpt2sas/mpt3sas, not properly waking up from HDD hibernation, but he hasn't mentioned the issue on 3615.
Does 1.03b/3615 from 6.2.3 also suffer the same issue? I am on exactly this combination. Thanks a lot!
-
Hi IG-88,
I am using 3615 v1.03b with the latest v0.5 extra.lzma. Yes, the extra was not replaced when changing from 6.2.2 to 6.2.3. Could this be the source of the problem? Thanks a lot!
-
I was using DSM 6.2.2-24922 Update 4 and had HDD hibernation working on all previous versions. I lost this function after upgrading to DSM 6.2.3-25426 Update 2. I changed nothing in hardware, software, packages, or settings. Does anyone know of any change in DSM 6.2.3-25426 Update 2 that would stop HDD hibernation from working?
My hardware is ASUS E35M1i-Deluxe + LSI_SAS2308_LSI-9207-8i (HP220). Thanks a lot!
Best regards,
e-Ghost
-
- Outcome of the update: SUCCESSFUL
- DSM version prior update: DSM 6.2.2-24922 Update 4
- Loader version and model: Jun's Loader v1.03b DS3615XS
- Using custom extra.lzma: No
- Installation type: BAREMETAL - ASUS E35M1i-Deluxe + LSI_SAS2308_LSI-9207-8i (HP220)
- Additional comment: Needed to upgrade to DSM 6.2.3-25426 first, then it automatically upgraded to Update 2 after the 2nd reboot. Then the NAS could not be found on the LAN. The XPEnology boot USB drive required remaking; after that it could boot and all 10 HDDs were recognized. However, HDD hibernation has not worked since this upgrade.
-
On 10/26/2018 at 12:27 AM, Olegin said:
Try to migrate to 918 with 1.04b loader.
Hi Olegin, may I ask if the N54L can boot with v1.04b for 918+ and load DSM 6.2.1? I thought it was for newer Intel CPUs only.
Thanks!
Driver extension jun 1.03b/1.04b for DSM6.2.2 for 3615xs / 3617xs / 918+
in Additional Compiled Modules
Posted
Hi @IG-88, I read what you wrote and understand the situation now. Thanks a lot!