
HDDs fail to hibernate after upgrading from 6.2.2 to 6.2.3


e-ghost


I was using DSM 6.2.2-24922 Update 4, and HDD hibernation worked on all previous versions. I lost this function after upgrading to DSM 6.2.3-25426 Update 2, with no changes to hardware, software, packages, or settings. Does anyone know of any change in DSM 6.2.3-25426 Update 2 that would stop HDD hibernation from working?

 

My hardware is an ASUS E35M1-I Deluxe with an LSI SAS2308 (LSI 9207-8i, HP220) controller. Thanks a lot!

Best regards,

e-Ghost


On 8/10/2020 at 5:41 AM, e-ghost said:

extra.lzma. Yes, the extra was not replaced when changing from 6.2.2 to 6.2.3. Could this be the source of the problem?

I'd say yes. The 6.2.2 drivers are a special build for a kernel config change Synology made with 6.2.2; some drivers may crash on boot, and that is known to even cause problems with shutdown (you can see these problems by checking the dmesg log).
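For example, a quick scan from an SSH shell (the grep pattern here is just an illustration, not an exhaustive filter):

dmesg | grep -i -E "mpt|error|fail"    # look for driver crashes or load failures in the kernel log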

Either use Jun's original extra.lzma that came with the loader (it can be extracted with 7-Zip from the .img file), or use my extended extra.lzma made for 6.2.3:

https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
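A rough command-line sketch of that extraction, assuming 7-Zip's 7z tool and a loader image named synoboot.img (both names are assumptions; adjust to your files):

7z l synoboot.img                 # list the image contents; the 2nd partition carries extra.lzma
7z x synoboot.img -o./loader      # unpack the whole image into ./loader
find ./loader -name extra.lzma    # locate Jun's original extra.lzma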

 


8 minutes ago, richv31 said:

HDD hibernation is broken from 6.2.3 onwards for 1.03b/3615/3617 due to new ioctl errors in hardware monitoring (most likely).

 

Oh man, that is bad news. I have a noisy server. Do you think it may work if I move to a 918+ setup, or is it broken across the board?


1 hour ago, powerplyer said:

@e-ghost did you update the extra.lzma (0.11_2) to see if it worked?

 

Not to that particular version, but to the two immediately prior. You are welcome to look at the errors logged every minute in /var/log/scemd.log. Most folks on the forum don't care about HDD hibernation, which is why it has not been reported more widely. But yes, it is also busted in U2. It still works correctly on 1.04b/918+ 6.2.3 U2; however, you need to switch to AHCI-based controllers like the JMB585-based ones, since SAS/LSI does not work correctly for hibernation on 918+ either.
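To check on your own box, something like this from an SSH shell:

grep "ioctl device failed" /var/log/scemd.log | tail    # the three errors repeat every minute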


1 hour ago, richv31 said:

HDD hibernation is broken from 6.2.3 onwards for 1.03b/3615/3617 due to new ioctl errors in hardware monitoring (most likely).

 

Hi @richv31, I saw IG-88 mention that there is a serious problem with 918+ and scsi/sas drivers, at least with mpt2sas/mpt3sas, where drives do not wake up properly from HDD hibernation. But he hasn't mentioned the same issue for 3615.

 

Does 1.03b/3615 on 6.2.3 also suffer from the same issue? That is exactly my combination. Thanks a lot!

Edited by e-ghost

2 hours ago, richv31 said:

HDD hibernation is broken from 6.2.3 onwards for 1.03b/3615/3617 due to new ioctl errors in hardware monitoring (most likely).

Wasn't that problem just about 918+, because it does not have native scsi/sas support?

3615/17 already comes with its own libscsi, libsas, and LSI mpt drivers; those can't be broken, or at least it should be possible to get it working by relying on the original/native drivers.

EDIT: just checked my own thread about that, it's just 918+
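A quick way for anyone following along to see which SAS driver a box actually loaded (plain lsmod, nothing loader-specific):

lsmod | grep -i mpt    # shows whether mpt2sas/mpt3sas (or another mpt driver) is in use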

 

24 minutes ago, e-ghost said:

Does 1.03b/3615 on 6.2.3 also suffer from the same issue? That is exactly my combination

no


34 minutes ago, richv31 said:

you need to switch to AHCI-based controllers like the JMB585-based ones

That's just the one I prefer because it has PCIe 3.0 capability: double the speed on its two PCIe lanes compared with PCIe 2.0 (roughly 500 MB/s per lane on PCIe 2.0 vs. ~1000 MB/s per lane on PCIe 3.0, so ~1000 MB/s vs. ~2000 MB/s total).

The often-used Apollo Lake only supports PCIe 2.0, so there is no gain on a lot of systems.

There are also two PCIe 2.0, two-lane controllers with 8 ports (ASMedia and Marvell chips).


5 minutes ago, powerplyer said:

I would rather not redo my 3615; if I can fix it via the extra.lzma, that would be great. I am looking to recreate my USB stick based on Jun's 1.03b with the extra.lzma and hope it boots. I do not have another box for my 10x 10TB drives.

You don't have to recreate it. Since you already have 6.2.3 running, you would just replace the extra.lzma on the 2nd partition with the one made for 6.2.3, or with the old one from Jun's loader.

 

When recreating it from loader 1.03b, you would be missing the new 6.2.3 kernel on the 2nd partition (rd.gz and zImage, which can be extracted with 7-Zip from the 6.2.3 *.pat file).
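A sketch of that kernel extraction; the .pat filename here is an assumption, use whatever you downloaded for 6.2.3:

7z e DSM_DS3615xs_25426.pat -o./kernel rd.gz zImage   # pull just rd.gz and zImage from the pat file
# then copy both files onto the loader's 2nd partition next to extra.lzma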

Edited by IG-88

@IG-88 Thank you for the information. Just so I do not completely screw up my configuration, can you please confirm whether the following steps to update my system are correct? Sorry in advance for all the questions.

  1. Download extra3615_v0.11_test from your post.
  2. Remove the USB stick from my system.
  3. Open it on a Windows system with OSFMount.
  4. In OSFMount: Mount image > select partition 1 > untick "Read-only drive" > OK > double-click the mounted drive.
  5. Copy in the extra.lzma (~4 MB) from the zip file (your version, extra3615_v0.11_test).
  6. Close OSFMount: Dismount all & Exit.
  7. Plug the stick back into my NAS box.

 

A) Are my steps correct?

B) Is there anything I need to do on the NAS side?

 

I am a bit confused by this statement: "When recreating it from loader 1.03b, you would be missing the new 6.2.3 kernel on the 2nd partition (rd.gz and zImage, which can be extracted with 7-Zip from the 6.2.3 *.pat file)." I assume this is only needed if I am recreating the USB drive?
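For reference, a hedged Linux equivalent of the OSFMount steps above (the device name /dev/sdb is an assumption; confirm with lsblk first, and note IG-88 says extra.lzma sits on the 2nd partition):

lsblk                               # identify which device is the USB loader stick
sudo mount /dev/sdb2 /mnt           # mount the loader's 2nd partition
sudo cp extra.lzma /mnt/extra.lzma  # overwrite with the 6.2.3 build
sudo umount /mnt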

Edited by powerplyer

Please refer to this post on what works and when: 

 

 

Basically, 3615/1.03b/6.2.3+ with a SAS controller (I have not tested AHCI) will never hibernate. It works fine in 6.2.2.

On 918+/1.04b/any_OS_version with a SAS controller, drives will go into hibernation but will not wake up correctly (controller errors) for non-basic volumes. Hence you need to disable hibernation in the UI to prevent possible data loss. Everything works as expected with an AHCI controller.


4 hours ago, IG-88 said:

Wasn't that problem just about 918+, because it does not have native scsi/sas support? [...] EDIT: just checked my own thread about that, it's just 918+

 

1.03b/3615/6.2.3+/sas_controller will never enter HDD hibernation, most likely due to the errors logged in scemd.log that started with 6.2.3. The 918+ issue with a SAS controller is completely different, as you say.

Link to comment
Share on other sites

12 hours ago, richv31 said:

1.03b/3615/6.2.3+/sas_controller will never enter HDD hibernation, most likely due to the errors logged in scemd.log that started with 6.2.3.

Hi @richv31, is this the ioctl error that is causing the HDD hibernation failure? (I have changed the 3615xs's extra.lzma to v0.11_test, which is for DSM 6.2.3, and I can see these:)

 

bash-4.3# tail -f /var/log/scemd.log
2020-08-12T21:24:57+08:00 fevernas03 scemd: sysnotify_get_title_key.c:65 Can get category description from /var/cache/texts/enu/notification_category
2020-08-12T21:24:58+08:00 fevernas03 scemd: sysnotify_send_notification.c:615 SYSNOTIFY: [DataVolumeFull] was sent to desktop
2020-08-12T21:24:58+08:00 fevernas03 scemd: data_volume_check.c:464 Volume1 Full
2020-08-12T21:24:58+08:00 fevernas03 scemd: system_status.c:278 Deep sleep timer:-1 min(s)
2020-08-12T21:24:58+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:24:58+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:24:58+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:25:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:25:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:25:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:26:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:26:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:26:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:27:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:27:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:27:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed
2020-08-12T21:28:57+08:00 fevernas03 scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T21:28:57+08:00 fevernas03 scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T21:28:57+08:00 fevernas03 scemd: polling_fan_speed_rpm.c:35 ioctl device failed

Thanks a lot!


3 minutes ago, e-ghost said:

Hi @richv31, is this the ioctl error that is causing the HDD hibernation failure? (full scemd.log excerpt above)

Yup, these errors were not present in 6.2.2.


Hi.

I already tried with your (@IG-88) extra.lzma and I still have those 3 errors every minute:


2020-08-12T10:30:57 nas scemd: polling_sys_thermal.c:35 ioctl device failed
2020-08-12T10:30:57 nas scemd: polling_sys_voltage.c:35 ioctl device failed
2020-08-12T10:30:57 nas scemd: polling_fan_speed_rpm.c:35 ioctl device failed

 

I migrated from 6.2.2 U2 to 6.2.3 U2, and in the prior version hibernation worked perfectly.

I have an HP MicroServer G7 (N54L) with ds3615 v1.03b and the latest extra.lzma (v0.11_test).

 

On 8/9/2020 at 10:33 PM, IG-88 said:

What DSM type: 3615/3617/918+?

Any extra.lzma, and which version? Was the extra also replaced when changing from 6.2.2 to 6.2.3? There are some power-management changes that might require different drivers.

 

Does anyone know how to get it working again? Is this a loader issue or a Synology one? Thanks!

Edited by nfarias
added extra.lzma version

I also want to know. My place is very hot and I do not use the HDDs very often, so I really need HDD hibernation to keep the drives cooler. Is there any way to resolve this, or will I need to downgrade? If downgrading is the only solution, how do I do it?

 

Thanks a lot!


You need to downgrade to 6.2.2 to get HDD hibernation working. You may try removing scemd.log and re-pointing it to /dev/null, in case the continuous file writing is preventing the drives from going to sleep. But it might be a more serious issue where the code generating the error actually keeps creating load on the system.
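A literal sketch of that idea, as root over SSH; untested, and DSM may recreate the file on reboot or log rotation:

rm /var/log/scemd.log
ln -s /dev/null /var/log/scemd.log   # scemd's writes are now discarded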

Edited by richv31

51 minutes ago, richv31 said:

You may try removing scemd.log and re-pointing it to /dev/null, in case the continuous file writing is preventing the drives from going to sleep.

 

If continuously writing spurious errors to log files is in fact the reason hibernation can't occur, there are two fairly feasible solutions: 1) repoint scemd.log to a ramdisk, or 2) adapt the log filter that I posted for SMART error suppression (a sketch of option 1 follows the link). Anyway, take a look and see if it can help.

 

https://xpenology.com/forum/topic/29581-suppress-virtual-disk-smart-errors-from-varlogmessages/
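A minimal sketch of option 1, assuming a small tmpfs; the mount point and size are illustrative:

mkdir -p /mnt/ramlog
mount -t tmpfs -o size=8m tmpfs /mnt/ramlog     # RAM-backed scratch space for the log
mv /var/log/scemd.log /var/log/scemd.log.bak
ln -s /mnt/ramlog/scemd.log /var/log/scemd.log  # log writes now land in RAM, not on disk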

 


Has anyone having HDD hibernation issues found a workaround or fix? Downgrading seems to be the only solution, but I think going back to 6.2.2 would wipe all my data. Thanks again to the people who have contributed. This weekend I plan on updating my 6.2.3 to IG-88's extra.lzma (0.11), but I am not very hopeful.

