vasiliy_gr

Members
  • Content Count: 45

Community Reputation

1 Neutral

About vasiliy_gr

  • Rank: Junior Member

  1. A quick and dirty fix based on what was written above. Modified /etc.defaults/syslog-ng/patterndb.d/scemd.conf: filter f_scemd { program(scemd); }; filter f_scemd_sev { level(err..emerg) }; # destination d_scemd { file("/var/log/scemd.log"); }; destination d_scemd { file("/dev/null"); }; log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); }; (a cleaned-up view of this edit is sketched after this list). Edit the file and reboot. Result: HDD hibernation works fine now... My config: DS3617xs, DSM 6.2.3-25426-U2, extra.lzma v0.11.2_test.
  2. The same problem still exists in DSM 6.2.3 for ds3615 with v0.11_test:
     2020-05-27T18:28:47+03:00 XPEnologyX scemd: polling_sys_thermal.c:35 ioctl device failed
     2020-05-27T18:28:47+03:00 XPEnologyX scemd: polling_sys_voltage.c:35 ioctl device failed
     2020-05-27T18:28:47+03:00 XPEnologyX scemd: polling_fan_speed_rpm.c:35 ioctl device failed
     This repeats every minute, and it prevents the HDDs from hibernating. I did not test the ds3617 variant with the new extra.lzma as I think it will be the same in this case, but I can test it as well if that makes sense. For installat
  3. Sorry for the offtopic... About DSM 6.2.3 with 1.03b (original jun's) and ds3615/ds3617 (I tried both): "HDD Hibernation" stopped working. I think the reason is some problem with the drivers scemd depends on, and so error messages every minute prevent the drives from sleeping. So we still need special extensions for ds3615/ds3617 with DSM 6.2.3...
  4. My report on the update. I have 3 Xpenology boxes running (with different versions of DSM/loader), so I waited for the ds918+ extras to update them all at once. 1. Hardware: i5-7600, Intel X540-T2, LSI 9305-16i (LSISAS3224), 16*HDD. Previous version: ds3617xs, DSM 6.1.7, loader 1.02b. Goal: updated DSM and driver module versions. Updated version: ds3617xs, DSM 6.2.2-24922, loader 1.03b + extra3617_v.0.5. Update method: migration with the new loader files on the usb drive. Result: absolutely flawless update. Finally I had to manually edit the synoinfo.conf-s to make all my 16 HDDs vis
  5. Thank you! Very good news. I also tried this method with an nvme drive (Kingston A1000 240GB) in the motherboard's internal nvme connector, so I changed those values to "0000:04:00.0". And it works! (A sketch of how to look up the pci address is given after this list.) The only thing I still do not understand is: what about a second nvme drive for a RW cache setup? Does it have the same pci ids as the first one on the add-on card? If so, what about my second nvme slot on the motherboard - no chance?..
  6. Yes... I spent a lot of hours yesterday trying to compile alx.ko from backports myself. The best result was: the module insmod-ed and immediately crashed. It may be some incompatibility with the real kernel in use or its config (I compiled the module against the latest official syno kernel with its official config; a generic out-of-tree build sketch is given after this list). As for the latest gnoboot - there alx.ko is present and works fine. So we have to wait for the next nanoboot releases. By the way, does anyone know if this thread is a source of feedback and driver requests for nanoboot's author? If so I'd like to ask the author for an alx.ko backport in nano
  7. I think you should eliminate the overlap in *portcfg. Try setting both esataportcfg and usbportcfg to zero. Also expand your internalportcfg to an obviously high value (0xfffff, for example). If you then find all your hdd-s - ok. If not - try a higher MaxDisk setting. After finding all the hdd-s, reduce internalportcfg to the actual bitmap and raise the other two *portcfg values to their actual bitmaps (see the bitmap example after this list). As for your current settings - they are incorrect, because they have 'one' bits set simultaneously in internalportcfg and the two other *portcfg-s.
  8. For the syno password I used the following code:
     #include
     #include
     #include
     int gcd(int a, int b) { return (b ? gcd(b, a % b) : a); }
     void main() {
  9. I have both a perc h310 and an n54l running baremetal under gnoboot 10.5, but they are two different baremetal xpenologies. Seriously speaking, I do use the h310 reflashed to 9211-8i/IT. The reflash procedure was rather complex: I had to flash it consecutively to the latest Dell HBA fw, then to 9211/IR, and only third to 9211/IT. I used three different tool chains (Dell's, LSI's official one, and lsiutil, respectively, for those three stages). I also used two different mobos (one with a dos environment and the other with an efi shell), and I also had to cover two pins on the pci-e connector. So I do not know if you can do it i
  10. Today I decided to change the hardware configuration on one of my xpenologies. Previously it had an Asus P8Z77-I mobo with a Pentium G2130 (1155). I changed the mobo to a GIGABYTE GA-H87N with an i5-4440 (1150), purely to have two NICs on the mobo. So - the system is running dsm 5.0-4458u2 under gnoboot 10.5. All onboard sata controllers are disabled. 8 HDD-s are connected via an lsi 9211-8i. On the previous hardware configuration all my 8 hdd-s were enumerated in the DSM web interface as disks 1-8, but now they are enumerated as disks 2-9 with disk one empty. I looked through the dmesg output but I do not see any
  11. I migrated my third xpenology from 4.3-3810 (trantors build r1.0) to 5.0-4458 just an hour ago. All my data and settings stayed unharmed, except for the remote permissions on all the shares, so I had to restore permissions manually (I mean manually in the DSM gui) for both nfs and smb access. Maybe it is also your case.
  12. I received my H310 yesterday and tried to reflash it into a 9211-8i/IT. It was a little bit complex... I had to cover its pci-e pins B5/B6 with tape to make it work on a non-uefi mobo. Then I flashed it to Dell's official HBA firmware with Dell's tools (having previously killed its own firmware with megarec). Then I flashed it to the 9211/IR firmware with LSI's tools. And at last I took it to a uefi mobo and flashed it to 9211/IT with the efi version of lsiutil (the rough command sequence is sketched after this list). As a result I have a 9211/IT made from an H310. No performance or compatibility problems. But I still need pci-e pins B5/B6 to be covered, if I want it to work with
  13. "Let me know if you ever still get a kernel traceback, so I can include the patch for 10.5." 34 hours of runtime:
      [123490.605535] irq 16: nobody cared (try booting with the "irqpoll" option)
      [123490.605540] Pid: 0, comm: swapper/0 Tainted: P C O 3.2.40 #6
      [123490.605543] Call Trace:
      [123490.605545] [] ? __report_bad_irq+0x2c/0xb4
      [123490.605556] [] ? note_interrupt+0x15d/0x1b7
      [123490.605560] [] ? handle_irq_event_percpu+0xfa/0x117
      [123490.605564] [] ? handle_irq_event+0x48/0x70
      [123490.605569] [] ? handle_fasteoi_irq+0x74/0xae
      [123490.605572] [] ? handle_irq+
  14. Checked with the irqpoll boot parameter. 15 hours of runtime, and no problem with irq 16 in dmesg (the quick checks I used are sketched after this list). So it must be the problem you implied.
  15. "It might be related to the cpu_idle driver that I backported from kernel.org. Try it and let me know if it helps." During the previous test session I got that irq16 error in dmesg twice in a row, on two reboots, within less than 10 minutes of DSM starting. Today I tried to reproduce the situation (no changes in hardware or software) with no luck over 2 hours (with all the activities I had run previously). A lot of segfaults from dsmnotify.cgi, but no crashes on the irq... So since I can't reproduce the bug, I also have no way to sensibly test the kernel option you mentioned. Sorry... Three hour
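
A cleaned-up view of the scemd.conf edit from post 1 above - the same content as in the post, just broken onto separate lines (the commented-out destination is the stock one that was replaced):

    # /etc.defaults/syslog-ng/patterndb.d/scemd.conf
    filter f_scemd { program(scemd); };
    filter f_scemd_sev { level(err..emerg) };
    # destination d_scemd { file("/var/log/scemd.log"); };
    destination d_scemd { file("/dev/null"); };
    log { source(src); filter(f_scemd); filter(f_scemd_sev); destination(d_scemd); };

With this in place, scemd messages of severity err and above are dropped instead of being written to /var/log/scemd.log, which (per posts 1-3) is what was waking the disks every minute.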
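
For post 5: a way to look up the pci bus address of an nvme drive. This is plain Linux, nothing DSM-specific, and the device name nvme0 is just an example:

    # list nvme controllers together with their pci addresses
    lspci -nn | grep -i 'non-volatile'
    # or resolve the address of an already-detected drive from sysfs;
    # the last path component is the "0000:04:00.0"-style address
    readlink -f /sys/class/nvme/nvme0/device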
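
For post 6: the generic shape of the out-of-tree build such an attempt boils down to. The paths are placeholders, and the backports package adds its own wrapper makefiles on top of this, so treat it as a sketch only:

    # KSRC points at the prepared DSM kernel source with its official .config
    KSRC=/usr/src/linux-syno
    # build the modules in the current directory against that tree
    make -C "$KSRC" M="$(pwd)" modules
    # then load the result and watch dmesg for the reason it crashes
    insmod ./alx.ko; dmesg | tail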
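
For post 7: an example of how the non-overlapping bitmaps can be laid out. The numbers are made up for illustration - 16 internal disks on bits 0-15, 2 esata ports on bits 16-17, 4 usb ports on bits 18-21:

    printf 'internalportcfg=0x%x\n' $(( (1 << 16) - 1 ))          # 0xffff
    printf 'esataportcfg=0x%x\n'    $(( ((1 << 2) - 1) << 16 ))   # 0x30000
    printf 'usbportcfg=0x%x\n'      $(( ((1 << 4) - 1) << 18 ))   # 0x3c0000

The only requirement the post is pointing at is that no bit is set in more than one of the three masks, and that the MaxDisk setting covers the internal count.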
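
For post 12: roughly the commands those three tool chains correspond to, as they appear in the usual crossflash guides. Exact flags and firmware file names vary between tool versions and guides, so this is a memory aid rather than a recipe:

    # stage 0 (dos): kill the Dell RAID firmware and SBR with megarec
    megarec -writesbr 0 sbrempty.bin
    megarec -cleanflash 0
    # stage 1 (dos): flash Dell's official HBA firmware with Dell's own tools
    # stage 2 (dos): flash the LSI 9211-8i IR firmware with LSI's sas2flsh
    sas2flsh -o -f 2118ir.bin
    # stage 3 (efi shell): use the efi build of lsiutil (interactive menus)
    # to download the 9211-8i IT firmware onto the card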
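
For post 14: the two quick checks behind that statement - both are standard commands, nothing loader-specific:

    # confirm the kernel really booted with irqpoll on its command line
    grep -o irqpoll /proc/cmdline
    # and look for the "nobody cared" complaint about irq 16
    dmesg | grep -A 2 'irq 16'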