vasiliy_gr

Members
  • Content Count

    42
  • Joined

  • Last visited

Community Reputation

0 Neutral

About vasiliy_gr

  • Rank
    Junior Member


  1. My report on updating. I have three XPEnology machines running (with different DSM/loader versions), so I waited for the ds918+ extras to update them all at once.

     1. Hardware: i5-7600, Intel X540-T2, LSI 9305-16i (LSISAS3224), 16 HDDs. Previous version: ds3617xs, DSM 6.1.7, loader 1.02b. Goal: updated DSM and driver module versions. Updated version: ds3617xs, DSM 6.2.2-24922, loader 1.03b + extra3617_v.0.5. Update method: migration, with the new loader files on the USB drive. Result: an absolutely flawless update. Finally, I had to edit the synoinfo.conf files manually to make all 16 of my HDDs visible (a sketch of the edit follows below).

     2. Hardware: i3-8100, Intel X550-T1, LSI 9207-8i (LSISAS2308), 8 HDDs. Previous version: ds918+, DSM 6.2.2, loader 1.04b. Goal: updated driver module versions. Updated version: ds918+, DSM untouched, loader 1.04b + extra918_v.0.8_std. Update method: deleted the old /usr/lib/modules/update/ and /usr/lib/firmware/i915/ and rebooted with the new loader files on the USB drive. Result: no problems detected; I have not yet tested whether anything changed. Comment: I previously had a problem with HDD hibernation on this configuration. After hibernation the HDDs did not start and a hard reboot was required (the DSM web interface and ssh did not work either).

     3. Hardware: i5-7400, Asus XG-C100C (atlantic), LSI 9207-8i (LSISAS2308), 8 HDDs. Previous version: ds3617xs, DSM 6.1.7, loader 1.02b. Goal: updated DSM and driver module versions, plus a change to ds918+ for NVMe cache support. Updated version: ds918+, DSM 6.2.2-24922, loader 1.04b + extra918_v.0.8_std. Update method: migration (both DSM version and hardware type) with the new loader files on the USB drive. Result: an absolutely flawless update. Comment: I also wanted to check the hibernation feature, as in (2) above. And here I checked it... After hibernation the HDDs did not come back to life without a hard reset by the button.

     So... First of all, many thanks to IG-88 - really amazing work! Second, sorry for the offtopic, but maybe someone can quickly help if this issue has already been discussed somewhere. On two of my configurations HDD hibernation does not work. To be more exact, going into hibernation works, but the disks do not wake up from it until a hardware reset. Both configurations run DSM 6.2.2 for ds918+. Previously, DSM 6.1.7 for ds3617xs worked fine on the same hardware, and DSM 6.2.2 for ds3617xs currently also works without problems. The problem is not directly related to the extra.lzma from this topic, as I had the same problem on jun's pure 1.04b loader. And if you are wondering why I need HDD hibernation and ds918+ together: I want to try an NVMe cache together with hibernated drives. Maybe it is impossible for a hundred reasons... But I can't even try, because hibernation does not work on the only hardware version that supports NVMe.
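     For reference, a minimal sketch of the synoinfo.conf edit for 16 internal HBA disks. The exact values are only illustrative and depend on your port layout; the change has to be made in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, or it may be reverted on the next update:

        # assumed layout: 16 internal ports on bits 0-15, USB slots shifted above them
        maxdisks="16"
        internalportcfg="0xffff"
        esataportcfg="0x0"
        usbportcfg="0x30000"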
  2. Thank you! Very good news. I also tried this method with an NVMe drive (Kingston A1000 240GB) in the motherboard's own NVMe connector, so I changed those values to "0000:04:00.0" - and it works! The one thing I still do not understand is: what about a second NVMe drive for a read-write cache setup? Does it get the same PCI IDs as the first one does on an add-on card? And if so, what about my second NVMe slot on the motherboard - no chance?..
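     For anyone repeating this, a rough sketch of how I would find and substitute the address - assuming the method in question patches hard-coded PCI addresses inside /usr/lib/libsynonvme.so.1 (the stock "0000:00:13.1" below is an assumed example and should be verified first), and noting that a sed edit of a binary is only safe when the replacement string has exactly the same length as the original:

        # check which PCI address the library actually expects (values vary)
        strings /usr/lib/libsynonvme.so.1 | grep '0000:'
        # find the real address of your NVMe drive (mine was 0000:04:00.0)
        ls -l /sys/class/nvme/nvme0/device
        # back up, then swap the expected address for the real one (same length!)
        cp /usr/lib/libsynonvme.so.1 /usr/lib/libsynonvme.so.1.bak
        sed -i 's|0000:00:13\.1|0000:04:00.0|g' /usr/lib/libsynonvme.so.1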
  3. Yes... I spent a lot of hours yesterday trying to compile alx.ko from backports myself. The best result was: the module insmod-ed and immediately crashed. It may be some incompatibility with the kernel actually used, or with its config (I compiled the module against the latest official syno kernel with its official config). As for the latest gnoboot - alx.ko is present there and works fine. So we have to wait for the next nanoboot releases. By the way, does anyone know whether this thread is a source of feedback and driver requests for nanoboot's author? If so, I'd like to ask the author for an alx.ko backport in nanoboot. In my case I need this driver for the second onboard NIC on the Gigabyte GA-H87N and GA-H87N-WIFI, and it is the only problem preventing me from migrating to nanoboot on those two configurations (the other two configurations do not have any problems - many thanks for the LSI 9211 support). My build attempt looked roughly like the sketch below.
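     A generic out-of-tree build sketch, under my assumptions (the paths are hypothetical; the backports tree adds its own build system and compat layer on top of this; and the kernel source plus .config must match the running kernel exactly - a vermagic or config mismatch is a likely cause of the crash I saw):

        # prepare the official Synology kernel source with its official config
        cd /usr/src/linux-syno                  # hypothetical path to the syno source
        cp /path/to/official-syno.config .config
        make oldconfig && make modules_prepare
        # build the external module directory against that prepared tree
        make -C /usr/src/linux-syno M=/path/to/module/src modules
        # on the target: insmod the resulting .ko and watch dmesg for traces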
  4. I think you should eliminate the overlap in the *portcfg values. Try setting both esataportcfg and usbportcfg to zero, and expand internalportcfg to an obviously high value (0xfffff, for example). If you then see all your HDDs - ok; if not, try a higher maxdisks setting. After finding all the HDDs, reduce internalportcfg to the actual bitmap and raise the other two *portcfg values to their actual bitmaps. Your current settings are incorrect, because they have the same bits set simultaneously in internalportcfg and in the two other *portcfg values - a worked example follows below.
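     To illustrate the rule (the values are only an example): each disk slot corresponds to one bit, and every bit may belong to exactly one of the three masks. For 8 internal + 4 eSATA + 12 USB slots:

        internalportcfg="0xff"        # bits 0-7
        esataportcfg="0xf00"          # bits 8-11
        usbportcfg="0xfff000"         # bits 12-23
        # sanity check: the pairwise AND of the masks must be zero
        printf '%x %x %x\n' $((0xff & 0xf00)) $((0xff & 0xfff000)) $((0xf00 & 0xfff000))
        # -> 0 0 0, i.e. no slot is claimed twice; maxdisks should cover bits 0-23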
  5. For the syno password I used the following code:

        #include <stdio.h>
        #include <sys/time.h>
        #include <time.h>

        int gcd(int a, int b) { return (b ? gcd(b, a % b) : a); }

        int main(void)
        {
            struct timeval tvTime;
            struct tm tmOutput, tmOutputUTC;

            gettimeofday(&tvTime, 0);
            localtime_r(&(tvTime.tv_sec), &tmOutput);
            gmtime_r(&(tvTime.tv_sec), &tmOutputUTC);
            /* tm_mon is 0-11; the password scheme wants 1-12 */
            tmOutput.tm_mon += 1;
            tmOutputUTC.tm_mon += 1;
            /* month in hex, month in decimal, '-', day in hex, gcd(month, day) */
            printf("password for today is: %x%02d-%02x%02d\n",
                   tmOutput.tm_mon, tmOutput.tm_mon,
                   tmOutput.tm_mday, gcd(tmOutput.tm_mon, tmOutput.tm_mday));
            printf("password for today (UTC) is: %x%02d-%02x%02d\n\n",
                   tmOutputUTC.tm_mon, tmOutputUTC.tm_mon,
                   tmOutputUTC.tm_mday, gcd(tmOutputUTC.tm_mon, tmOutputUTC.tm_mday));
            return 0;
        }

     Compile it and run (I used gcc). Use whichever password corresponds to the clock setting on the target (local or UTC).
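     A worked example of the scheme (my arithmetic, the file name is arbitrary): on June 15, month 6 prints as hex "6" and decimal "06", day 15 prints as hex "0f", and gcd(6, 15) = 3, so the local line would read "password for today is: 606-0f03".

        gcc -o synopwd synopwd.c
        ./synopwd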
  6. I have both a PERC H310 and an N54L running baremetal under gnoboot 10.5 - but they are two different baremetal XPEnologies. Seriously speaking, I do use an H310 reflashed to 9211-8i/IT. The reflash procedure was rather complex: I had to flash it consecutively to the latest Dell HBA firmware, then to 9211/IR, and only third to 9211/IT, using three different tool chains (Dell's, the official LSI one, and lsiutil, respectively, for those three stages). I also used two different mobos (one with a DOS environment and the other with an EFI shell), and had to cover two pins on the PCI-e connector. So I do not know whether you can do it inside the N54L, but you can try. As for me, I decided not to buy H310 adapters any more, only original LSI 9211 (or 9240) cards as I did previously for two other hardware setups - the small price difference does not justify the complexity of making it functional. The rough outline of the stages is sketched below.
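     Roughly, the stages looked like this - a from-memory outline, not a guide; the firmware file names are placeholders, and the exact megarec/sas2flsh invocations should be taken from a proper crossflash guide before touching a card:

        # stage 0 (DOS): wipe the Dell SBR and firmware so the card accepts other images
        megarec -writesbr 0 sbrempty.bin
        megarec -cleanflash 0
        # stage 1 (DOS, Dell tools): flash the latest Dell HBA firmware
        # stage 2 (DOS, LSI sas2flsh): crossflash to the 9211-8i IR firmware
        sas2flsh -o -f 2118ir.bin
        # stage 3 (EFI shell, lsiutil): switch from the IR to the IT firmware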
  7. Today I decided to change the hardware configuration of one of my XPEnologies. Previously it had an Asus P8Z77-I mobo with a Pentium G2130 (socket 1155); I changed the mobo to a GIGABYTE GA-H87N with an i5-4440 (socket 1150), simply for the sake of having two NICs on the mobo. So: the system is running DSM 5.0-4458u2 under gnoboot 10.5, all onboard SATA controllers are disabled, and 8 HDDs are connected via an LSI 9211-8i. On the previous hardware configuration all my 8 HDDs were enumerated in the DSM web interface as disks 1-8, but now they are enumerated as disks 2-9 with disk 1 empty. I looked through the dmesg output but I do not see any reason for the appearance of the empty disk 1. When the mpt2sas module loads, it first enumerates the disks as sda, sdb and so on up to sdh - but then it suddenly changes to sdb-sdi, and I have no idea why. Here is the part of dmesg where the shift comes in:

        [   11.007078] sd 0:0:0:0: [sda] Synchronizing SCSI cache
        [   11.007098] sd 0:0:0:0: [sda] Result: hostbyte=0x01 driverbyte=0x00
        [   11.009110] mpt2sas0: removing handle(0x000d), sas_addr(0x4433221103000000)
        [   11.014967] sd 0:0:1:0: [sdb] Synchronizing SCSI cache
        [   11.014988] sd 0:0:1:0: [sdb] Result: hostbyte=0x01 driverbyte=0x00
        [   11.018166] mpt2sas0: removing handle(0x0009), sas_addr(0x4433221100000000)
        [   11.023911] sd 0:0:2:0: [sdc] Synchronizing SCSI cache
        [   11.023930] sd 0:0:2:0: [sdc] Result: hostbyte=0x01 driverbyte=0x00
        [   11.025235] mpt2sas0: removing handle(0x000a), sas_addr(0x4433221106000000)
        [   11.031808] sd 0:0:3:0: [sdd] Synchronizing SCSI cache
        [   11.031827] sd 0:0:3:0: [sdd] Result: hostbyte=0x01 driverbyte=0x00
        [   11.035376] mpt2sas0: removing handle(0x000b), sas_addr(0x4433221105000000)
        [   11.040601] sd 0:0:4:0: [sde] Synchronizing SCSI cache
        [   11.040620] sd 0:0:4:0: [sde] Result: hostbyte=0x01 driverbyte=0x00
        [   11.042523] mpt2sas0: removing handle(0x000c), sas_addr(0x4433221104000000)
        [   11.048235] sd 0:0:5:0: [sdf] Synchronizing SCSI cache
        [   11.048254] sd 0:0:5:0: [sdf] Result: hostbyte=0x01 driverbyte=0x00
        [   11.051655] mpt2sas0: removing handle(0x000e), sas_addr(0x4433221102000000)
        [   11.057310] sd 0:0:6:0: [sdg] Synchronizing SCSI cache
        [   11.057329] sd 0:0:6:0: [sdg] Result: hostbyte=0x01 driverbyte=0x00
        [   11.059828] mpt2sas0: removing handle(0x000f), sas_addr(0x4433221107000000)
        [   11.065696] sd 0:0:7:0: [sdh] Synchronizing SCSI cache
        [   11.065715] sd 0:0:7:0: [sdh] Result: hostbyte=0x01 driverbyte=0x00
        [   11.068470] mpt2sas0: removing handle(0x0010), sas_addr(0x4433221101000000)
        [   11.072872] mpt2sas0: sending message unit reset !!
        [   11.074675] mpt2sas0: message unit reset: SUCCESS
        [   11.074707] mpt2sas 0000:01:00.0: PCI INT A disabled
        [   11.245386] mpt2sas version 13.100.00.00 loaded
        [   11.245452] scsi1 : Fusion MPT SAS Host
        [   11.246147] mpt2sas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [   11.246156] mpt2sas 0000:01:00.0: setting latency timer to 64
        [   11.246159] mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (3967220 kB)
        [   11.246216] mpt2sas 0000:01:00.0: irq 42 for MSI/MSI-X
        [   11.246227] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 42
        [   11.246229] mpt2sas0: iomem(0x00000000f05c0000), mapped(0xffffc900043a0000), size(16384)
        [   11.246230] mpt2sas0: ioport(0x000000000000e000), size(256)
        [   11.363118] mpt2sas0: Allocated physical memory: size(7418 kB)
        [   11.363120] mpt2sas0: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432)
        [   11.363121] mpt2sas0: Scatter Gather Elements per IO(128)
        [   11.421874] mpt2sas0: LSISAS2008: FWVersion(18.00.00.00), ChipRevision(0x03), BiosVersion(07.35.00.00)
        [   11.421876] mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
        [   11.421923] mpt2sas0: sending port enable !!
        [   11.425539] mpt2sas0: host_add: handle(0x0001), sas_addr(0x500605b036c03400), phys(8)
        [   11.432583] mpt2sas0: port enable: SUCCESS
        [   11.452539] scsi 1:0:0:0: Direct-Access WDC WD20EARX-00PASB0 AB51 PQ: 0 ANSI: 6
        [   11.452544] scsi 1:0:0:0: SATA: handle(0x000d), sas_addr(0x4433221103000000), phy(3), device_name(0x50014ee206c310b2)
        [   11.452545] scsi 1:0:0:0: SATA: enclosure_logical_id(0x500605b036c03400), slot(0)
        [   11.452619] scsi 1:0:0:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
        [   11.452621] scsi 1:0:0:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
        [   11.452715] syno_disk_type_get: disk driver 'Fusion MPT SAS Host'
        [   11.452717] syno_disk_type_get: Got UNKNOWN port type 0
        [   11.452720] sd_probe: Got UNKNOWN dev_prefix (sd), index (1), disk_name (sdb), disk_len (20)
        [   11.452979] sd 1:0:0:0: Attached scsi generic sg0 type 0
        [   11.458889] sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
        [   11.458891] sd 1:0:0:0: [sdb] 4096-byte physical blocks
        [   11.466946] scsi 1:0:1:0: Direct-Access WDC WD20EARS-00MVWB0 AB51 PQ: 0 ANSI: 6
        [   11.466950] scsi 1:0:1:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x50014ee6ab72732a)
        [   11.466951] scsi 1:0:1:0: SATA: enclosure_logical_id(0x500605b036c03400), slot(3)
        [   11.467022] scsi 1:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
        [   11.467024] scsi 1:0:1:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
        [   11.467110] syno_disk_type_get: disk driver 'Fusion MPT SAS Host'
        [   11.467112] syno_disk_type_get: Got UNKNOWN port type 0
        [   11.467114] sd_probe: Got UNKNOWN dev_prefix (sd), index (2), disk_name (sdc), disk_len (20)
        [   11.467447] sd 1:0:1:0: Attached scsi generic sg1 type 0
        [   11.471056] sd 1:0:1:0: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)

     Any ideas?..
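     A couple of commands that may help narrow this down - standard sysfs/procfs paths on these 3.x kernels, offered as a diagnostic sketch rather than a known fix:

        # see which SCSI host and target each sdX name currently maps to
        ls -l /sys/block/ | grep sd
        # list every device the SCSI layer knows about, in probe order
        cat /proc/scsi/scsi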
  8. I migrated my third XPEnology from 4.3-3810 (trantor's build r1.0) to 5.0-4458 just an hour ago. All my data and settings stayed unharmed, except for the remote permissions on all the shares. So I had to restore the permissions manually (I mean manually in the DSM GUI) for both NFS and SMB access. Maybe that is your case as well.
  9. I received my H310 yesterday and tried to reflash it into a 9211-8i/IT. It turned out to be a little bit complex... I had to cover its PCI-e pins B5/B6 with tape to make it work on a non-UEFI mobo. Then I flashed it to the official Dell HBA firmware with Dell's tools (having first killed its own firmware with megarec). Then I flashed it to the 9211/IR firmware with LSI's tools. And at last I took it to a UEFI mobo and, with the EFI version of lsiutil, flashed it to 9211/IT. As a result I have a 9211/IT made from an H310, with no performance or compatibility problems. But I still need PCI-e pins B5/B6 to be covered if I want it to work with old non-UEFI mobos. Sorry for the offtopic...
  10. Let me know if ever you still get kernel traceback. So I can include the patch for 10.5.
     34 hours of runtime:

        [123490.605535] irq 16: nobody cared (try booting with the "irqpoll" option)
        [123490.605540] Pid: 0, comm: swapper/0 Tainted: P C O 3.2.40 #6
        [123490.605543] Call Trace:
        [123490.605545] [] ? __report_bad_irq+0x2c/0xb4
        [123490.605556] [] ? note_interrupt+0x15d/0x1b7
        [123490.605560] [] ? handle_irq_event_percpu+0xfa/0x117
        [123490.605564] [] ? handle_irq_event+0x48/0x70
        [123490.605569] [] ? handle_fasteoi_irq+0x74/0xae
        [123490.605572] [] ? handle_irq+0x87/0x90
        [123490.605576] [] ? do_IRQ+0x48/0x9f
        [123490.605580] [] ? common_interrupt+0x6e/0x6e
        [123490.605582] [] ? lapic_next_event+0x10/0x15
        [123490.605590] [] ? IO_APIC_get_PCI_irq_vector+0x1c7/0x1c7
        [123490.605595] [] ? intel_idle+0xd6/0x10c
        [123490.605598] [] ? intel_idle+0xb6/0x10c
        [123490.605603] [] ? cpuidle_idle_call+0x6d/0xb4
        [123490.605607] [] ? cpu_idle+0x60/0x8c
        [123490.605612] [] ? start_kernel+0x327/0x333
        [123490.605617] [] ? early_idt_handlers+0x140/0x140
        [123490.605621] [] ? x86_64_start_kernel+0x10b/0x118
        [123490.605623] handlers:
        [123490.605639] [] usb_hcd_irq
        [123490.605642] Disabling IRQ #16

     And to be sure, on irqpoll:

        XPEnologyU> dmesg |grep irqpoll
        [    0.000000] Command line: irqpoll
        [    0.000000] Kernel command line: root=/dev/md0 ihd_num=1 netif_num=4 syno_hw_version=DS3612xs irqpoll
        [123490.605535] irq 16: nobody cared (try booting with the "irqpoll" option)
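     For anyone wanting to reproduce the irqpoll test: the parameter is simply appended to the kernel command line in the loader's boot menu. A sketch of a grub-style entry using the exact command line visible in the dmesg above (the kernel image path is a placeholder, and the menu file location and layout depend on the loader build):

        kernel /zImage root=/dev/md0 ihd_num=1 netif_num=4 syno_hw_version=DS3612xs irqpoll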
  11. Checked with the irqpoll boot parameter: 15 hours of runtime, and no problem with irq 16 in dmesg. So it must indeed be the problem you suspect.
  12. It might be related to cpu_idle driver that I backported from kernel.org. Try it and let me know if it helps
     During the previous test session I got that irq 16 error in dmesg twice in a row, on two reboots, within less than 10 minutes of DSM starting. Today I tried to reproduce the situation (no changes in hardware or software) with no luck for two hours (with all the activities I had run previously) - a lot of segfaults from dsmnotify.cgi, but no crashes on the irq... As I can't reproduce the bug, I also have no way to sensibly test the kernel option you mentioned. Sorry... Three hours of runtime now, no activity at all for the last hour. dmesg output:

        [ 7175.109848] dsmnotify.cgi[3916]: segfault at 0 ip 000000000804ccb8 sp 00000000fff006a0 error 6
        [ 7205.289947] dsmnotify.cgi[4066]: segfault at 0 ip 000000000804ccb8 sp 00000000ff8724c0 error 6
        [ 7235.774105] dsmnotify.cgi[4142]: segfault at 0 ip 000000000804ccb8 sp 00000000fff986b0 error 6
        [11292.139798] irq 16: nobody cared (try booting with the "irqpoll" option)
        [11292.139802] Pid: 0, comm: swapper/0 Tainted: P O 3.2.40 #6
        [11292.139805] Call Trace:
        [11292.139807] [] ? add_interrupt_randomness+0x39/0x157
        [11292.139819] [] ? __report_bad_irq+0x2c/0xb4
        [11292.139823] [] ? note_interrupt+0x15d/0x1b7
        [11292.139828] [] ? handle_irq_event_percpu+0xfa/0x117
        [11292.139831] [] ? handle_irq_event+0x48/0x70
        [11292.139836] [] ? handle_fasteoi_irq+0x74/0xae
        [11292.139840] [] ? handle_irq+0x87/0x90
        [11292.139843] [] ? do_IRQ+0x48/0x9f
        [11292.139848] [] ? common_interrupt+0x6e/0x6e
        [11292.139850] [] ? lapic_next_event+0x10/0x15
        [11292.139858] [] ? IO_APIC_get_PCI_irq_vector+0x1c7/0x1c7
        [11292.139863] [] ? intel_idle+0xd6/0x10c
        [11292.139866] [] ? intel_idle+0xb6/0x10c
        [11292.139871] [] ? cpuidle_idle_call+0x6d/0xb4
        [11292.139874] [] ? cpu_idle+0x60/0x8c
        [11292.139880] [] ? start_kernel+0x327/0x333
        [11292.139884] [] ? early_idt_handlers+0x140/0x140
        [11292.139889] [] ? x86_64_start_kernel+0x10b/0x118
        [11292.139891] handlers:
        [11292.139907] [] usb_hcd_irq
        [11292.139909] Disabling IRQ #16

     So again - it is real, but I have no idea how to test it.
  13. It might be related to cpu_idle driver that I backported from kernel.org. Try it and let me know if it helps
     During the previous test session I got that irq 16 error in dmesg twice in a row, on two reboots, within less than 10 minutes of DSM starting. Today I tried to reproduce the situation (no changes in hardware or software) with no luck for two hours (with all the activities I had run previously) - a lot of segfaults from dsmnotify.cgi, but no crashes on the irq... As I can't reproduce the bug, I also have no way to sensibly test the kernel option you mentioned. Sorry...
  14. I do not use iSCSI on those two configurations. In any case, the delay problem seems not to be a real delay but some notification problem between DSM and its web GUI. As for the HBA in the second configuration - no problems at all: none of the HDDs is missing, all are numbered correctly from 1 to 8, and the system array is undamaged after reboot. One little problem occurred: I lost my NFS access configuration on the exported folders, so I had to repair it manually. But I believe that is a problem of the upgrade from DSM 4.3 to 5.0. No, I haven't - I do not even know that option. I'll try it later, when I learn what the option means; in any case I did not have such a problem on 4.3 with trantor's HBA build. Sorry, I did not understand your idea about kexec-ing. I am using Chrome, and in the situation described I saw the progress indicator moving around on the DSM window for 4.5 minutes. But if I pressed the reload button about one minute after the start of the reboot sequence, I instantly got into a running DSM GUI (without even needing to log in). It really sounds strange, but it was an already-running GUI. Maybe it is some trick from Google in their Chrome...
  15. Hi! I checked the 10.4 version with two completely different configurations: N54L + 2GB + 6 HDDs, and i5 + 4GB + LSI 9211 + 8 HDDs, both with DSM 5.0-4458. The first problem found is the boot time (from pressing the reboot confirmation button in the GUI to the login screen in the browser): it takes something over 4.5 minutes to complete. During the first minute the progress is visible: BIOS, SAS/SATA data, grub, kernel, modules... But for the other 3.5 minutes it simply shows a "Post init" message and a login prompt on the local console, and "Starting services" in Synology Assistant (SA). OK, let's look at dmesg at that point in time. First configuration:

        [   34.694436] ata6.00: configured for UDMA/133
        [   34.694440] ata6: EH complete
        [   36.047627] loop: module loaded
        [   36.066011] bond0: no IPv6 routers present
        [  219.274928] findhostd uses obsolete (PF_INET,SOCK_PACKET)
        [  227.490571] dsmnotify.cgi[22338]: segfault at 0 ip 000000000804ccb8 sp 00000000ffcdd3f0 error 6
        [  227.570891] storagehandler.[22339]: segfault at 0 ip 000000000809e298 sp 00000000ffa00850 error 6

     Second configuration:

        [   23.642702] netlink: 12 bytes leftover after parsing attributes.
        [   25.971033] EXT4-fs (sdu1): mounted filesystem without journal. Opts: nodelalloc,synoacl,data=ordered,oldalloc
        [   26.970741] init: synoindexd main process (19890) killed by USR2 signal
        [   26.970756] init: synoindexd main process ended, respawning
        [   27.639291] loop: module loaded
        [   31.240239] eth0: no IPv6 routers present
        [   64.841983] findhostd uses obsolete (PF_INET,SOCK_PACKET)
        [  217.709290] dsmnotify.cgi[24394]: segfault at 0 ip 000000000804ccb8 sp 00000000ffe1d100 error 6
        [  217.754241] storagehandler.[24395]: segfault at 0 ip 000000000809e298 sp 00000000ffd5c8e0 error 6

     Some incompatibility with the 4458 version?.. Moreover, here is the dmesg output after some time without any activity on my side. First:

        [  258.145105] dsmnotify.cgi[22624]: segfault at 0 ip 000000000804ccb8 sp 00000000ff85ff60 error 6
        [  288.606194] dsmnotify.cgi[22698]: segfault at 0 ip 000000000804ccb8 sp 00000000ffcfca90 error 6
        [  319.265492] dsmnotify.cgi[22837]: segfault at 0 ip 000000000804ccb8 sp 00000000ff8319f0 error 6
        [  351.257970] dsmnotify.cgi[22896]: segfault at 0 ip 000000000804ccb8 sp 00000000ffa0c170 error 6
        [  383.259143] dsmnotify.cgi[23025]: segfault at 0 ip 000000000804ccb8 sp 00000000ffa95490 error 6
        [  415.260392] dsmnotify.cgi[23079]: segfault at 0 ip 000000000804ccb8 sp 00000000ff969d10 error 6
        [  447.259978] dsmnotify.cgi[23209]: segfault at 0 ip 000000000804ccb8 sp 00000000fff31ab0 error 6
        [  478.265114] dsmnotify.cgi[23269]: segfault at 0 ip 000000000804ccb8 sp 00000000ffac5330 error 6
        [  509.262446] dsmnotify.cgi[23397]: segfault at 0 ip 000000000804ccb8 sp 00000000ff949830 error 6
        [  540.263957] dsmnotify.cgi[23450]: segfault at 0 ip 000000000804ccb8 sp 00000000ff886d70 error 6
        [  573.263792] dsmnotify.cgi[23580]: segfault at 0 ip 000000000804ccb8 sp 00000000ffec8220 error 6
        [  604.296849] dsmnotify.cgi[23640]: segfault at 0 ip 000000000804ccb8 sp 00000000ffe0c970 error 6
        [  636.265761] dsmnotify.cgi[23768]: segfault at 0 ip 000000000804ccb8 sp 00000000fff98a80 error 6
        [  668.266895] dsmnotify.cgi[23821]: segfault at 0 ip 000000000804ccb8 sp 00000000ffb42670 error 6
        [  700.267653] dsmnotify.cgi[23951]: segfault at 0 ip 000000000804ccb8 sp 00000000ff82af70 error 6
        [  731.275980] dsmnotify.cgi[24012]: segfault at 0 ip 000000000804ccb8 sp 00000000ffb08ab0 error 6
        [  763.286462] dsmnotify.cgi[24141]: segfault at 0 ip 000000000804ccb8 sp 00000000ffc1ca20 error 6

     And so on, every 31 or 32 seconds, while the DSM GUI is open. The second configuration (DSM was closed and then opened again half an hour later):

        [  281.254090] dsmnotify.cgi[24913]: segfault at 0 ip 000000000804ccb8 sp 00000000ff9324b0 error 6
        [  281.258375] dsmnotify.cgi[24914]: segfault at 0 ip 000000000804ccb8 sp 00000000ffa0abc0 error 6
        [  281.362402] dsmnotify.cgi[24916]: segfault at 0 ip 000000000804ccb8 sp 00000000ffd44720 error 6
        [  337.758887] irq 16: nobody cared (try booting with the "irqpoll" option)
        [  337.758890] Pid: 0, comm: swapper/0 Tainted: P O 3.2.40 #6
        [  337.758891] Call Trace:
        [  337.758892] [] ? add_interrupt_randomness+0x39/0x157
        [  337.758900] [] ? __report_bad_irq+0x2c/0xb4
        [  337.758902] [] ? note_interrupt+0x15d/0x1b7
        [  337.758904] [] ? handle_irq_event_percpu+0xfa/0x117
        [  337.758906] [] ? handle_irq_event+0x48/0x70
        [  337.758908] [] ? handle_fasteoi_irq+0x74/0xae
        [  337.758910] [] ? handle_irq+0x87/0x90
        [  337.758912] [] ? do_IRQ+0x48/0x9f
        [  337.758915] [] ? common_interrupt+0x6e/0x6e
        [  337.758915] [] ? lapic_next_event+0x10/0x15
        [  337.758920] [] ? IO_APIC_get_PCI_irq_vector+0x1c7/0x1c7
        [  337.758923] [] ? intel_idle+0xd6/0x10c
        [  337.758924] [] ? intel_idle+0xb6/0x10c
        [  337.758927] [] ? cpuidle_idle_call+0x6d/0xb4
        [  337.758929] [] ? cpu_idle+0x60/0x8c
        [  337.758932] [] ? start_kernel+0x327/0x333
        [  337.758935] [] ? early_idt_handlers+0x140/0x140
        [  337.758937] [] ? x86_64_start_kernel+0x10b/0x118
        [  337.758938] handlers:
        [  337.758948] [] usb_hcd_irq
        [  337.758950] Disabling IRQ #16
        [ 2208.023057] dsmnotify.cgi[25225]: segfault at 0 ip 000000000804ccb8 sp 00000000ffc54850 error 6
        [ 2208.073520] storagehandler.[25226]: segfault at 0 ip 000000000809e298 sp 00000000ffa17f00 error 6
        [ 2238.800620] dsmnotify.cgi[25640]: segfault at 0 ip 000000000804ccb8 sp 00000000ff832cb0 error 6
        [ 2269.391056] dsmnotify.cgi[25729]: segfault at 0 ip 000000000804ccb8 sp 00000000ffb5e2c0 error 6

     Here we see the same problem with dsmnotify.cgi while DSM is open, plus some strange problem with an irq. I also tried to reboot with the DSM web GUI closed: the same result in dmesg (dsmnotify.cgi and storagehandler segfaults) and the same time according to SA. But if I manually refresh/reload the browser page with DSM after one minute of the reboot process (when I see the local login prompt), it shows the DSM GUI already up and running. A last thought on this problem... The same 3.5-minute delay also occurred on the reboot during installation, so with some configurations it may exceed the installation timeout and lead to an installation error - maybe this is a reason for the installation problems with 4458 already reported. I think in that case it would be necessary to repeat the installation with SA (without rebooting) so that SA can configure the already-installed DSM.