XPEnology Community

staratlas

Transition Member
  • Posts: 13
  • Joined

  • Last visited

  1. @pocopico Thank you for your great work. I want to use a BCM57810 10 Gigabit Ethernet Virtual Function (SR-IOV) device with your bnx2x extension, but I cannot find the SR-IOV NIC:

     Loading kmod #1 "bnx2x.ko" for pocopico.bnx2x (args: )
     [    3.168399] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.712.30-0 (2014/02/10)
     [    3.169688] bnx2x 0000:0b:00.0: msix capability found
     [    3.170005] bnx2x 0000:0b:00.0: enabling device (0000 -> 0002)
     [    3.170329] bnx2x 0000:0b:00.0: Cannot find second PCI device base address, aborting
     [    3.170358] bnx2x 0000:13:00.0: msix capability found
     [    3.170388] bnx2x 0000:13:00.0: enabling device (0000 -> 0002)
     [    3.170417] bnx2x 0000:13:00.0: Cannot find second PCI device base address, aborting

     Can you add SR-IOV support to bnx2x? Thank you. (Sorry, I opened the same issue on your GitHub repo.)
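For context, on a mainline Linux kernel SR-IOV virtual functions are normally created by writing to the physical function's sysfs node; whether the DSM kernel and this bnx2x build expose that node at all is an assumption worth checking first. A minimal sketch (the PCI address is the one from the log above and is only an example):

```shell
# Hypothetical check: does the kernel expose an SR-IOV control for this PF?
# PCI address taken from the boot log above; adjust for your hardware (see lspci).
DEV=0000:0b:00.0
SRIOV=/sys/bus/pci/devices/$DEV/sriov_numvfs

if [ -w "$SRIOV" ]; then
    # How many VFs the PF can offer, then request two of them.
    cat "/sys/bus/pci/devices/$DEV/sriov_totalvfs"
    echo 2 > "$SRIOV"
else
    echo "no SR-IOV control for $DEV (driver or kernel lacks support)"
fi
```

If the node is missing, one common alternative is to create the VFs on the hypervisor side and pass them through as plain PCI devices, which still requires a guest driver that recognizes the VF device ID.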
  2. Can anyone use mlx4_core (Mellanox NIC card) on DSM 7.1 with DS3622xs? I have a problem with an Unknown symbol error from mlx4_core. Thank you for your help.
  3. Thank you for your rapid reply. I checked the failed services.

     On DSM 7.1 and DS3617xs:

     root@DiskStation:~# systemctl list-units --state failed
     ● pgsql-adapter.service loaded failed failed PostgreSQL adapter
     ● syno-mount-usbfs.service loaded failed failed Mount usb fs
     ● synoindex-checkpackage.service loaded failed failed synoindex check if there are any synoindex-related packages
     ● SynoInitEth.service loaded failed failed Adjust NIC sequence
     ● systemd-modules-load.service loaded failed failed Load Kernel Modules

     On DSM 7.1 and DS918+ (which works fine):

     root@DiskStation:~# systemctl list-units --state failed
     UNIT LOAD ACTIVE SUB DESCRIPTION
     ● syno-mount-usbfs.service loaded failed failed Mount usb fs
     ● synoindex-checkpackage.service loaded failed failed synoindex check if there are any synoindex-related packages
     ● SynoInitEth.service loaded failed failed Adjust NIC sequence

     Then I checked "systemd-modules-load.service":

     root@DiskStation:~# systemctl start systemd-modules-load
     [  541.149389] mlx4_en: Unknown symbol mlx4_get_vf_rate (err 0)
     [  541.150213] mlx4_en: Unknown symbol mlx4_test_async (err 0)
     [  541.151023] mlx4_en: Unknown symbol mlx4_set_vf_vlan_next (err 0)
     [  541.151968] mlx4_en: Unknown symbol mlx4_test_interrupt (err 0)
     [  541.152877] mlx4_en: Unknown symbol mlx4_get_vport_ethtool_stats (err 0)
     [  541.153859] mlx4_en: Unknown symbol mlx4_SET_PORT_disable_mc_loopback (err 0)
     [  541.154883] mlx4_en: Unknown symbol mlx4_rename_eq (err 0)
     [  541.155626] mlx4_en: Unknown symbol mlx4_reset_vlan_policy (err 0)
     [  541.156510] mlx4_en: Unknown symbol mlx4_max_tc (err 0)
     [  541.157272] mlx4_en: Unknown symbol mlx4_is_available_mac (err 0)
     [  541.158080] mlx4_en: Unknown symbol syno_restart (err 0)
     [  541.158834] mlx4_en: Unknown symbol mlx4_get_vf_vlan_set (err 0)
     [  541.159718] mlx4_en: Unknown symbol mlx4_get_is_vlan_offload_disabled (err 0)
     [  541.160742] mlx4_en: Unknown symbol mlx4_get_vf_vlan_info (err 0)
     [  541.161577] mlx4_en: Unknown symbol mlx4_get_vf_stats_netdev (err 0)
     [  541.162598] mlx4_en: Unknown symbol mlx4_SET_PORT_user_mtu (err 0)
     [  541.163445] mlx4_en: Unknown symbol mlx4_get_vf_link_state (err 0)

     This problem comes from the mlx4_core extension. Can I fix it? (I know that removing the extension fixes the problem, but I want to use the Mellanox NIC card.)
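As a diagnostic aside: "Unknown symbol" at module load time usually means mlx4_en was built against a different mlx4_core (or, for syno_restart, a different synobios) than the one actually loaded. A small sketch for pulling the unresolved symbol names out of a saved log so they can be compared against the loaded modules (run here on a two-line sample of the output above; on the box you would save the real log with `dmesg > dmesg.txt`):

```shell
# Two sample lines from the failing load, saved to a file for the demo.
cat > dmesg.txt <<'EOF'
[  541.156510] mlx4_en: Unknown symbol mlx4_max_tc (err 0)
[  541.158080] mlx4_en: Unknown symbol syno_restart (err 0)
EOF

# List each unresolved symbol once; these should all be exported by the
# matching mlx4_core (compare against /proc/kallsyms on the running system).
grep -o 'Unknown symbol [A-Za-z0-9_]*' dmesg.txt | awk '{print $3}' | sort -u
```

If the symbols exist only in a different build of mlx4_core than the one the extension ships, the two modules came from different source trees and would need to be rebuilt together, which matches the usual cause of this error.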
  4. Thanks for the reply. One correction: when I was testing DS918+ DSM 7.1, the extension at https://github.com/pocopico/redpill-load/raw/develop/redpill-misc/rpext-index.json had not yet been included, so I may have needed to modify the file. However, I have already installed and tested redpill-misc on DS3617 and DS3622, and I have still not been able to solve the problem of the VM unexpectedly shutting down when accessing the web GUI.
  5. I am currently trying to get DSM 7.1 to work on VMware ESXi 6.7. On DS918+, DSM 7.1 had a problem: DSM would start shutting down when trying to access the web GUI after installing DSM, so I modified "/usr/lib/modules-load.d/70-cpufreq-kernel.conf" as described in a previous post, and it now works fine. On DS3617xs and DS3622xsp, the installation of DSM 7.1 succeeds, but after the first reboot the VM shutdown process starts by itself when accessing the web GUI. I have tried to modify "/usr/lib/modules-load.d/70-cpufreq-kernel.conf" to no avail; I cannot access the web GUI. Has anyone succeeded in running DSM 7.1 DS3617xs or DS3622xsp on VMware ESXi? (On DS3617xs and DS3622xsp, DSM 7.0.1 installs successfully and I can access the web GUI, but 7.1 does not.) By the way, I am using the following extensions:

     https://github.com/tossp/redpill-tool-chain/raw/master/extensions/redpill-acpid.json
     https://raw.githubusercontent.com/pocopico/rp-ext/master/mlx4_core/rpext-index.json
     https://raw.githubusercontent.com/pocopico/rp-ext/master/mpt3sas/rpext-index.json
     https://raw.githubusercontent.com/pocopico/rp-ext/master/vmw_pvscsi/rpext-index.json
     https://raw.githubusercontent.com/pocopico/rp-ext/master/vmxnet3/rpext-index.json
     https://raw.githubusercontent.com/pocopico/rp-ext/master/redpill-boot-wait/rpext-index.json
     https://github.com/pocopico/redpill-load/raw/develop/redpill-misc/rpext-index.json

     Thank you.
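The DS918+ workaround mentioned above boils down to stopping systemd-modules-load from loading the cpufreq modules listed in that file at boot. A hedged sketch, shown on a local copy (on DSM the real path is /usr/lib/modules-load.d/70-cpufreq-kernel.conf, and the module names below are placeholders, not the file's confirmed contents):

```shell
# Work on a local copy for the demo; on DSM edit
# /usr/lib/modules-load.d/70-cpufreq-kernel.conf in place.
# The module names here are placeholders for illustration.
cat > 70-cpufreq-kernel.conf <<'EOF'
acpi-cpufreq
cpufreq_performance
cpufreq_powersave
EOF

# Comment out every uncommented line so systemd-modules-load skips the modules.
sed -i 's/^[^#]/#&/' 70-cpufreq-kernel.conf
cat 70-cpufreq-kernel.conf
```

After a change like this, `systemctl status systemd-modules-load` on the next boot should no longer report a failed load from this file; whether that also cures the DS3617xs/DS3622xsp shutdowns is exactly the open question in this post.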
  6. Thank you for your reply and for uploading 0.11. I tested your 0.11 on my xpenology and the SMART info is OK. Test environment: VM on ESXi 6.7, LSI (Avago) HBA SAS3008 (9300-8I) (passthrough), 2 x VMXNET 3. dmesg about mpt3sas:

     [    1.711502] mpt3sas version 27.00.01.00 loaded
     [    1.713052] mpt3sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8169472 kB)
     [    1.789491] mpt3sas_cm0: IOC Number : 0
     [    1.789494] mpt3sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
     [ .... ]
     [    2.106582] mpt3sas_cm0: port enable: SUCCESS
  7. Today I tried this method; the SMART info is fixed and now appears in DSM. Thank you!
  8. Can I get the latest mpt3sas.ko? I think the driver version of mpt3sas in extension 0.10 is "09.102.00.00". Is this correct? Here is my dmesg with extension 0.10:

     [    1.904610] mpt3sas version 09.102.00.00 loaded
     [    1.907235] mpt3sas 0000:1b:00.0: enabling device (0000 -> 0002)
     [    1.907310] mpt3sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8168964 kB)
     [    1.966042] mpt3sas_cm0: MSI-X vectors supported: 96, no of cores: 2, max_msix_vectors: -1
     [    1.966315] mpt3sas0-msix0: PCI-MSI-X enabled: IRQ 58
     [    1.966316] mpt3sas0-msix1: PCI-MSI-X enabled: IRQ 59
     [    1.966318] mpt3sas_cm0: iomem(0x00000000fd140000), mapped(0xffffc90000240000), size(65536)
     [    1.966319] mpt3sas_cm0: ioport(0x0000000000002000), size(256)
     [    2.087367] mpt3sas_cm0: Allocated physical memory: size(16224 kB)
     [    2.087368] mpt3sas_cm0: Current Controller Queue Depth(9643),Max Controller Queue Depth(9856)
     [    2.087369] mpt3sas_cm0: Scatter Gather Elements per IO(128)
     [    2.132823] mpt3sas_cm0: LSISAS3008: FWVersion(16.00.10.00), ChipRevision(0x02), BiosVersion(08.37.00.00)
     [    2.132825] mpt3sas_cm0: Protocol=(
     [    2.134220] mpt3sas_cm0: sending port enable !!
     [    2.135667] mpt3sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b009156198), phys(8)
     [    2.144346] mpt3sas_cm0: port enable: SUCCESS
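To confirm which driver version an extension ships, the version string can be read either from the .ko file itself or from the load-time message. A sketch; the modinfo path below is an assumption, and the grep runs on a one-line sample of the dmesg output above:

```shell
# On the box, the module file could be queried directly (path is an assumption):
#   modinfo -F version /usr/lib/modules/mpt3sas.ko
# The same version string is printed at load time, so it can also be pulled
# out of a saved dmesg log; a one-line sample for the demo:
cat > dmesg.txt <<'EOF'
[    1.904610] mpt3sas version 09.102.00.00 loaded
EOF
grep -o 'mpt3sas version [0-9.]*' dmesg.txt | awk '{print $3}'
```

Comparing this against the version in a newer vendor release answers the "is 09.102.00.00 the latest?" question directly.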
  9. @Eduardo, @IG-88 I have the same issue with SMART info. My xpenology runs on ESXi 6.7 with jun's loader 1.04b and IG-88's extension 0.10. I pass through the LSI (Avago) HBA SAS3008 (9300-8I) and attach it to the VM. I connected 4 SSDs to the HBA, and Storage Manager shows 4 SSD devices (drives 2, 4, 5, 6; drive 3 is a VMware virtual disk). But in my Storage Manager the device icon is not an HDD icon, it looks like a USB device (please see my attached image), and no SMART info shows up. After that I tried to replace "mpt3sas.ko" in the /usr/lib/modules/ folder with jun's original one; nothing changed.
  10. Thank you for your driver extension. I tried to use my Intel 10G card "X710" (a passthrough device on VMware), but the kernel crashed when the driver was loaded; the serial console log is below. When I remove only my Intel card, everything is OK.

     ===== trigger device plug event =====
     [   18.676716] BUG: unable to handle kernel paging request at 0000000000001001
     [   18.678191] IP: [<ffffffffa0573ff0>] i40e_ndo_fdb_add+0x0/0xc0 [i40e]
     [   18.679345] PGD ba215067 PUD b78cd067 PMD 0
     [   18.679784] Oops: 0000 [#1] PREEMPT SMP
     [   18.679786] Modules linked in: i40e(E+) ixgbe(OE) be2net(E) igb(E) i2c_algo_bit e1000e(OE) vxlan ip6_udp_tunnel udp_tunnel fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_powersave cpufreq_performance acpi_cpufreq processor cpufreq_stats dm_snapshot dm_bufio crc_itu_t(E) crc_ccitt(E) quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram sg etxhci_hcd sx8(E) aic94xx(E) mvumi(E) mvsas(E) isci(E) hptiop(E) hpsa(E) gdth(E) arcmsr(E) aacraid(E) 3w_sas(E) 3w_9xxx(E) rtc_cmos(E) mdio(E) mpt3sas(E) mptsas(E) megaraid_sas(E) megaraid(E) mptctl(E) mptspi(E) mptscsih(E) mptbase(E) raid_class(E) libsas(E) scsi_transport_sas(E) scsi_transport_spi(E) megaraid_mbox(E) megaraid_mm(E) vmw_pvscsi(E) BusLogic(E) usb_storage xhci_pci xhci_hcd ohci_hcd(E) usbcore usb_common zfkyhpcseszp(OE) [last unloaded: apollolake_synobios]
     [   18.679813] CPU: 3 PID: 9068 Comm: insmod Tainted: P OE 4.4.59+ #24922
     [   18.679813] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
     [   18.679814] task: ffff880236b00f80 ti: ffff8800bba4c000 task.ti: ffff8800bba4c000
     [   18.679819] RIP: 0010:[<ffffffffa0573ff0>] [<ffffffffa0573ff0>] i40e_ndo_fdb_add+0x0/0xc0 [i40e]
     [   18.679820] RSP: 0018:ffff8800bba4f778 EFLAGS: 00010286
     [   18.679821] RAX: ffffffffa0573ff0 RBX: 0000080744594bb3 RCX: ffffea0008d0959f
     [   18.679821] RDX: 0000000000000801 RSI: 0000080744594bb3 RDI: ffff8800bab81000
     [   18.679821] RBP: ffff8800bba4f7e8 R08: 0000000000000000 R09: ffffffff812bc700
     [   18.679822] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
     [   18.679822] R13: ffffffff81862080 R14: ffff8800bab810b0 R15: ffff8800bab81000
     [   18.679823] FS: 00007f323310d700(0000) GS:ffff88023fd80000(0000) knlGS:0000000000000000
     [   18.679823] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [   18.679824] CR2: 0000000000001001 CR3: 0000000037331000 CR4: 00000000003606f0
     [   18.679847] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     [   18.679847] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     [   18.679848] Stack:
     [   18.679849] ffffffff8149f6e4 ffff8800bab81000 0000000000000000 ffff8800bab81488
     [   18.679850] 0000000000000040 0000000000000000 0000080744594bb3 ffffffff814b81ce
     [   18.679851] 0000004081064e91 ffff8800bab81000 0000000000000000 ffffffff81862080
     [   18.679851] Call Trace:
     [   18.679854] [<ffffffff8149f6e4>] ? __netdev_update_features+0x204/0x4f0
     [   18.679856] [<ffffffff814b81ce>] ? netdev_register_kobject+0x15e/0x170
     [   18.679857] [<ffffffff8149fd8f>] register_netdevice+0x22f/0x460
     [   18.679858] [<ffffffff8149ffd5>] register_netdev+0x15/0x30
     [   18.679863] [<ffffffffa057d2ef>] i40e_vsi_setup+0x7ef/0xb40 [i40e]
     [   18.679867] [<ffffffffa057dd97>] i40e_setup_pf_switch+0x3e7/0x4e0 [i40e]
     [   18.679870] [<ffffffffa0580b43>] i40e_probe.part.60+0xcf3/0x18b0 [i40e]
     [   18.679872] [<ffffffff81094cab>] ? irq_modify_status+0x9b/0xc0
     [   18.679873] [<ffffffff810974de>] ? __irq_domain_alloc_irqs+0x1ee/0x2b0
     [   18.679875] [<ffffffff812be4bd>] ? radix_tree_lookup+0xd/0x10
     [   18.679877] [<ffffffff81091962>] ? irq_to_desc+0x12/0x20
     [   18.679878] [<ffffffff81094d49>] ? irq_get_irq_data+0x9/0x20
     [   18.679879] [<ffffffff81034e43>] ? mp_map_pin_to_irq+0xb3/0x2c0
     [   18.679880] [<ffffffff81035635>] ? mp_map_gsi_to_irq+0xb5/0xe0
     [   18.679881] [<ffffffff8102dc55>] ? acpi_register_gsi_ioapic+0x55/0x60
     [   18.679883] [<ffffffff8147b094>] ? pci_conf1_read+0xb4/0x100
     [   18.679884] [<ffffffff8147d5ae>] ? raw_pci_read+0x1e/0x40
     [   18.679886] [<ffffffff812f4763>] ? pci_bus_read_config_word+0x83/0xa0
     [   18.679888] [<ffffffff812fc741>] ? do_pci_enable_device+0x91/0xc0
     [   18.679889] [<ffffffff812fd9ce>] ? pci_enable_device_flags+0xbe/0x110
     [   18.679892] [<ffffffffa0581719>] i40e_probe+0x19/0x20 [i40e]
     [   18.679893] [<ffffffff812ffb8c>] pci_device_probe+0x8c/0x100
     [   18.679895] [<ffffffff81387641>] driver_probe_device+0x1f1/0x310
     [   18.679896] [<ffffffff813877e2>] __driver_attach+0x82/0x90
     [   18.679896] [<ffffffff81387760>] ? driver_probe_device+0x310/0x310
     [   18.679898] [<ffffffff813856d1>] bus_for_each_dev+0x61/0xa0
     [   18.679899] [<ffffffff813870d9>] driver_attach+0x19/0x20
     [   18.679900] [<ffffffff81386d03>] bus_add_driver+0x1b3/0x230
     [   18.679901] [<ffffffffa05ab000>] ? 0xffffffffa05ab000
     [   18.679902] [<ffffffff81387feb>] driver_register+0x5b/0xe0
     [   18.679903] [<ffffffff812fe667>] __pci_register_driver+0x47/0x50
     [   18.679907] [<ffffffffa05ab062>] i40e_init_module+0x62/0x64 [i40e]
     [   18.679908] [<ffffffff810003b6>] do_one_initcall+0x86/0x1b0
     [   18.679910] [<ffffffff810e1b48>] do_init_module+0x56/0x1be
     [   18.679912] [<ffffffff810b7d8d>] load_module+0x1dfd/0x2080
     [   18.679913] [<ffffffff810b51f0>] ? __symbol_put+0x50/0x50
     [   18.679915] [<ffffffff810b8199>] SYSC_finit_module+0x79/0x80
     [   18.679916] [<ffffffff810b81b9>] SyS_finit_module+0x9/0x10
     [   18.679918] [<ffffffff8156a58a>] entry_SYSCALL_64_fastpath+0x1e/0x92
     [   18.679928] Code: 44 24 18 00 00 00 00 89 44 24 10 c7 04 24 00 00 00 00 45 31 c9 e8 11 c8 f3 e0 48 8d 64 24 20 5b 41 5e 5d c3 0f 1f 80 00 00 00 00 <48> 8b 82 00 08 00 00 48 8b 80 a8 05 00 00 f6 80 6a 07 00 00 08
     [   18.679932] RIP [<ffffffffa0573ff0>] i40e_ndo_fdb_add+0x0/0xc0 [i40e]
     [   18.679932] RSP <ffff8800bba4f778>
     [   18.679932] CR2: 0000000000001001
     [   18.679933] ---[ end trace 1275282321de2936 ]---
     [   18.679973] ------------[ cut here ]------------
     [   18.679975] WARNING: CPU: 3 PID: 9068 at kernel/softirq.c:150 __local_bh_enable_ip+0x65/0x90()
     [   18.679989] Modules linked in: i40e(E+) ixgbe(OE) be2net(E) igb(E) i2c_algo_bit e1000e(OE) vxlan ip6_udp_tunnel udp_tunnel fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_powersave cpufreq_performance acpi_cpufreq processor cpufreq_stats dm_snapshot dm_bufio crc_itu_t(E) crc_ccitt(E) quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram sg etxhci_hcd sx8(E) aic94xx(E) mvumi(E) mvsas(E) isci(E) hptiop(E) hpsa(E) gdth(E) arcmsr(E) aacraid(E) 3w_sas(E) 3w_9xxx(E) rtc_cmos(E) mdio(E) mpt3sas(E) mptsas(E) megaraid_sas(E) megaraid(E) mptctl(E) mptspi(E) mptscsih(E) mptbase(E) raid_class(E) libsas(E) scsi_transport_sas(E) scsi_transport_spi(E) megaraid_mbox(E) megaraid_mm(E) vmw_pvscsi(E) BusLogic(E) usb_storage xhci_pci xhci_hcd ohci_hcd(E) usbcore usb_common zfkyhpcseszp(OE) [last unloaded: apollolake_synobios]
     [   18.679992] CPU: 3 PID: 9068 Comm: insmod Tainted: P D OE 4.4.59+ #24922
     [   18.679993] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
     [   18.679994] 0000000000000000 ffff8800bba4f478 ffffffff812b96bd 0000000000000000
     [   18.679995] ffffffff817228bc ffff8800bba4f4b0 ffffffff8104917d 0000000000000201
     [   18.679995] ffff880236b00f80 ffff880236b015f8 ffff880236b00f80 ffff880231bfd5a8
     [   18.679996] Call Trace:
     [   18.679997] [<ffffffff812b96bd>] dump_stack+0x4d/0x70
     [   18.679999] [<ffffffff8104917d>] warn_slowpath_common+0x7d/0xc0
     [   18.680000] [<ffffffff81049276>] warn_slowpath_null+0x16/0x20
     [   18.680001] [<ffffffff8104c9e5>] __local_bh_enable_ip+0x65/0x90
     [   18.680002] [<ffffffff8156a095>] _raw_spin_unlock_bh+0x15/0x20
     [   18.680004] [<ffffffff810c464b>] cgroup_exit+0x4b/0xa0
     [   18.680005] [<ffffffff8104bacd>] do_exit+0x36d/0xab0
     [   18.680006] [<ffffffff810072a4>] oops_end+0x84/0xc0
     [   18.680008] [<ffffffff8103b44b>] no_context+0xfb/0x2b0
     [   18.680009] [<ffffffff8103b670>] __bad_area_nosemaphore+0x70/0x1f0
     [   18.680010] [<ffffffff8103b7fe>] bad_area_nosemaphore+0xe/0x10
     [   18.680011] [<ffffffff8103bb96>] __do_page_fault+0x1c6/0x390
     [   18.680013] [<ffffffff8103bd9c>] do_page_fault+0xc/0x10
     [   18.680013] [<ffffffff8156bd42>] page_fault+0x22/0x30
     [   18.680015] [<ffffffff812bc700>] ? cleanup_uevent_env+0x10/0x10
     [   18.680018] [<ffffffffa0573ff0>] ? i40e_ndo_bridge_getlink+0xb0/0xb0 [i40e]
     [   18.680021] [<ffffffffa0573ff0>] ? i40e_ndo_bridge_getlink+0xb0/0xb0 [i40e]
     [   18.680022] [<ffffffff8149f6e4>] ? __netdev_update_features+0x204/0x4f0
     [   18.680023] [<ffffffff814b81ce>] ? netdev_register_kobject+0x15e/0x170
     [   18.680024] [<ffffffff8149fd8f>] register_netdevice+0x22f/0x460
     [   18.680025] [<ffffffff8149ffd5>] register_netdev+0x15/0x30
     [   18.680029] [<ffffffffa057d2ef>] i40e_vsi_setup+0x7ef/0xb40 [i40e]
     [   18.680031] [<ffffffffa057dd97>] i40e_setup_pf_switch+0x3e7/0x4e0 [i40e]
     [   18.680034] [<ffffffffa0580b43>] i40e_probe.part.60+0xcf3/0x18b0 [i40e]
     [   18.680035] [<ffffffff81094cab>] ? irq_modify_status+0x9b/0xc0
     [   18.680036] [<ffffffff810974de>] ? __irq_domain_alloc_irqs+0x1ee/0x2b0
     [   18.680038] [<ffffffff812be4bd>] ? radix_tree_lookup+0xd/0x10
     [   18.680039] [<ffffffff81091962>] ? irq_to_desc+0x12/0x20
     [   18.680040] [<ffffffff81094d49>] ? irq_get_irq_data+0x9/0x20
     [   18.680041] [<ffffffff81034e43>] ? mp_map_pin_to_irq+0xb3/0x2c0
     [   18.680042] [<ffffffff81035635>] ? mp_map_gsi_to_irq+0xb5/0xe0
     [   18.680043] [<ffffffff8102dc55>] ? acpi_register_gsi_ioapic+0x55/0x60
     [   18.680044] [<ffffffff8147b094>] ? pci_conf1_read+0xb4/0x100
     [   18.680045] [<ffffffff8147d5ae>] ? raw_pci_read+0x1e/0x40
     [   18.680046] [<ffffffff812f4763>] ? pci_bus_read_config_word+0x83/0xa0
     [   18.680048] [<ffffffff812fc741>] ? do_pci_enable_device+0x91/0xc0
     [   18.680049] [<ffffffff812fd9ce>] ? pci_enable_device_flags+0xbe/0x110
     [   18.680052] [<ffffffffa0581719>] i40e_probe+0x19/0x20 [i40e]
     [   18.680052] [<ffffffff812ffb8c>] pci_device_probe+0x8c/0x100
     [   18.680054] [<ffffffff81387641>] driver_probe_device+0x1f1/0x310
     [   18.680054] [<ffffffff813877e2>] __driver_attach+0x82/0x90
     [   18.680055] [<ffffffff81387760>] ? driver_probe_device+0x310/0x310
     [   18.680056] [<ffffffff813856d1>] bus_for_each_dev+0x61/0xa0
     [   18.680057] [<ffffffff813870d9>] driver_attach+0x19/0x20
     [   18.680058] [<ffffffff81386d03>] bus_add_driver+0x1b3/0x230
     [   18.680059] [<ffffffffa05ab000>] ? 0xffffffffa05ab000
     [   18.680060] [<ffffffff81387feb>] driver_register+0x5b/0xe0
     [   18.680061] [<ffffffff812fe667>] __pci_register_driver+0x47/0x50
     [   18.680065] [<ffffffffa05ab062>] i40e_init_module+0x62/0x64 [i40e]
     [   18.680065] [<ffffffff810003b6>] do_one_initcall+0x86/0x1b0
     [   18.680067] [<ffffffff810e1b48>] do_init_module+0x56/0x1be
     [   18.680068] [<ffffffff810b7d8d>] load_module+0x1dfd/0x2080
     [   18.680069] [<ffffffff810b51f0>] ? __symbol_put+0x50/0x50
     [   18.680071] [<ffffffff810b8199>] SYSC_finit_module+0x79/0x80
     [   18.680072] [<ffffffff810b81b9>] SyS_finit_module+0x9/0x10
     [   18.680074] [<ffffffff8156a58a>] entry_SYSCALL_64_fastpath+0x1e/0x92
     [   18.680074] ---[ end trace 1275282321de2937 ]---
     [   18.699912] tn40xx: Tehuti Network Driver, 0.3.6.17.2
     [   18.699913] tn40xx: Supported phys : MV88X3120 MV88X3310 MV88E2010 QT2025 TLK10232 AQR105 MUSTANG
     [   18.787025] Compat-mlnx-ofed backport release: f36c870
     [   18.787025] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git f36c870
     [   18.787026] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git
     [   18.881774] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.712.30-0 (2014/02/10)
     [   19.040542] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
     [   19.049997] ACPI: Power Button [PWRF]
     [   19.116801] Linux agpgart interface v0.103
     [   19.157836] agpgart-intel 0000:00:00.0: Intel 440BX Chipset
     [   19.162852] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0x0
     [   19.379415] Btrfs loaded, crc32c=crc32c-intel
     [   19.390794] exFAT: Version 1.2.9
     [   19.433183] jme: JMicron JMC2XX ethernet driver version 1.0.8
     [   19.439213] sky2: driver version 1.30
     [   19.447702] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver v5.3.63
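Until a fixed driver is available, one way to keep the system bootable with the card still installed is to stop the crashing driver from auto-loading. On a stock Linux system that would be a modprobe blacklist entry; with jun's loader the i40e module ships inside extra.lzma, so removing it there may be required instead (both points are assumptions about this setup). A sketch, shown on a local directory tree rather than the real /etc:

```shell
# Stock-Linux-style blacklist; on a real system this file would live in
# /etc/modprobe.d/. With jun's loader the driver is packed into extra.lzma,
# so a blacklist alone may not be sufficient on DSM.
mkdir -p etc/modprobe.d
echo "blacklist i40e" > etc/modprobe.d/i40e-blacklist.conf
cat etc/modprobe.d/i40e-blacklist.conf
```

This only prevents automatic loading; the card stays invisible to DSM until a driver build without the register_netdevice crash is available.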
  11. - Outcome of the update: SUCCESSFUL
      - DSM version prior update: DSM 6.2-23739 Update 2
      - Loader version and model: JUN'S LOADER v1.03b - DS3617+
      - Using custom extra.lzma: NO
      - Installation type: VM - VMware ESXi 6.7
      - Additional comments: VMXNET 3 did not work anymore, so I changed the virtual NIC type from VMXNET 3 to E1000E. I connected all 12 SSDs to an "LSI SAS3008 (LSI 9300-8i)" and I use an "Intel X710-DA2 (10Gbit SFP+)"; I think they work perfectly. All additional devices (SAS card and NIC) were passthrough devices.
  12. DSM 6.2 Loader

      I'm sorry for my wrong information. I think 1.03b works fine on VMware ESXi 6.7 with an LSI 9300-8i (host bus adapter, SAS3008, passthrough device).
      - Outcome of the installation/update: SUCCESSFUL
      - DSM version prior update: DSM 6.2-23739 UPDATE 2
      - Loader version and model: JUN'S LOADER v1.03b - DS3617xs
      - Using custom extra.lzma: NO
      - Installation type: Virtual Machine (VM version 14, VMware ESXi 6.7); all storage devices are connected to the LSI 9300-8i (host bus adapter, SAS3008, passthrough device).
      - Additional comments: Update from 6.1.7 to 6.2
  13. DSM 6.2 Loader

      Thank you very much for your greatest work. I tried to run 1.03b on VMware ESXi 6.7 with an LSI 9300-8i (host bus adapter, SAS3008, passthrough device). Synology DSM 6.1.x with loader 1.02b (with the custom extra.lzma produced by IG-88) can find the LSI 9300-8i and the connected SSDs; everything is OK. But the DSM web installer cannot find any devices with loader 1.03b. I'm waiting for a driver that supports SAS3008. Thank you.