Driver extension jun 1.03b/1.04b for DSM 6.2.2 for 3615xs / 3617xs / 918+



Somehow the new extra.lzma does not fit on my 2nd partition.

How can I delete some unused modules from the extra.lzma file? (from Windows)

 

I can open the extra.lzma with 7-Zip, but when I try to delete something it says it's write-protected.

So far I have found no way to edit the rights on the extra.lzma.


delete the old extra.lzma before copying the new one

the 2nd partition has 30 MB; zImage and rd.gz are about 9 MB, one extra.lzma is about 4-6 MB

 

7 hours ago, norman said:

How can I delete some unused modules from the extra.lzma file? (from Windows)

 

afaik you can't do that from Windows

unpacking and repacking it is just normal Linux stuff; you can read about it here, and for just unpacking/repacking you can boot up a live Linux in VirtualBox on Windows. Nowadays Windows 10 can even have a Linux installed, but a VM and a CD image in VirtualBox is easier to get rid of afterwards

https://xpenology.com/forum/topic/7187-how-to-build-and-inject-missing-drivers-in-jun-loader-102a/

 

https://xpenology.club/compile-drivers-xpenology-with-windows-10-and-build-in-bash/


So I have a QNAP TS-453Be. I have installed the standard 1.04b/918+/6.2.1 and it works fine. Trying to do a fresh install of 918+ 6.2.2 (on a clean hard drive), with the extra/extra2.lzma files from this thread on the USB, I cannot find the machine on the network to upload the 6.2.2 .pat. It has dual i211 LAN ports. Any tips?

BTW, upgrading from 6.2.1 to 6.2.2 via the UI and using the original bootloader files also renders the machine un-discoverable on the network.

 

3 hours ago, richv31 said:

I cannot find the machine on the network to upload the 6.2.2 .pat. It has dual i211 LAN ports. Any tips?

 

wait for the next iteration, i will work on this in December when i'm done with 3615/17

i think i know how to get things right/better

in your case the driver is igb, and the driver in the 1st post's list is commented with "latest crashed", so it uses the Synology default driver, and that one seems to be too old for your hardware

can you provide the lspci vendor ID of the NIC hardware?
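For anyone unsure what to post here: `lspci -nn` appends the PCI `[vendor:device]` IDs to each line. A small sketch of which field to report, using a made-up sample output line for an i211:

```shell
# On the box, run:  lspci -nn | grep -i ethernet
# and post the bracketed [vendor:device] pair. Illustrative sample line:
line='01:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)'
# extract the vendor:device pair; the class code [0200] has no colon pair, so it is skipped
ids=$(echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$ids"   # 8086:1539
```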


I use 1.04b to boot my physical machine. My CPU is a G5400T, the motherboard is a Gigabyte B360M, and the network card is a Realtek 8168. I have added an additional Synology E10G18-T1 network card, but I can't find my physical machine after replacing extra/extra2.lzma. Please help me.

17 hours ago, IG-88 said:

 

wait for the next iteration, i will work on this in December when i'm done with 3615/17

i think i know how to get things right/better

in your case the driver is igb, and the driver in the 1st post's list is commented with "latest crashed", so it uses the Synology default driver, and that one seems to be too old for your hardware

can you provide the lspci vendor ID of the NIC hardware?

 

I got 6.2.2 working by replacing the extra.lzma in the 1.04b image with the real3x-modified one that excludes the graphics driver.

I even have a /dev/dri directory (I don't use transcoding).
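As an aside, whether the GPU driver actually came up can be checked on any booted box like this (a generic sketch, not specific to one loader):

```shell
# /dev/dri appears only when a GPU kernel driver (e.g. i915) initialised;
# card0/renderD128 are the usual nodes used for hardware transcoding
if [ -d /dev/dri ]; then
    status="present: $(ls /dev/dri | tr '\n' ' ')"
else
    status="absent"
fi
echo "/dev/dri $status"
```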

10 hours ago, mangogo said:

I have added an additional Synology E10G18-T1 network card, but I can't find my physical machine after replacing extra/extra2.lzma. Please help me.

try the extra/extra2 with the originals from jun and reboot

you might also try to remove the E10G18-T1 (btw, that's an Aquantia AQC107-based card; the driver is atlantic.ko)

if it worked before and you had a natively supported Synology NIC, why did you use my extra/extra2 at all?

my suggestion is to install it for testing on a single empty disk, and if that works with the hardware then it might be ok to use it with an active installation containing data

3 hours ago, richv31 said:

I got 6.2.2 working by replacing the extra.lzma in the 1.04b image with the real3x-modified one that excludes the graphics driver.

I even have a /dev/dri directory (I don't use transcoding).

it seems that removing jun's newer drivers in favor of the original drivers (which then will be loaded) might break it for other people

needs more data collection on why it fails on some systems and works on others; in theory it should depend on the processor generation, which contains different gpu cores

that will have to wait until 3615/17 is in a usable state and has a tutorial

 

4 hours ago, IG-88 said:

try the extra/extra2 with the originals from jun and reboot

I can boot normally when using Jun's original boot files, but it can't recognize my E10G18-T1. When I use 1.03b, the E10G18-T1 is recognized normally. I think 1.04b lacks the driver I need, so I used the extra/extra2 that you provided.

1.03b doesn't allow me to use GPU decoding, so I'm trying to use 1.04b.

16 hours ago, mangogo said:

1.04b lacks the driver I need.

indeed; as the 918+ does not have a pcie slot, they don't bundle drivers for optional cards, but i added Tehuti and Aquantia drivers (both 10G NICs) to the extra/extra2

i could only test Tehuti; the Aquantia driver might not work, as some drivers load when the hardware is not present but crash when the hardware for the driver is present

i will definitely redo the drivers for Aquantia in the next iteration, but that will be in December

 

On 11/20/2019 at 12:34 PM, IG-88 said:

indeed; as the 918+ does not have a pcie slot, they don't bundle drivers for optional cards, but i added Tehuti and Aquantia drivers (both 10G NICs) to the extra/extra2

i could only test Tehuti; the Aquantia driver might not work, as some drivers load when the hardware is not present but crash when the hardware for the driver is present

i will definitely redo the drivers for Aquantia in the next iteration, but that will be in December

 

Just FYI - I tried two different Aquantia cards and the driver crashed and burned and halted the boot/system. Note this was done using ESXi 6.7 with both cards in passthrough.

I'm about to try with a 10Gb Intel X540 card, but I haven't checked whether it is supported or not.

1 hour ago, riftangel said:

I'm about to try with a 10Gb Intel X540 card, but I haven't checked whether it is supported or not.

ixgbe is the driver; i added the latest to 918+ but can't say if it will work, as i could not test it

i tested the Tehuti 10G and bnx2x 10G drivers

 


Thank you for your driver extension. I tried to use my Intel 10G card "X710" (passthrough device on VMware), but the kernel crashed when the driver was loaded. The serial console log is below.

When I remove my Intel card, everything is ok.

===== trigger device plug event =====
[   18.676716] BUG: unable to handle kernel paging request at 0000000000001001
[   18.678191] IP: [<ffffffffa0573ff0>] i40e_ndo_fdb_add+0x0/0xc0 [i40e]
[   18.679345] PGD ba215067 PUD b78cd067 PMD 0
[   18.679784] Oops: 0000 [#1] PREEMPT SMP
[   18.679786] Modules linked in: i40e(E+) ixgbe(OE) be2net(E) igb(E) i2c_algo_bit e1000e(OE) vxlan ip6_udp_tunnel udp_tunnel fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_powersave cpufreq_performance acpi_cpufreq processor cpufreq_stats dm_snapshot dm_bufio crc_itu_t(E) crc_ccitt(E) quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram sg etxhci_hcd sx8(E) aic94xx(E) mvumi(E) mvsas(E) isci(E) hptiop(E) hpsa(E) gdth(E) arcmsr(E) aacraid(E) 3w_sas(E) 3w_9xxx(E) rtc_cmos(E) mdio(E) mpt3sas(E) mptsas(E) megaraid_sas(E) megaraid(E) mptctl(E) mptspi(E) mptscsih(E) mptbase(E) raid_class(E) libsas(E) scsi_transport_sas(E) scsi_transport_spi(E) megaraid_mbox(E) megaraid_mm(E) vmw_pvscsi(E) BusLogic(E) usb_storage xhci_pci xhci_hcd ohci_hcd(E) usbcore usb_common zfkyhpcseszp(OE) [last unloaded: apollolake_synobios]
[   18.679813] CPU: 3 PID: 9068 Comm: insmod Tainted: P           OE   4.4.59+ #24922
[   18.679813] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
[   18.679814] task: ffff880236b00f80 ti: ffff8800bba4c000 task.ti: ffff8800bba4c000
[   18.679819] RIP: 0010:[<ffffffffa0573ff0>]  [<ffffffffa0573ff0>] i40e_ndo_fdb_add+0x0/0xc0 [i40e]
[   18.679820] RSP: 0018:ffff8800bba4f778  EFLAGS: 00010286
[   18.679821] RAX: ffffffffa0573ff0 RBX: 0000080744594bb3 RCX: ffffea0008d0959f
[   18.679821] RDX: 0000000000000801 RSI: 0000080744594bb3 RDI: ffff8800bab81000
[   18.679821] RBP: ffff8800bba4f7e8 R08: 0000000000000000 R09: ffffffff812bc700
[   18.679822] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[   18.679822] R13: ffffffff81862080 R14: ffff8800bab810b0 R15: ffff8800bab81000
[   18.679823] FS:  00007f323310d700(0000) GS:ffff88023fd80000(0000) knlGS:0000000000000000
[   18.679823] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   18.679824] CR2: 0000000000001001 CR3: 0000000037331000 CR4: 00000000003606f0
[   18.679847] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   18.679847] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   18.679848] Stack:
[   18.679849]  ffffffff8149f6e4 ffff8800bab81000 0000000000000000 ffff8800bab81488
[   18.679850]  0000000000000040 0000000000000000 0000080744594bb3 ffffffff814b81ce
[   18.679851]  0000004081064e91 ffff8800bab81000 0000000000000000 ffffffff81862080
[   18.679851] Call Trace:
[   18.679854]  [<ffffffff8149f6e4>] ? __netdev_update_features+0x204/0x4f0
[   18.679856]  [<ffffffff814b81ce>] ? netdev_register_kobject+0x15e/0x170
[   18.679857]  [<ffffffff8149fd8f>] register_netdevice+0x22f/0x460
[   18.679858]  [<ffffffff8149ffd5>] register_netdev+0x15/0x30
[   18.679863]  [<ffffffffa057d2ef>] i40e_vsi_setup+0x7ef/0xb40 [i40e]
[   18.679867]  [<ffffffffa057dd97>] i40e_setup_pf_switch+0x3e7/0x4e0 [i40e]
[   18.679870]  [<ffffffffa0580b43>] i40e_probe.part.60+0xcf3/0x18b0 [i40e]
[   18.679872]  [<ffffffff81094cab>] ? irq_modify_status+0x9b/0xc0
[   18.679873]  [<ffffffff810974de>] ? __irq_domain_alloc_irqs+0x1ee/0x2b0
[   18.679875]  [<ffffffff812be4bd>] ? radix_tree_lookup+0xd/0x10
[   18.679877]  [<ffffffff81091962>] ? irq_to_desc+0x12/0x20
[   18.679878]  [<ffffffff81094d49>] ? irq_get_irq_data+0x9/0x20
[   18.679879]  [<ffffffff81034e43>] ? mp_map_pin_to_irq+0xb3/0x2c0
[   18.679880]  [<ffffffff81035635>] ? mp_map_gsi_to_irq+0xb5/0xe0
[   18.679881]  [<ffffffff8102dc55>] ? acpi_register_gsi_ioapic+0x55/0x60
[   18.679883]  [<ffffffff8147b094>] ? pci_conf1_read+0xb4/0x100
[   18.679884]  [<ffffffff8147d5ae>] ? raw_pci_read+0x1e/0x40
[   18.679886]  [<ffffffff812f4763>] ? pci_bus_read_config_word+0x83/0xa0
[   18.679888]  [<ffffffff812fc741>] ? do_pci_enable_device+0x91/0xc0
[   18.679889]  [<ffffffff812fd9ce>] ? pci_enable_device_flags+0xbe/0x110
[   18.679892]  [<ffffffffa0581719>] i40e_probe+0x19/0x20 [i40e]
[   18.679893]  [<ffffffff812ffb8c>] pci_device_probe+0x8c/0x100
[   18.679895]  [<ffffffff81387641>] driver_probe_device+0x1f1/0x310
[   18.679896]  [<ffffffff813877e2>] __driver_attach+0x82/0x90
[   18.679896]  [<ffffffff81387760>] ? driver_probe_device+0x310/0x310
[   18.679898]  [<ffffffff813856d1>] bus_for_each_dev+0x61/0xa0
[   18.679899]  [<ffffffff813870d9>] driver_attach+0x19/0x20
[   18.679900]  [<ffffffff81386d03>] bus_add_driver+0x1b3/0x230
[   18.679901]  [<ffffffffa05ab000>] ? 0xffffffffa05ab000
[   18.679902]  [<ffffffff81387feb>] driver_register+0x5b/0xe0
[   18.679903]  [<ffffffff812fe667>] __pci_register_driver+0x47/0x50
[   18.679907]  [<ffffffffa05ab062>] i40e_init_module+0x62/0x64 [i40e]
[   18.679908]  [<ffffffff810003b6>] do_one_initcall+0x86/0x1b0
[   18.679910]  [<ffffffff810e1b48>] do_init_module+0x56/0x1be
[   18.679912]  [<ffffffff810b7d8d>] load_module+0x1dfd/0x2080
[   18.679913]  [<ffffffff810b51f0>] ? __symbol_put+0x50/0x50
[   18.679915]  [<ffffffff810b8199>] SYSC_finit_module+0x79/0x80
[   18.679916]  [<ffffffff810b81b9>] SyS_finit_module+0x9/0x10
[   18.679918]  [<ffffffff8156a58a>] entry_SYSCALL_64_fastpath+0x1e/0x92
[   18.679928] Code: 44 24 18 00 00 00 00 89 44 24 10 c7 04 24 00 00 00 00 45 31 c9 e8 11 c8 f3 e0 48 8d 64 24 20 5b 41 5e 5d c3 0f 1f 80 00 00 00 00 <48> 8b 82 00 08 00 00 48 8b 80 a8 05 00 00 f6 80 6a 07 00 00 08
[   18.679932] RIP  [<ffffffffa0573ff0>] i40e_ndo_fdb_add+0x0/0xc0 [i40e]
[   18.679932]  RSP <ffff8800bba4f778>
[   18.679932] CR2: 0000000000001001
[   18.679933] ---[ end trace 1275282321de2936 ]---
[   18.679973] ------------[ cut here ]------------
[   18.679975] WARNING: CPU: 3 PID: 9068 at kernel/softirq.c:150 __local_bh_enable_ip+0x65/0x90()
[   18.679989] Modules linked in: i40e(E+) ixgbe(OE) be2net(E) igb(E) i2c_algo_bit e1000e(OE) vxlan ip6_udp_tunnel udp_tunnel fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_powersave cpufreq_performance acpi_cpufreq processor cpufreq_stats dm_snapshot dm_bufio crc_itu_t(E) crc_ccitt(E) quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram sg etxhci_hcd sx8(E) aic94xx(E) mvumi(E) mvsas(E) isci(E) hptiop(E) hpsa(E) gdth(E) arcmsr(E) aacraid(E) 3w_sas(E) 3w_9xxx(E) rtc_cmos(E) mdio(E) mpt3sas(E) mptsas(E) megaraid_sas(E) megaraid(E) mptctl(E) mptspi(E) mptscsih(E) mptbase(E) raid_class(E) libsas(E) scsi_transport_sas(E) scsi_transport_spi(E) megaraid_mbox(E) megaraid_mm(E) vmw_pvscsi(E) BusLogic(E) usb_storage xhci_pci xhci_hcd ohci_hcd(E) usbcore usb_common zfkyhpcseszp(OE) [last unloaded: apollolake_synobios]
[   18.679992] CPU: 3 PID: 9068 Comm: insmod Tainted: P      D    OE   4.4.59+ #24922
[   18.679993] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
[   18.679994]  0000000000000000 ffff8800bba4f478 ffffffff812b96bd 0000000000000000
[   18.679995]  ffffffff817228bc ffff8800bba4f4b0 ffffffff8104917d 0000000000000201
[   18.679995]  ffff880236b00f80 ffff880236b015f8 ffff880236b00f80 ffff880231bfd5a8
[   18.679996] Call Trace:
[   18.679997]  [<ffffffff812b96bd>] dump_stack+0x4d/0x70
[   18.679999]  [<ffffffff8104917d>] warn_slowpath_common+0x7d/0xc0
[   18.680000]  [<ffffffff81049276>] warn_slowpath_null+0x16/0x20
[   18.680001]  [<ffffffff8104c9e5>] __local_bh_enable_ip+0x65/0x90
[   18.680002]  [<ffffffff8156a095>] _raw_spin_unlock_bh+0x15/0x20
[   18.680004]  [<ffffffff810c464b>] cgroup_exit+0x4b/0xa0
[   18.680005]  [<ffffffff8104bacd>] do_exit+0x36d/0xab0
[   18.680006]  [<ffffffff810072a4>] oops_end+0x84/0xc0
[   18.680008]  [<ffffffff8103b44b>] no_context+0xfb/0x2b0
[   18.680009]  [<ffffffff8103b670>] __bad_area_nosemaphore+0x70/0x1f0
[   18.680010]  [<ffffffff8103b7fe>] bad_area_nosemaphore+0xe/0x10
[   18.680011]  [<ffffffff8103bb96>] __do_page_fault+0x1c6/0x390
[   18.680013]  [<ffffffff8103bd9c>] do_page_fault+0xc/0x10
[   18.680013]  [<ffffffff8156bd42>] page_fault+0x22/0x30
[   18.680015]  [<ffffffff812bc700>] ? cleanup_uevent_env+0x10/0x10
[   18.680018]  [<ffffffffa0573ff0>] ? i40e_ndo_bridge_getlink+0xb0/0xb0 [i40e]
[   18.680021]  [<ffffffffa0573ff0>] ? i40e_ndo_bridge_getlink+0xb0/0xb0 [i40e]
[   18.680022]  [<ffffffff8149f6e4>] ? __netdev_update_features+0x204/0x4f0
[   18.680023]  [<ffffffff814b81ce>] ? netdev_register_kobject+0x15e/0x170
[   18.680024]  [<ffffffff8149fd8f>] register_netdevice+0x22f/0x460
[   18.680025]  [<ffffffff8149ffd5>] register_netdev+0x15/0x30
[   18.680029]  [<ffffffffa057d2ef>] i40e_vsi_setup+0x7ef/0xb40 [i40e]
[   18.680031]  [<ffffffffa057dd97>] i40e_setup_pf_switch+0x3e7/0x4e0 [i40e]
[   18.680034]  [<ffffffffa0580b43>] i40e_probe.part.60+0xcf3/0x18b0 [i40e]
[   18.680035]  [<ffffffff81094cab>] ? irq_modify_status+0x9b/0xc0
[   18.680036]  [<ffffffff810974de>] ? __irq_domain_alloc_irqs+0x1ee/0x2b0
[   18.680038]  [<ffffffff812be4bd>] ? radix_tree_lookup+0xd/0x10
[   18.680039]  [<ffffffff81091962>] ? irq_to_desc+0x12/0x20
[   18.680040]  [<ffffffff81094d49>] ? irq_get_irq_data+0x9/0x20
[   18.680041]  [<ffffffff81034e43>] ? mp_map_pin_to_irq+0xb3/0x2c0
[   18.680042]  [<ffffffff81035635>] ? mp_map_gsi_to_irq+0xb5/0xe0
[   18.680043]  [<ffffffff8102dc55>] ? acpi_register_gsi_ioapic+0x55/0x60
[   18.680044]  [<ffffffff8147b094>] ? pci_conf1_read+0xb4/0x100
[   18.680045]  [<ffffffff8147d5ae>] ? raw_pci_read+0x1e/0x40
[   18.680046]  [<ffffffff812f4763>] ? pci_bus_read_config_word+0x83/0xa0
[   18.680048]  [<ffffffff812fc741>] ? do_pci_enable_device+0x91/0xc0
[   18.680049]  [<ffffffff812fd9ce>] ? pci_enable_device_flags+0xbe/0x110
[   18.680052]  [<ffffffffa0581719>] i40e_probe+0x19/0x20 [i40e]
[   18.680052]  [<ffffffff812ffb8c>] pci_device_probe+0x8c/0x100
[   18.680054]  [<ffffffff81387641>] driver_probe_device+0x1f1/0x310
[   18.680054]  [<ffffffff813877e2>] __driver_attach+0x82/0x90
[   18.680055]  [<ffffffff81387760>] ? driver_probe_device+0x310/0x310
[   18.680056]  [<ffffffff813856d1>] bus_for_each_dev+0x61/0xa0
[   18.680057]  [<ffffffff813870d9>] driver_attach+0x19/0x20
[   18.680058]  [<ffffffff81386d03>] bus_add_driver+0x1b3/0x230
[   18.680059]  [<ffffffffa05ab000>] ? 0xffffffffa05ab000
[   18.680060]  [<ffffffff81387feb>] driver_register+0x5b/0xe0
[   18.680061]  [<ffffffff812fe667>] __pci_register_driver+0x47/0x50
[   18.680065]  [<ffffffffa05ab062>] i40e_init_module+0x62/0x64 [i40e]
[   18.680065]  [<ffffffff810003b6>] do_one_initcall+0x86/0x1b0
[   18.680067]  [<ffffffff810e1b48>] do_init_module+0x56/0x1be
[   18.680068]  [<ffffffff810b7d8d>] load_module+0x1dfd/0x2080
[   18.680069]  [<ffffffff810b51f0>] ? __symbol_put+0x50/0x50
[   18.680071]  [<ffffffff810b8199>] SYSC_finit_module+0x79/0x80
[   18.680072]  [<ffffffff810b81b9>] SyS_finit_module+0x9/0x10
[   18.680074]  [<ffffffff8156a58a>] entry_SYSCALL_64_fastpath+0x1e/0x92
[   18.680074] ---[ end trace 1275282321de2937 ]---
[   18.699912] tn40xx: Tehuti Network Driver, 0.3.6.17.2
[   18.699913] tn40xx: Supported phys : MV88X3120 MV88X3310 MV88E2010 QT2025 TLK10232 AQR105 MUSTANG
[   18.787025] Compat-mlnx-ofed backport release: f36c870
[   18.787025] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git f36c870
[   18.787026] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git
[   18.881774] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.712.30-0 (2014/02/10)
[   19.040542] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[   19.049997] ACPI: Power Button [PWRF]
[   19.116801] Linux agpgart interface v0.103
[   19.157836] agpgart-intel 0000:00:00.0: Intel 440BX Chipset
[   19.162852] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0x0
[   19.379415] Btrfs loaded, crc32c=crc32c-intel
[   19.390794] exFAT: Version 1.2.9
[   19.433183] jme: JMicron JMC2XX ethernet driver version 1.0.8
[   19.439213] sky2: driver version 1.30
[   19.447702] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver v5.3.63

 


we will see in about 2 weeks; i have got ideas what to try with 918+ when redoing the driver, and i have a good kernel config to try out

good to know; it seems ixgbe and even i40e can now be tested

i will stop trying to get sas/scsi drivers working for 3615/17 on 6.2.2; network drivers seem to work fine, and 3615 has mpt2sas and 3617 has mpt2sas and mpt3sas, which is good enough


Hi, I'm currently running DS918+ DSM 6.2.2-24922 Update 4 on an ASRock J3455-ITX and I've used the extra.lzma modified by real3x.
Could you please clarify if there would be any benefit in trying your pack of extras on my hardware? The on-board NIC is a Realtek RTL8111GR.

Maybe your pack has a driver to support and use the integrated HD 500 graphics on this mobo? Thanks.

4 hours ago, vlaser said:

Hi, I'm currently running DS918+ DSM 6.2.2-24922 Update 4 on an ASRock J3455-ITX and I've used the extra.lzma modified by real3x

as far as i have seen/read, he removed jun's drivers for the internal intel gpu, leaving the system in question using the drivers coming directly with dsm

afair jun did make new drivers to support newer hardware (?), but that's one of the things i wanted to look into and test

 

4 hours ago, vlaser said:

Could you please clarify if there would be any benefit in trying your pack of extras on my hardware?

in theory it's the same hardware as the original 918+: J3455 cpu, ahci storage and a realtek nic

 

4 hours ago, vlaser said:

Maybe your pack has a driver to support and use the integrated HD 500 graphics on this mobo?

it does contain the driver from jun, but the loading order is different; @olegin seemed to have success with that, so i took it into my test version (seemed better than to just remove it). that needs testing with different hardware; we can make test versions with one or the other method

 

4 hours ago, vlaser said:

use integrated graphics hd500 on this mobo?

what do you mean by that? don't you have /dev/dri after booting up, to use hardware transcoding?

in general the original driver should work for you

 

15 hours ago, IG-88 said:

as far as i have seen/read, he removed jun's drivers for the internal intel gpu, leaving the system in question using the drivers coming directly with dsm

afair jun did make new drivers to support newer hardware (?), but that's one of the things i wanted to look into and test

Thanks for your answers. Where can I get Jun's latest drivers for new hardware to test, please? Is there any link to download them?

 

Quote

 

in theory it's the same hardware as the original 918+: J3455 cpu, ahci storage and a realtek nic

Yes, you're right. My hardware is working fine except for unexpected shutdowns and reboots when using Video Station. They are related to hw decoding and hw transcoding of video files. This is the reason why I am looking for any improvements in this area.

 

Quote

 

it does contain the driver from jun, but the loading order is different; @olegin seemed to have success with that, so i took it into my test version (seemed better than to just remove it). that needs testing with different hardware; we can make test versions with one or the other method

I have tested my ASRock J3455-ITX with almost all the extra files I have found on this forum:

 

Jun's original extra from the 1.04b loader, Olegin's extra, and your pack of extras from this post (v0.6, which I tried yesterday) gave me the same result. I have always done a clean installation with a new USB drive and a new SSD. In all mentioned cases I got a successful installation, then a manual reboot, then the system started normally, but /dev/dri didn't exist, hw transcoding gave an error, and reboot and shutdown didn't work.

 

With the real3x and Hostilian extras, I got a successful installation, then the system restarted itself normally, /dev/dri was present, and offline hw transcoding in Video Station was working ok (at least I tried to transcode some small video files like the jellyfish ones), but online hw transcoding in Video Station gave unpredictable results. It could work for a short time and then freeze and reboot, or the system just goes to reboot straightaway :( . Reboot and shutdown work fine.

 

Quote

 

what do you mean by that? don't you have /dev/dri after booting up, to use hardware transcoding?

in general the original driver should work for you

As I described above, the original driver fails; the extra with the extracted one gave a bit better result, but it is very unstable...

 

Quote

 

Is there any software to test and benchmark a freshly installed DSM? Thanks.

9 hours ago, vlaser said:

Where can I get Jun's latest drivers for new hardware to test. please? Is any link to download them?

it's part of jun's loader 1.04b already

https://xpenology.com/forum/topic/12952-dsm-62-loader/

 A new ds918 loader support 6.2/6.21 is uploaded.
whats new:
uefi issue fixed.
i915 driver updated.

 

12 hours ago, vlaser said:

It could work for a short time and then freeze and reboot, or the system just goes to reboot straightaway

monitor connected?

13 hours ago, IG-88 said:

it's part of jun's loader 1.04b already

https://xpenology.com/forum/topic/12952-dsm-62-loader/


 A new ds918 loader support 6.2/6.21 is uploaded.
whats new:
uefi issue fixed.
i915 driver updated.

 

monitor connected?

Even this updated driver doesn't work for me, I mean HW transcoding is not activated, etc.

Yes, my desktop computer and NAS are always connected to the same monitor, but on different HDMI inputs, and I switch between them when required.

Why have you asked about it? Could it be a problem? Thanks.

