Driver extension jun 1.03b/1.04b for DSM6.2.3 for 918+ / 3615xs / 3617xs



2 hours ago, mike861033 said:

None of the links is live.

They are all dead (tinyupload is dead; gofile just loops, showing a shortlink to download).

 

All gofile links work fine for me. The problem is on your side.

23 hours ago, IG-88 said:

 

did you try to use 3615 instead of 918+?

i've read complaints lately that 918+ overall delivers less performance than 3615/17

 

I tried 3615 with jun's 1.03b bootloader without changing anything, and below are the results:

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.2.51, port 53179
[  5] local 10.10.2.11 port 5201 connected to 10.10.2.51 port 53180
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  65.2 MBytes   547 Mbits/sec
[  5]   1.00-2.06   sec   100 MBytes   797 Mbits/sec
[  5]   2.06-3.00   sec  84.2 MBytes   748 Mbits/sec
[  5]   3.00-4.00   sec  90.0 MBytes   755 Mbits/sec
[  5]   4.00-5.00   sec  98.6 MBytes   827 Mbits/sec
[  5]   5.00-6.00   sec   108 MBytes   909 Mbits/sec
[  5]   6.00-7.00   sec   113 MBytes   949 Mbits/sec
[  5]   7.00-8.02   sec   110 MBytes   906 Mbits/sec
[  5]   8.02-9.00   sec  92.3 MBytes   788 Mbits/sec
[  5]   9.00-10.00  sec   109 MBytes   913 Mbits/sec
[  5]  10.00-10.17  sec  14.5 MBytes   707 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.17  sec   985 MBytes   813 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.2.51, port 53181
[  5] local 10.10.2.11 port 5201 connected to 10.10.2.51 port 53182
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   109 MBytes   916 Mbits/sec    0    479 KBytes
[  5]   1.00-2.00   sec   113 MBytes   950 Mbits/sec    0    529 KBytes
[  5]   2.00-3.00   sec   113 MBytes   949 Mbits/sec    0    540 KBytes
[  5]   3.00-4.00   sec   111 MBytes   931 Mbits/sec    0    572 KBytes
[  5]   4.00-5.00   sec   112 MBytes   939 Mbits/sec    0    586 KBytes
[  5]   5.00-6.00   sec   106 MBytes   886 Mbits/sec    0    593 KBytes
[  5]   6.00-7.00   sec   113 MBytes   947 Mbits/sec    0    597 KBytes
[  5]   7.00-8.00   sec   113 MBytes   952 Mbits/sec    0    606 KBytes
[  5]   8.00-9.00   sec   113 MBytes   948 Mbits/sec    0    615 KBytes
[  5]   9.00-10.00  sec  82.0 MBytes   688 Mbits/sec    0    500 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.06 GBytes   910 Mbits/sec    0             sender
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Not perfect but better.
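For reference, a result like the one above comes from a plain iperf3 run; the IPs are the ones from the log above, and the awk line just converts the overall bitrate into the MB/s figure you would see during a file copy:

```shell
# on the NAS (server side):        iperf3 -s
# on the client (upload to NAS):   iperf3 -c 10.10.2.11
# reverse direction (download):    iperf3 -c 10.10.2.11 -R
# the 813 Mbits/sec receiver total above is roughly:
awk 'BEGIN { print int(813 / 8), "MB/s" }'
```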


Hi IG-88 (or anyone else who can help),

 

I have a system that uses the R8169 driver for my LAN device, and I modified/loaded the extra.lzma/extra2.lzma for Jun's loader 1.04b ds918+ from your link on the first page. I am presented with the default screen that contains the note about loading find.synology.com, but it appears my device is still not recognized on the network/not assigned an IP address (nor does the search work, as expected). Is there something else I can troubleshoot on my side? The router works fine, all devices are on the same network, the cable is verified good, and I do see a solid green light with a blinking amber light, assumed to indicate some sort of activity. Also, the USB IDs and MAC address were already updated in the first partition, along with the serial.

 

What else should I try here? Thanks in advance!


Actually, I was able to get the network portion to work once I used the older 6.0.2 version (DS3615xs) with Jun's 1.01 loader. However, after the device is detected on the network, it says that no hard disk is currently installed. After running lspci -k on the host with my Ubuntu Live USB, I noticed that pata_atiixp was in the list. I don't plan on doing any RAID, but is this possibly why the disk is not being detected? And if so, could I use an older version of the software, or do I need to buy a PCI adapter that supports the available drivers for my SATA interface? Thanks!
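For what it's worth, pata_atiixp binding usually means the BIOS is running the SATA ports in legacy IDE mode; if the board offers an AHCI setting, the in-kernel ahci driver should bind instead, which DSM does support. A quick way to check what is bound (the excerpt below is fabricated for illustration):

```shell
# fabricated `lspci -k` excerpt; on the real box run:  lspci -k | grep -iA2 sata
sample='00:11.0 SATA controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller
        Kernel driver in use: ahci'
# the goal is to see ahci bound here rather than pata_atiixp
echo "$sample" | grep -q 'Kernel driver in use: ahci' && echo supported
```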

On 9/29/2020 at 6:38 PM, IG-88 said:

afair the change breaking the drivers was introduced in DSM 6.2.1-23824, so if you use the latest 6.2.1 it will be the same as 6.2.2

but the i915 driver in both is the same, we did not have the source jun used for this driver so it was reused in 6.2.2 and worked

so if hardware transcoding is your problem then it might be something different, the best way in that case is 6.2.3, just follow the guide in that thread

the new i915 driver from synology works at least as well as jun's, and seems to have fewer problems with some apollolake and geminilake boards

 

 

i would not clean out the firmware directory, as there are firmware files from synology there that do not come with the loader; one file more there does not hurt, and the loader will compare and replace firmware files by itself, so any old/different files will be replaced and new files will be copied

cleaning the update folder does not hurt but is usually not needed (if it were, it would be written in the guide i wrote for the extra.lzma in question)

I too found the performance of the emulated vmxnet3 network adapter very bad. Is there any test build of virtio for DS918 @ 6.2.3?

I think the net driver is good enough, as disk performance would not differ much for LUN passthrough.

qemu-guest-agent would be awesome, but I guess that is not a driver-only issue. Thanks.


Hello,
(Sorry for my english, I used google translate)

I have a problem with losing connection when the network is very busy.
For example, when I copy a large file from the NAS to my PC, it uses the network intensively, and after about 15 seconds of transfer at 110 MB/s the network connection is cut and the NAS is no longer accessible.
I have to unplug the network cable from the motherboard and plug it back in to have access to the NAS again (I don't even need to turn it off).
I tried turning my router off and on again; it didn't change anything, so the problem is with the NAS.
I am using an ASRock H67M-ITX (Realtek RTL8111E), DSM 6.2.3 and "extra.lzma for loader 1.03b ds3617 DSM 6.2.3 v0.11.2_test".
I had the same problem in 6.2.2.

Have you ever encountered this problem?


Hi again,

 

Got a Dell PowerEdge T630; it has a PERC H730 Adapter (1028:1F43) using an LSI SAS3108 chipset. The megaraid driver is not working (as expected) in 6.2.2 or 6.2.3.

 

It actually worked (at least to install) when the controller was in RAID mode. Now I've set the controller to HBA mode (as I don't want to use the HW RAID at all). I've read that it might be possible to flash it to IT mode (Initiator Target). I've updated the controller with the latest firmware from Dell; I have not yet updated the BIOS, as I don't think that will make any difference. The BIOS SATA setting is set to AHCI.

 

I could maybe get this working under 6.1.x; however, IIRC 6.1 lacks some of the hardware virtualization acceleration I need to run my VMs smoothly.

 

I've gathered some logs if they would be of any help, but it seems that logs are not what's missing and that the problem is backporting drivers? Can I help in any way? I have a fair amount of practice doing just this in my work.

 

I've only tried DS3617xs, as I have a Xeon E3 processor, 8 drive slots and need HW-accelerated virtualization. Would DS918+ be worth a shot? I think there was something problematic with DS3615xs for what I was trying to do last time.

 

So, has anyone successfully flashed an LSI-based PERC card to IT mode? Can I help backport drivers? Could I completely remove the PERC card and buy something else to put in? The motherboard has no connectors for the 2 cables (0H3Y5T) coming from the disks. I think they are mini SAS connectors; they are marked "CTRL SAS A" on the controller side and "PB_A" on the disk side (aka backplane?). I've never seen cables like these, as I'm pretty new to OEM servers to begin with. If anyone knows what connector that is, can confirm it is mini SAS, or knows of a PCI card (ideally supported and tested) that has that connector, I could swap it out or just add one in the chassis.

 

BR

4 hours ago, Eeso said:

 PERC H730 Adapter (1028:1F43) using an LSI SAS3108 chipset

that id is nowhere to be seen in the drivers, must be a sub id, i guess the main id for this would be 1000:005d aka invader

https://pci-ids.ucw.cz/read/PC/1000/005d

the driver would be megaraid_sas.ko and this hardware (chip) is supported by jun's base driver set already

can you do a factory reset of the controller when in hba mode?

also it seems that its about activating JBOD mode in "Ctrl Mgmt" and also mark the disks as JBOD in "PD Mgmt" - did you do both?

 

there is also a newer driver to download, but i had no luck with external sas drivers before (mpt3sas, crashing), might be because of changes in synology's kernel, so i guess the megaraid driver might have the same problem when compiled; i'm not packing a driver into the extra.lzma that is likely to fail without anyone testing it before

https://www.broadcom.com/products/storage/raid-controllers/megaraid-sas-9361-8i

so please check both jbod settings, test the existing driver and if that fails we can test a newer driver

 


Hi,

I did a fresh install with loader 1.04b ds918+ and installed DSM 6.2.3 two weeks ago. Yesterday I replaced the driver files with the ones from the "1.04b ds918+ DSM 6.2.3 v0.12.1" link. Are these the right driver files or do I need v0.13.3? How do I know which driver files are correct? With v0.12.1 HW transcoding works, but it is not smooth and the picture is not clear. I use an ASRock J5005. Do I have to change more things to make it work?

1 hour ago, Jawattus said:

 "1.04b ds918+ DSM 6.2.3 v0.12.1". Are these the right driver files or do I need v0.13.3?

yes, the 12.1 is named "special purpose and test" and that's not what you have in mind

 

1 hour ago, Jawattus said:

How do I know which driver files are correct?

 

usually the one with the highest number is the newest; when not using that one, you should know why and have a reason

 

1 hour ago, Jawattus said:

 

With v0.12.1 HW transcoding works, but it is not smooth and the picture is not clear.

 

might depend on your settings, and partly it might be the compromise when down- or upscaling with hardware, it can't be compared with a good 2-pass algorithm; it also might depend on the settings synology uses when transcoding, i never looked at what settings are used with video station and whether they can be tweaked, because i don't use it

 

1 hour ago, Jawattus said:

I use an ASRock J5005. Do I have to change more things to make it work?

afaik there is not much you can tweak in the video station gui, it's just a binary switch to have hardware transcoding or not

there will be no difference between 12.1 and 13.3, it's always the i915 driver that comes with DSM and video station

if you want to change things, look into how video station calls ffmpeg to transcode the video and try to find out how to improve that; it's also possible to install a newer version of ffmpeg externally via a 3rd-party package
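As a rough sketch of the kind of call involved (not video station's actual command line - file names, resolution and bitrate here are placeholders, and /dev/dri/renderD128 is the usual render node on these boards), a VAAPI transcode with a stock ffmpeg looks like this; the guards make it a no-op on machines without the hardware or input file:

```shell
# placeholders: input.mkv / output.mp4; check the render node with: ls /dev/dri
if command -v ffmpeg >/dev/null 2>&1 && [ -e /dev/dri/renderD128 ] && [ -r input.mkv ]; then
  ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i input.mkv \
    -vf 'format=nv12,hwupload,scale_vaapi=w=1280:h=720' \
    -c:v h264_vaapi -b:v 4M output.mp4
else
  echo "skipped: ffmpeg, render node or input file not present"
fi
```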

 

 

 

 

On 10/16/2020 at 7:04 PM, IG-88 said:

that id is nowhere to be seen in the drivers, must be a sub id, i guess the main id for this would be 1000:005d aka invader

https://pci-ids.ucw.cz/read/PC/1000/005d

Correct

 

On 10/16/2020 at 7:04 PM, IG-88 said:

the driver would be megaraid_sas.ko and this hardware (chip) is supported by jun's base driver set already

Oh okay, I'll try to disable the megaraid driver from your package to try Jun's base driver; maybe it's as simple as removing all megaraid entries from DISK_MODULES in rc.modules inside the archive?

 

I actually didn't catch that Jun had already packed that driver, but I still needed your igb driver, so many thanks for that.

 

On 10/16/2020 at 7:04 PM, IG-88 said:

can you do a factory reset of the controller when in hba mode?

I did this; I read that the controller needs a reset right after it has been switched to HBA mode.

 

On 10/16/2020 at 7:04 PM, IG-88 said:

also it seems that its about activating JBOD mode in "Ctrl Mgmt" and also mark the disks as JBOD in "PD Mgmt" - did you do both?

I'll check if I did this correctly

 

On 10/16/2020 at 7:04 PM, IG-88 said:

there is also a newer driver to download, but i had no luck with external sas drivers before (mpt3sas, crashing), might be because of changes in synology's kernel, so i guess the megaraid driver might have the same problem when compiled; i'm not packing a driver into the extra.lzma that is likely to fail without anyone testing it before

https://www.broadcom.com/products/storage/raid-controllers/megaraid-sas-9361-8i

so please check both jbod settings, test the existing driver and if that fails we can test a newer driver

I will report back on this ASAP

 

About the 6.11 minimum requirement for the firmware: I updated the controller with Dell's version 25.5.6.0009, A14, which is the latest. I've tried to read the changelog but I can't find any reference to which broadcom firmware they pack in it; would you possibly have a clue where I can find this?

45 minutes ago, Eeso said:

Oh okay, I'll try to disable the megaraid driver from your package to try Jun's base driver; maybe it's as simple as removing all megaraid entries from DISK_MODULES in rc.modules inside the archive?

 

that's not going to work, the script inside the extra.lzma handles this and replaces different or missing drivers from inside the extra.lzma

my extra.lzma also contains the driver and it should work too

the comment was meant in the way that if you already tried jun's vanilla loader without my extra then you already tested the driver; you can swap extras at any time, they will copy their own drivers to disk and load them

 

1 hour ago, Eeso said:

About the 6.11 minimum requirement for the firmware, I updated with Dell's Version 25.5.6.0009, A14

 

on broadcom's homepage MR 6.14 is associated with 24.21.0-0126 (4/2020); anything starting with 24.16.0-0082 should be ok (that's where MR 6.12 starts)

broadcom does not offer 25.x.x.x firmware; i guess if you have the latest firmware it should be ok

 

besides the jbod mode you can try to add a disk as raid0 and check if it shows up

also, if you already have a running system (sata/ahci connected disks) then you can check logs like dmesg to see what's in there for the controller/driver: is the controller found by the driver, maybe the driver crashes (i've seen this with 6.2.2 drivers i made for all sas/scsi controllers - loaded and detected hardware without a disk connected, crashed when a disk was connected - must have been something about synology's specialties with sas/scsi disk handling)

if it does not work you can use any other sas controller like the 9211-8i, the sas backplane in the server should not care about the controller brand
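To pull the relevant lines out on the NAS, something like the grep below works (the sample lines are made up purely so the command has input; the real output is what tells you whether the driver loaded, found the controller, or crashed):

```shell
# on the NAS (serial console or ssh):  dmesg | grep -iE 'megaraid|mpt3sas'
# made-up stand-in lines, only so the grep below has something to match:
dmesg_sample='[    3.10] megaraid_sas 0000:02:00.0: example message one
[   12.40] megaraid_sas 0000:02:00.0: example message two'
echo "$dmesg_sample" | grep -c 'megaraid_sas'
```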

2 hours ago, IG-88 said:

that's not going to work, the script inside the extra.lzma handles this and replaces different or missing drivers from inside the extra.lzma

my extra.lzma also contains the driver and it should work too

the comment was meant in the way that if you already tried jun's vanilla loader without my extra then you already tested the driver; you can swap extras at any time, they will copy their own drivers to disk and load them

Okay, I removed the .ko files from the package as well, but you are saying it's the same driver in both your extra and jun's vanilla loader?

 

2 hours ago, IG-88 said:

on broadcom's homepage MR 6.14 is associated with 24.21.0-0126 (4/2020); anything starting with 24.16.0-0082 should be ok (that's where MR 6.12 starts)

broadcom does not offer 25.x.x.x firmware; i guess if you have the latest firmware it should be ok

I hope so, in the RAID controller configuration utility under Controller Management it says the Package version is 25.5.7.0005, the Firmware version is 4.300.00-8364 and the NVDATA version is 3.1511.00-0028

 

2 hours ago, IG-88 said:

besides the jbod mode you can try to add a disk as raid0 and check if it shows up

also, if you already have a running system (sata/ahci connected disks) then you can check logs like dmesg to see what's in there for the controller/driver: is the controller found by the driver, maybe the driver crashes (i've seen this with 6.2.2 drivers i made for all sas/scsi controllers - loaded and detected hardware without a disk connected, crashed when a disk was connected - must have been something about synology's specialties with sas/scsi disk handling)

if it does not work you can use any other sas controller like the 9211-8i, the sas backplane in the server should not care about the controller brand

When I set the controller to RAID mode it works. I had this configuration at first; everything installed and worked well, but I didn't want the hardware raid or the controller to handle the disks in such a way, which is why I switched to HBA mode and got problems.

 

There is no JBOD mode; there is only HBA and RAID mode in the Controller Management. In the Physical Disk Management I can only choose RAID or Non-RAID mode, and it's in Non-RAID; I suppose this is what you mean by JBOD.

 

But as of now it seems my best option is to just change to another sas controller card. I wonder how that plays out with an OEM server like this; it should probably work, though, and I've got tons of spare PCI slots.

On 10/19/2020 at 3:24 PM, Eeso said:

Okay, I removed the .ko files from the package as well, but you are saying it's the same driver in both your extra and jun's vanilla loader?

not binary the same but the same version (devices supported); there is no documentation or source for jun's driver, and my driver is made from the 6.2.2 kernel source (before 3/2020 it was the dsm 6.2 beta kernel source, >2 years old)

 

On 10/19/2020 at 3:24 PM, Eeso said:

When I set the controller to RAID mode it works. I had this configuration at first; everything installed and worked well, but I didn't want the hardware raid or the controller to handle the disks in such a way, which is why I switched to HBA mode and got problems.

that indicates the driver in the extra.lzma is working in general, and it should be checked what's in dmesg about disks when in non raid mode

 

On 10/19/2020 at 3:24 PM, Eeso said:

this is what you mean with JBOD

that's how it's named by broadcom, if you look into the 2nd link from above

https://www.broadcom.com/support/knowledgebase/1211161496893/megaraid-3ware-and-hba-support-for-various-raid-levels-and-jbod-

then the non raid mode is named that way, and it might be the same naming in the controller when defining a disk as non-raid aka jbod or as belonging to a raid

 

to test if that is working properly you could use a normal (recent) live/recovery linux and check if a single non raid disk is found

in theory that mode could have been introduced later in the product cycle and the older driver we have in the extra.lzma does not support that mode; this is to rule out problems with the configuration of the controller and the driver version ...

 

Edited by IG-88

Here is some logging after starting the installation of DSM:

https://pastebin.com/ePCiuUqz

Extended: https://pastebin.com/zunNz0aa

 

I've tried a linux livecd and it seems to work. I think DSM has a problem hibernating the disk and restarting the system; the 120s hung-task message confirms this, I think. It finds the controller firmware in a fault state and can't handle it from there. I've found others on this forum with the same problem, and they also confirmed that it works on a normal linux distro.

 

I tried to install Ubuntu 14.04; it found the drive and installed on it but didn't want to boot afterwards, dumping me in the grub rescue console. But that could also be a BIOS/UEFI problem.


Okay, same with Ubuntu 20.04, running in BIOS mode, not UEFI mode. I can see that it installed correctly and shut down, however, so I believe this problem is of another origin. I can also confirm DSM finds and installs on the disk, as I had to erase the disks multiple times, and there I can see the partitions and the data. If I run without disks I get error 38 as expected, and dmesg also just says it cannot recover from a faulty fw state. Would you believe the controller might be broken? I believe not, however; everything was working well in RAID mode and before I removed the old OS.

 

I guess I can either set the controller to RAID mode and then mark the individual disks as Non-RAID, or just have it in HBA mode. But the latter doesn't seem to work. It worked well when I initially had it in RAID mode; as soon as I switched, it stopped working. I can retry the factory reset directly after setting the controller to HBA mode; it is something several people on the internet reported must be done.

 

Might the controller just be too old to work well with linux in HBA mode?

 

I'm looking at maybe buying an LSI SAS 9300-8i SGL, which has a SAS3008 chipset; would you recommend this one? The PERC H730 has a SAS3108, but since Dell has rebranded it, it might not be the same; the SAS3108 might be an older chipset to begin with, or is not updated by Dell in the same way LSI cards are?


I wish I had found this post before; I was stuck with the old driver extension from the top menu: DSM 6.2.3-25423 intermediate update

 

I replaced the extra3617_v0.5_test with the extra3617_v0.11.2_test and am now able to run DSM 6.2.3-25426 update 2

 

I have a SuperMicro X10SDV-6C-TLN4F

 

2 NICs out of 5 are working.

The 2 x Intel 1GbE ones are working; the 2 x 10GbE ones show up but are not working (driver log is attached for reference).

The Realtek NIC is not working (IPMI link).

 

Despite all:

cat /proc/cpuinfo shows 6 cores (I thought there was a DSM limit of 4 cores?)

SSD drives are recognized (I thought they did not work on this DSM?)

Would an M.2 M-Key SATA drive work as well?

 

Thank you for making this possible!

 


 

Before that, I also tried Jun's Loader v1.04b DS918+, which worked with DSM_DS918+_25426 without any extension.

It was more responsive than this one, but the CPU was overheating (over 95°C at idle).

 

 

[attachment: LINUX.zip]


2 hours ago, David Rollin said:

The 2 x Intel 1GbE ones are working; the 2 x 10GbE ones show up but are not working (driver log is attached for reference).

 

your driver (ixgbe) is version 4.1.1 and 5 years old; my extra contains a newer driver, 5.6.3

what's in dmesg about the driver, and what's the vid:pid you see with lspci?

 

Quote

The Realtek nic is not working (IPMI link)

 

it's supposed to be that way; some bios or ipmi versions might contain an option to make this available as a nic, but the normal default is an exclusive management network without os access

 

2 hours ago, David Rollin said:

cat /proc/cpuinfo shows 6 cores (I thought there was a DSM limit of 4 cores?)

 

HT "virtual" cores are counted as cores, so it's 4 cores with HT (or 8 without) for 3615 and 918+

3617 comes with 16 (default kernel config from synology, and as we can only add drivers it's fixed)
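Since /proc/cpuinfo lists logical CPUs (HT threads included), counting the processor entries shows directly what the per-model limit applies to:

```shell
# counts logical cpus (HT threads included); DSM's per-model kernel limit
# (8 logical for 918+/3615, 16 for 3617) applies to this number
grep -c '^processor' /proc/cpuinfo
```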

 

Quote

SSD drives are recognized (i thought they did not work on this DSM ? )

 

sata ssds work as normal disks and can also be used as exclusive cache drives (write cache when redundant)

it's nvme that can only be used as a cache drive, only with 918+ and only after patching a file (and when updating dsm the cache needs to be disabled or data loss can happen, because the patched file might be replaced when updating)

 

Quote

Would M.2 M-Key SATA work as well ?

 

imho yes, it should show up as a sata drive, but that kind of defeats the purpose of the nvme slot; with its 4 pcie lanes it could be so much more, like when used with an M.2 to PCIe 4x adapter with a 20cm or 50cm cable (i'm about to test this adapter) it could be used to have 6 or more disks attached

14 hours ago, IG-88 said:

dmesg about the ethernet driver

[   16.449173] e1000e: Intel(R) PRO/1000 Network Driver - 3.6.0-NAPI
[   16.449175] e1000e: Copyright(c) 1999 - 2019 Intel Corporation.
[   16.456867] Intel(R) Gigabit Ethernet Linux Driver - version 5.3.5.39
[   16.456870] Copyright(c) 2007 - 2019 Intel Corporation.
[   16.457185] igb 0000:05:00.0: irq 43 for MSI/MSI-X
[   16.457191] igb 0000:05:00.0: irq 44 for MSI/MSI-X
[   16.517347] igb 0000:05:00.0 eth0: mixed HW and IP checksum settings.
[   16.517560] igb 0000:05:00.0: added PHC on eth0
[   16.517562] igb 0000:05:00.0: Intel(R) Gigabit Ethernet Linux Driver
[   16.517564] igb 0000:05:00.0: eth0: (PCIe:5.0GT/s:Width x4)
[   16.517565] igb 0000:05:00.0 eth0: MAC: ac:1f:6b:1b:1a:8a
[   16.517645] igb 0000:05:00.0: eth0: PBA No: 010A00-000
[   16.519441] igb 0000:05:00.0: LRO is disabled
[   16.519450] igb 0000:05:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   16.519852] igb 0000:05:00.1: irq 45 for MSI/MSI-X
[   16.519867] igb 0000:05:00.1: irq 46 for MSI/MSI-X
[   16.582048] igb 0000:05:00.1 eth1: mixed HW and IP checksum settings.
[   16.582191] igb 0000:05:00.1: added PHC on eth1
[   16.582194] igb 0000:05:00.1: Intel(R) Gigabit Ethernet Linux Driver
[   16.582195] igb 0000:05:00.1: eth1: (PCIe:5.0GT/s:Width x4)
[   16.582197] igb 0000:05:00.1 eth1: MAC: ac:1f:6b:1b:1a:8b
[   16.582278] igb 0000:05:00.1: eth1: PBA No: 010A00-000
[   16.583409] igb 0000:05:00.1: LRO is disabled
[   16.583413] igb 0000:05:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   16.591541] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.6.3
[   16.591544] Copyright(c) 1999 - 2019 Intel Corporation.
[   16.591682] ACPI Warning: For \_SB_.PCI0.BR2C._PRT: Return Package has no elements (empty) (20130328/nsprepkg-125)
[   17.290833] systemd-udevd[6383]: starting version 204
[   17.685077] ixgbe 0000:03:00.0: irq 47 for MSI/MSI-X
[   17.685087] ixgbe 0000:03:00.0: irq 48 for MSI/MSI-X
[   17.685094] ixgbe 0000:03:00.0: irq 49 for MSI/MSI-X
[   17.685101] ixgbe 0000:03:00.0: irq 50 for MSI/MSI-X
[   17.685108] ixgbe 0000:03:00.0: irq 51 for MSI/MSI-X
[   17.685115] ixgbe 0000:03:00.0: irq 52 for MSI/MSI-X
[   17.685122] ixgbe 0000:03:00.0: irq 53 for MSI/MSI-X
[   17.685129] ixgbe 0000:03:00.0: irq 54 for MSI/MSI-X
[   17.685136] ixgbe 0000:03:00.0: irq 55 for MSI/MSI-X
[   17.685143] ixgbe 0000:03:00.0: irq 56 for MSI/MSI-X
[   17.685150] ixgbe 0000:03:00.0: irq 57 for MSI/MSI-X
[   17.685157] ixgbe 0000:03:00.0: irq 58 for MSI/MSI-X
[   17.685164] ixgbe 0000:03:00.0: irq 59 for MSI/MSI-X
[   17.685199] ixgbe 0000:03:00.0: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12
[   17.755544] ixgbe 0000:03:00.0 eth2: MAC: 5, PHY: 7, PBA No: 023B00-000
[   17.755548] ixgbe 0000:03:00.0: ac:1f:6b:1b:1b:b6
[   17.755551] ixgbe 0000:03:00.0 eth2: Enabled Features: RxQ: 12 TxQ: 12 FdirHash vxlan_rx
[   17.761744] ixgbe 0000:03:00.0 eth2: Intel(R) 10 Gigabit Network Connection
[   19.172816] ixgbe 0000:03:00.1: irq 60 for MSI/MSI-X
[   19.172826] ixgbe 0000:03:00.1: irq 61 for MSI/MSI-X
[   19.172834] ixgbe 0000:03:00.1: irq 62 for MSI/MSI-X
[   19.172841] ixgbe 0000:03:00.1: irq 63 for MSI/MSI-X
[   19.172849] ixgbe 0000:03:00.1: irq 64 for MSI/MSI-X
[   19.172857] ixgbe 0000:03:00.1: irq 65 for MSI/MSI-X
[   19.172864] ixgbe 0000:03:00.1: irq 66 for MSI/MSI-X
[   19.172871] ixgbe 0000:03:00.1: irq 67 for MSI/MSI-X
[   19.172879] ixgbe 0000:03:00.1: irq 68 for MSI/MSI-X
[   19.172886] ixgbe 0000:03:00.1: irq 69 for MSI/MSI-X
[   19.172893] ixgbe 0000:03:00.1: irq 70 for MSI/MSI-X
[   19.172900] ixgbe 0000:03:00.1: irq 71 for MSI/MSI-X
[   19.172908] ixgbe 0000:03:00.1: irq 72 for MSI/MSI-X
[   19.172944] ixgbe 0000:03:00.1: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12
[   19.243886] ixgbe 0000:03:00.1 eth3: MAC: 5, PHY: 7, PBA No: 023B00-000
[   19.243891] ixgbe 0000:03:00.1: ac:1f:6b:1b:1b:b7
[   19.243894] ixgbe 0000:03:00.1 eth3: Enabled Features: RxQ: 12 TxQ: 12 FdirHash vxlan_rx
[   19.250091] ixgbe 0000:03:00.1 eth3: Intel(R) 10 Gigabit Network Connection
[   19.254699] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.4.10
[   19.254702] i40e: Copyright(c) 2013 - 2018 Intel Corporation.

[   19.259278] tn40xx: Tehuti Network Driver, 0.3.6.17.2
[   19.264328] tn40xx: Supported phys : MV88X3120 MV88X3310 MV88E2010 QT2025 TLK10232 AQR105 MUSTANG

14 hours ago, IG-88 said:

vid:pid you see with lspci

   Kernel driver in use: ixgbe
0000:03:00.1 Class 0200: Device 8086:15ad
        Subsystem: Device 15d9:15ad
        Kernel driver in use: ixgbe
0000:05:00.0 Class 0200: Device 8086:1521 (rev 01)
        Subsystem: Device 15d9:1521
        Kernel driver in use: igb
0000:05:00.1 Class 0200: Device 8086:1521 (rev 01)
        Subsystem: Device 15d9:1521
        Kernel driver in use: igb
0000:06:00.0 Class 0604: Device 1a03:1150 (rev 03)
0000:07:00.0 Class 0300: Device 1a03:2000 (rev 30)
        Subsystem: Device 15d9:086d
0000:ff:0b.0 Class 0880: Device 8086:6f81 (rev 03)
        Subsystem: Device 8086:6f81
0000:ff:0b.1 Class 1101: Device 8086:6f36 (rev 03)
        Subsystem: Device 8086:6f36
0000:ff:0b.2 Class 1101: Device 8086:6f37 (rev 03)
        Subsystem: Device 8086:6f37
0000:ff:0b.3 Class 0880: Device 8086:6f76 (rev 03)
0000:ff:0c.0 Class 0880: Device 8086:6fe0 (rev 03)
        Subsystem: Device 8086:6fe0
0000:ff:0c.1 Class 0880: Device 8086:6fe1 (rev 03)
        Subsystem: Device 8086:6fe1
0000:ff:0c.2 Class 0880: Device 8086:6fe2 (rev 03)
        Subsystem: Device 8086:6fe2
0000:ff:0c.3 Class 0880: Device 8086:6fe3 (rev 03)
        Subsystem: Device 8086:6fe3
0000:ff:0c.4 Class 0880: Device 8086:6fe4 (rev 03)
        Subsystem: Device 8086:6fe4
0000:ff:0c.5 Class 0880: Device 8086:6fe5 (rev 03)
        Subsystem: Device 8086:6fe5
0000:ff:0f.0 Class 0880: Device 8086:6ff8 (rev 03)
        Subsystem: Device 8086:6ff8
0000:ff:0f.4 Class 0880: Device 8086:6ffc (rev 03)
        Subsystem: Device 8086:6fe0
0000:ff:0f.5 Class 0880: Device 8086:6ffd (rev 03)
        Subsystem: Device 8086:6fe0
0000:ff:0f.6 Class 0880: Device 8086:6ffe (rev 03)
        Subsystem: Device 8086:6fe0
0000:ff:10.0 Class 0880: Device 8086:6f1d (rev 03)
        Subsystem: Device 8086:6f1d
0000:ff:10.1 Class 1101: Device 8086:6f34 (rev 03)
        Subsystem: Device 8086:6f34
0000:ff:10.5 Class 0880: Device 8086:6f1e (rev 03)
        Subsystem: Device 8086:6f1e
0000:ff:10.6 Class 1101: Device 8086:6f7d (rev 03)
        Subsystem: Device 8086:6f7d
0000:ff:10.7 Class 0880: Device 8086:6f1f (rev 03)
        Subsystem: Device 8086:6f1f
0000:ff:12.0 Class 0880: Device 8086:6fa0 (rev 03)
        Subsystem: Device 8086:6fa0
0000:ff:12.1 Class 1101: Device 8086:6f30 (rev 03)
        Subsystem: Device 8086:6f30
0000:ff:13.0 Class 0880: Device 8086:6fa8 (rev 03)
        Subsystem: Device 8086:6fa8
0000:ff:13.1 Class 0880: Device 8086:6f71 (rev 03)
        Subsystem: Device 8086:6f71
0000:ff:13.2 Class 0880: Device 8086:6faa (rev 03)
        Subsystem: Device 8086:6faa
0000:ff:13.3 Class 0880: Device 8086:6fab (rev 03)
        Subsystem: Device 8086:6fab
0000:ff:13.4 Class 0880: Device 8086:6fac (rev 03)
        Subsystem: Device 8086:6fac
0000:ff:13.5 Class 0880: Device 8086:6fad (rev 03)
        Subsystem: Device 8086:6fad
0000:ff:13.6 Class 0880: Device 8086:6fae (rev 03)
0000:ff:13.7 Class 0880: Device 8086:6faf (rev 03)
0000:ff:14.0 Class 0880: Device 8086:6fb0 (rev 03)
        Subsystem: Device 8086:6fb0
0000:ff:14.1 Class 0880: Device 8086:6fb1 (rev 03)
        Subsystem: Device 8086:6fb1
0000:ff:14.2 Class 0880: Device 8086:6fb2 (rev 03)
        Subsystem: Device 8086:6fb2
0000:ff:14.3 Class 0880: Device 8086:6fb3 (rev 03)
        Subsystem: Device 8086:6fb3
0000:ff:14.4 Class 0880: Device 8086:6fbc (rev 03)
0000:ff:14.5 Class 0880: Device 8086:6fbd (rev 03)
0000:ff:14.6 Class 0880: Device 8086:6fbe (rev 03)
0000:ff:14.7 Class 0880: Device 8086:6fbf (rev 03)
0000:ff:15.0 Class 0880: Device 8086:6fb4 (rev 03)
        Subsystem: Device 8086:6fb4
0000:ff:15.1 Class 0880: Device 8086:6fb5 (rev 03)
        Subsystem: Device 8086:6fb5
0000:ff:15.2 Class 0880: Device 8086:6fb6 (rev 03)
        Subsystem: Device 8086:6fb6
0000:ff:15.3 Class 0880: Device 8086:6fb7 (rev 03)
        Subsystem: Device 8086:6fb7
0000:ff:1e.0 Class 0880: Device 8086:6f98 (rev 03)
        Subsystem: Device 8086:6f98
0000:ff:1e.1 Class 0880: Device 8086:6f99 (rev 03)
        Subsystem: Device 8086:6f99
0000:ff:1e.2 Class 0880: Device 8086:6f9a (rev 03)
        Subsystem: Device 8086:6f9a
0000:ff:1e.3 Class 0880: Device 8086:6fc0 (rev 03)
        Subsystem: Device 8086:6fc0
0000:ff:1e.4 Class 0880: Device 8086:6f9c (rev 03)
        Subsystem: Device 8086:6f9c
0000:ff:1f.0 Class 0880: Device 8086:6f88 (rev 03)
0000:ff:1f.2 Class 0880: Device 8086:6f8a (rev 03)

 

19 hours ago, IG-88 said:

i'm about to test this adapter

Let me know if you succeed getting 6 or more disks attached, this would be awesome !

6 hours ago, David Rollin said:

[   16.591541] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.6.3
[   16.591544] Copyright(c) 1999 - 2019 Intel Corporation

...

[   17.685199] ixgbe 0000:03:00.0: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12
[   17.755544] ixgbe 0000:03:00.0 eth2: MAC: 5, PHY: 7, PBA No: 023B00-000
[   17.755548] ixgbe 0000:03:00.0: ac:1f:6b:1b:1b:b6
[   17.755551] ixgbe 0000:03:00.0 eth2: Enabled Features: RxQ: 12 TxQ: 12 FdirHash vxlan_rx
[   17.761744] ixgbe 0000:03:00.0 eth2: Intel(R) 10 Gigabit Network Connection

...

[   19.172944] ixgbe 0000:03:00.1: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12
[   19.243886] ixgbe 0000:03:00.1 eth3: MAC: 5, PHY: 7, PBA No: 023B00-000
[   19.243891] ixgbe 0000:03:00.1: ac:1f:6b:1b:1b:b7
[   19.243894] ixgbe 0000:03:00.1 eth3: Enabled Features: RxQ: 12 TxQ: 12 FdirHash vxlan_rx
[   19.250091] ixgbe 0000:03:00.1 eth3: Intel(R) 10 Gigabit Network Connection

 

looks all pretty normal and in working order - how did you find out it does not work?

i'd say look for other sources of trouble

maybe just connect an already tested 1G network cable and look in the dsm gui if the link status of one nic changes

(i guess 2.5G and 5G might not work, only 1G and 10G)
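To script that link-status check from an ssh shell instead of the dsm gui, the negotiated speed of each nic can be read from sysfs. A minimal sketch, assuming the standard Linux `/sys/class/net/<iface>/speed` layout (the interface names are whatever your system assigned, e.g. eth2/eth3 from the log above):

```python
import os

def read_link_speed(iface, sysfs_root="/sys/class/net"):
    """Return the negotiated link speed in Mbit/s, or None if the
    interface is down or the speed is unknown (sysfs reports -1)."""
    path = os.path.join(sysfs_root, iface, "speed")
    try:
        with open(path) as f:
            speed = int(f.read().strip())
    except (OSError, ValueError):
        return None
    return speed if speed > 0 else None

if __name__ == "__main__" and os.path.isdir("/sys/class/net"):
    # print every interface with its current link speed
    for iface in sorted(os.listdir("/sys/class/net")):
        print(iface, read_link_speed(iface))
```

If a nic shows `10000` once the cable is plugged in, the driver and link are fine and the problem is elsewhere (switch, cable, or client side).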

 

6 hours ago, David Rollin said:

Let me know if you succeed getting 6 or more disks attached, this would be awesome !

 

i did some testing and wrote a little about 8 port ahci controllers that are possible to buy (for a reasonable price)

 

https://xpenology.com/forum/topic/19854-sata-controllers-not-recognized/?do=findComment&comment=122709

 

 

i also started a new thread about such cards (to bundle that information and let new information about better controllers and cards come in)

 

https://xpenology.com/forum/topic/35882-new-sataahci-cards-with-more-then-4-ports-and-no-sata-multiplexer/?tab=comments#comment-172511

 

maybe there will be better versions with complete pcie 3.0 support on the bridge chip and the pcie-to-sata chips, and maybe they won't go too cheap and will use a bridge chip that can take 4 lanes from the host (the controllers we have so far only use 2 lanes of pcie 2.0)

 

if it does not have to be ahci then sas hba's might also be a choice, there are 8 and 16 port controllers. i know for sure that lsi sas controllers work with fewer than the full pcie lanes (tested down to just one). the usual model 9211-8i is only pcie 2.0, but there are newer versions (93xx) with pcie 3.0 too, and that's up to 4000MB/s - that way it's possible to have 8 sata ssd's or 16 hdd's connected without compromising the speed too much
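For reference, the per-lane numbers behind those figures fall out of the pcie line rates and encodings (gen2: 5 GT/s with 8b/10b, gen3: 8 GT/s with 128b/130b). A quick sketch of the arithmetic (theoretical one-direction payload bandwidth, before protocol overhead):

```python
def pcie_bandwidth_mb_s(gen, lanes):
    """Theoretical one-direction PCIe payload bandwidth in MB/s:
    line rate times encoding efficiency, ignoring protocol overhead."""
    rates = {
        2: (5.0, 8 / 10),     # gen2: 5 GT/s, 8b/10b encoding
        3: (8.0, 128 / 130),  # gen3: 8 GT/s, 128b/130b encoding
    }
    gt_per_s, efficiency = rates[gen]
    # GT/s * efficiency -> Gbit/s per lane; * 1000 / 8 -> MB/s per lane
    return gt_per_s * efficiency * 1000 / 8 * lanes

# the cheap ahci cards discussed above only use 2 lanes of pcie 2.0:
print(pcie_bandwidth_mb_s(2, 2))  # -> 1000.0 MB/s
# an lsi 93xx hba on pcie 3.0 x4:
print(pcie_bandwidth_mb_s(3, 4))  # ~3938 MB/s, i.e. the "up to 4000MB/s" above
```

So a pcie 3.0 x4 hba has roughly four times the host bandwidth of the 2-lane pcie 2.0 ahci cards, which is why it can feed 8 sata ssd's without becoming the bottleneck.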

 

Edited by IG-88

Hi

I am using DSM 6.2.3-25426 update2 with jun's 1.04b bootloader and ig-88's 0.13.3 extra/extra2 lzma driver.

My mainboard is an ASRock H110M-ITX/ac.

 

But there is a weird thing.

File upload speed to the NAS is full wire speed (1 Gbps), but download from the NAS is stuck at 80 MBytes/s (~700 Mbps).
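As a sanity check on the units (this converter is a hypothetical helper, not from the post): 1 MByte/s is 8 Mbit/s, so 80 MBytes/s is 640 Mbit/s of payload, which with tcp/ip and ethernet framing overhead roughly lines up with the ~700 Mbit/s quoted.

```python
def mbytes_to_mbits(mb_per_s):
    """Convert a transfer rate in MByte/s to Mbit/s (payload only)."""
    return mb_per_s * 8

print(mbytes_to_mbits(80))   # -> 640 (Mbit/s of payload at 80 MByte/s)
print(mbytes_to_mbits(118))  # -> 944, roughly the practical ceiling of 1GbE
```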

(screenshot of the transfer speed from 2020-10-27 attached)

 

Although I changed many network settings in the DSM control panel, nothing improved.

Finally, after switching back to jun's original lzma driver, I can download at full wire speed from the NAS,

but because I need hw transcoding, I can't use jun's original driver.

 

Is there any solution or experience with this issue?

 

Thank you in advance.

Edited by Catalina
