XPEnology Community

SATA controllers not recognized


JigglyJoe


Hey guys,

 

I've installed two 6-port RAID controllers in my XPEnology NAS, but unfortunately it doesn't recognize any drive I plug in.

 

I have the DS918+ model with DSM version 6.2-23739.

I'm using an ASRock Z87 Extreme4 mainboard - https://www.asrock.com/mb/Intel/Z87 Extreme4/index.de.asp

with two MZHOU PCIe SATA cards with 6 ports and the Marvell 88SE9215 chip - https://www.amazon.co.uk/MZHOU-Controller-Expansion-Marvell-88SE9215/dp/B07RMHH43W

I stumbled across a few people in this forum who are using the Marvell 88SE9215 chip.

 

Does it have something to do with it being a 6-port instead of a 4-port card?

 

Is it possible to run the system with two 6-port SATA controllers with the Marvell 88SE9215 chip?

I also have a 2-port controller with an unknown chipset, and that controller works fine - just reboot the system and the drives are recognized! But I need more than 2 ports :(

 

Thanks for helping


I notice it's a dual-chipset card: the 88SE9215 chip plus an ASM109X - I think that's the issue. I recently had a similar issue, although with a different card, and had to purchase a different one.

Reading other threads on this, it seems more important than anything that it's a single-chipset card.

 


Some notes regarding controllers:

 

- don't use cards with exotic chipsets or from unknown brands (like the typical China cards)

- if possible, make use of controllers without port multipliers

- check the forum for often-used controllers

- use HBAs (host bus adapters) and not RAID controllers (some controllers can be flashed to IT mode, which turns them into an HBA)

- search the web for used controllers from "the big" companies (LSI/Avago, Dell, Lenovo, etc.). Most of these controllers are branded cards with LSI chipsets and comparable in price to a new "China" card


  • 2 weeks later...
On 8/20/2019 at 8:39 AM, jensmander said:

Some notes regarding controllers:

 

- don't use cards with exotic chipsets or from unknown brands (like the typical China cards)

- if possible, make use of controllers without port multipliers

- check the forum for often-used controllers

- use HBAs (host bus adapters) and not RAID controllers (some controllers can be flashed to IT mode, which turns them into an HBA)

- search the web for used controllers from "the big" companies (LSI/Avago, Dell, Lenovo, etc.). Most of these controllers are branded cards with LSI chipsets and comparable in price to a new "China" card

 

@jensmander I have ordered such a card instead of the PCIe SATA controller I am using now.

 

What is the best approach to moving from the internal SATA ports to the new HBA card? Is there a way of doing it without having to reinstall the OS?

 

Can I move them from the internal ports to the HBA card and boot like nothing happened? Do they need to be plugged in in the same order, etc.?


Make sure the HBA works correctly, then just shut the machine down, plug the drives into the HBA, disable the internal ports and boot up. I recommend disabling the internal ports in case you exceed the max number of drives in Synology with the new card. For example, if you have 6 onboard ports and 8 on an LSI card, some drives (the last 2) may not be visible from the LSI card, since the 6 onboard ports will still be taking up slots in the OS (e.g. the 3615 only supports 12 drives out of the box).
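Before swapping, it can help to check how many drive slots DSM is currently configured for; a minimal check over SSH (the file path is standard DSM, the "12" is just the typical stock value):

grep maxdisks /etc.defaults/synoinfo.conf
# typical output on a stock 3615 image: maxdisks="12"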


On 8/30/2019 at 6:20 AM, richv31 said:

Make sure the HBA works correctly, then just shut the machine down, plug the drives into the HBA, disable the internal ports and boot up. I recommend disabling the internal ports in case you exceed the max number of drives in Synology with the new card. For example, if you have 6 onboard ports and 8 on an LSI card, some drives (the last 2) may not be visible from the LSI card, since the 6 onboard ports will still be taking up slots in the OS (e.g. the 3615 only supports 12 drives out of the box).

 

Awesome, thanks! :)


Hi Guys!

 

Since I'm the starter of this topic, I wanted to tell you what I've done :)

I learned that my previously used SATA controllers mentioned above didn't work and that I have to use an HBA controller from a known brand.

So I searched for one and finally bought an 8-port "HP H220 IT Mode LSI 9205-8i PCIe 3.0 HBA" controller from HP.

The one I bought from eBay was this one: https://www.ebay.com/c/22028264506

It came with all the needed cables.

 

I just installed it, plugged my drives in, and all my drives were recognized :)

On my mainboard I'm using all 8 ports with 8 drives.

On the controller I tested all 8 slots and they are all working, so now I can use 16 drives simultaneously 😮

 

Edited by JigglyJoe

  • 4 weeks later...

I did some searching for alternatives to the old LSI 8-port SAS controllers and 4-port AHCI controllers - namely "cheap" 8-port AHCI controllers without port multipliers - and found 2 candidates.

(Using AHCI makes you independent of external/additional drivers: if you fall back to just the drivers Synology provides, AHCI will still work in the 918+ image.)

 

IO Crest SI-PEX40137 (https://www.sybausa.com/index.php?route=product/product&product_id=1006&search=8+Port+SATA)

1 x ASM1806 (PCIe bridge) + 2 x Marvell 9215 (4-port SATA)

~$100

 

QNINE 8 Port SATA Card (https://www.amazon.com/dp/B07KW38G9N?tag=altasra-20)

1 x ASM1806 (PCIe bridge) + 4 x ASM1061 (2-port SATA)

~$50

 

But both "only" use max 4 PCIe 2.0 lanes on the ASM1806, and 2 PCIe lanes per 4 SATA ports (Marvell 9225) or 1 lane per 2 SATA ports (ASM1061), so they might be OK for HDDs but maybe not deliver enough throughput for SSDs on all 8 ports.

The good thing is that, as AHCI controllers, they will work with the drivers Synology has in its DSM, so no dependencies on additional drivers.

ASM1806, https://www.asmedia.com.tw/eng/e_show_products.php?item=199&cate_index=168

Marvell 9215, https://www.marvell.com/storage/system-solutions/assets/Marvell-88SE92xx-002-product-brief.pdf

ASM1061, https://www.asmedia.com.tw/eng/e_show_products.php?cate_index=166&item=118

 

If they turn out to be good enough, there might be newer/better ones: there are better/newer PCIe bridge chips and 2/4-port SATA controllers able to use more PCIe lanes (making it 1 lane per SATA port).

 

I bought the IO Crest SI-PEX40137 as it has the better-known/tested Marvell 9215 (there are lots of 4-port SATA cards with it),

but I only tested it briefly to see if it's working as an AHCI controller as intended - it does, and with lspci it looks like having 2 x Marvell 9215 in the system - all good.
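For reference, the two chips can also be listed directly by PCI vendor:device ID (1b4b is Marvell's vendor ID; a quick check, assuming the stock pciutils lspci):

lspci -d 1b4b:9215
# should print two device lines, one per 88SE9215 on the card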

The SFF-8087 connector and the delivered cables work as intended; the same cable worked on my "old" LSI SAS controller, so nothing special about the connector or the cables. They are ~1 m long.

There are LEDs for every SATA port on the controller (as we are used to from the LSI controllers).

I haven't tested the performance yet. I have 3 older 50 GB SSDs, so even with these as RAID0 I will not be able to max it out, but at least I will see if there is any unexpected bottleneck. As I'm planning to use it for old-fashioned HDDs, I guess it will be OK (and if not, it will switch places with the good old 8-port SAS card in the system doing backups).

 

EDIT:

After reviewing the data of the cards again, my liking for the 9215-based card has vanished. It's kind of a fake with its PCIe 4x interface, as the Marvell 9215 is a PCIe 1x chip, and as there are two of them the card can only use two PCIe lanes. So when it comes to performance, the ASM1061-based card should be the winner, as it uses four 2-port controllers and every controller uses one PCIe lane, making full use of the card's PCIe 4x interface.

So it's 500 MByte/s for four SATA ports (max 6G !!! per port) on the 9215-based card and 500 MByte/s for two SATA ports on the ASM1061-based card.
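As a back-of-envelope check (assuming ~500 MByte/s of usable bandwidth per PCIe 2.0 lane):

9215-based card:    2 x 9215 @ 1 lane each -> 2 lanes -> 500 MByte/s shared by each group of 4 ports
ASM1061-based card: 4 x 1061 @ 1 lane each -> 4 lanes -> 500 MByte/s shared by each pair of ports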

The ASM1061-based card can be found as the SA3008 under a lot of different labels - whether the quality and reliability can hold up against the old trusty LSI cards is another question.

 

A better comparison of the Marvell chips:

https://www.marvell.com/content/dam/marvell/en/public-collateral/storage/marvell-storage-88se92xx-product-brief-2012-04.pdf

(Conclusion: the 9230 and 9235 should be the choice for a 4-port controller instead of the 9215.)

 

edit2: The ASM1806/ASM1061 card is different but also has a design flaw: the ASM1806 PCI Express bridge only has 2 lanes on its "upstream" port (to the PCIe root, aka the computer chipset) and only supports PCIe 2.0, so in the end it is capped at ~1000 MB/s for all 8 drives. Both cards end up with measly performance and are unable to handle SSDs in a meaningful way.

It looks like two Marvell 9230/9235 cards would perform better than one of these two 8-port cards.

 

Edited by IG-88

On 9/5/2019 at 1:58 PM, JigglyJoe said:

On the controller I tested all 8 slots and they are all working, so now I can use 16 drives simultaneously

 

Only if you edit synoinfo.conf manually - and after a bigger DSM update you will be back to 12 ports after the first boot (if you forget to redo it), so the first start will result in a "broken" RAID set missing all drives above 12.

Btw, the limit is 24 under the normal conditions of DSM as we use it.
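A sketch of the entries involved - values purely illustrative for 16 drives, and both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf need the change (which is exactly what a bigger update reverts):

maxdisks="16"
internalportcfg="0xffff"    # bitmask, one bit per internal drive slot (16 bits set)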


  • 6 months later...
6 hours ago, GoBlin1983 said:

Who got a positive cross between a snake and a hedgehog?

Not really. When it comes to using SSDs only (SATA-based), mpt-based controllers are still mandatory; when using HDDs with added SSDs for cache or a smaller fast volume, AHCI can be an alternative (the SSDs would be connected to onboard SATA and the HDDs to added AHCI controllers).

 

ATM a JMB585-based (5-port) card would be my favorite, as it supports PCIe 3.0 and should have two times the throughput of the Marvell 92xx-based cards (PCIe 2.0).
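Rough numbers behind that estimate (assuming ~500 MByte/s per PCIe 2.0 lane and ~985 MByte/s per PCIe 3.0 lane):

Marvell 92xx card (PCIe 2.0 x2): ~1000 MByte/s for the whole card
JMB585 card (PCIe 3.0 x2):       ~1970 MByte/s for its 5 ports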

Sadly it's still undelivered because of the corona crisis.

I also intend to test an M.2-based card of that type and an M.2-to-PCIe-4x adapter (for using normal PCIe cards).

I run a micro-ATX board with 1x PCIe 16x, 1x PCIe 4x, 2x PCIe 1x and one M.2 (PCIe 4x).

The 8-port controller might be OK-ish for a 1 GBit network, but for 10G or even SSD-based systems the outdated Marvell 92xx won't do it.

Still a safe bet are LSI/Broadcom mpt controllers: plenty of bandwidth with PCIe 8x, and with the newly released kernel source from Synology I got a working mpt2sas (27.0.0.1) driver that will support most of the mpt controllers (I don't know yet about mpt2/mpt3 for 3615/17, but Synology themselves had a better mpt3sas driver with 24992 - now 22.0.2.0; they started DSM 6.2.0 with v13.0.0.0).

I had no time to really test the AHCI controllers with 4-5 SSDs to see if there is a difference as the numbers suggest, and I still wait for the JMB585 controllers (use the forum search for jmb585 to find some info about products and estimated performance values).

 

So no hard facts, just more ideas and things to test.


  • 1 month later...
On 4/30/2020 at 6:01 PM, IG-88 said:

 

I had no time to really test the AHCI controllers with 4-5 SSDs to see if there is a difference as the numbers suggest, and I still wait for the JMB585 controllers (use the forum search for jmb585 to find some info about products and estimated performance values).

 

 

Have you already tested the JMB585 controller?

I have a mini-ITX board, and I would prefer an M.2 card to expand the number of ports available.


On 6/27/2020 at 5:07 PM, Luis Aleixo said:

Have you already tested the JMB585 controller?

 

Yes, and I also found a performance test for the PCIe card with the JMB585; it performed as hoped, so I built both (PCIe and M.2) into my 12-disk system and had no problems. After about a week I found that 16 ports in a 12-disk system are a waste and replaced the M.2 card with a 2-port PCIe 1x card, making it 6 ports onboard, 5 on the JMB585 PCIe 2x card and one disk on the PCIe 1x card.

The M.2 card is now in the testing system.

I did not run heavy tests with all 5 disks connected to see if the "missing" heat sink would make any difference.

 

Quote

I have a mini-ITX board, and I would prefer an M.2 card to expand the number of ports available.

 

Besides the possible heat problem (I mounted the CPU cooler in a way that the open side gives airflow to the M.2 card), it's worth mentioning that mechanically it's an issue.

For one, the M.2 card came loose because the Gigabyte board had just a flimsy plastic piece to fix its end to the board, and that was not enough, so I got some old board-mounting stuff and screws to make a proper metal-based fixture for the card. Second, the card is extremely thin and could break from the pressure of plugging in SATA cables while mounted, so I placed something more solid beneath the M.2 card so it doesn't bend down when plugging in cables. It's not really made for handling cables in a test system - what I use as a test system is usually my backup hardware, so in normal operation it's not plugged in and out that often, but the M.2 card should be handled very carefully.

Also, there might be more than one brand of these M.2 cards, so one working for me does not mean all will work for everyone.

I will keep it for my backup system; if things should go wrong later, it will be just the backup that fails.

 

edit: After some time of testing I can't recommend using these (nice-looking) 5-port M.2 cards. The chip seems to overheat when really challenged with a lot of data transfer (tested 4-8 TB); it just froze at some point, and even with some thermal grease and a small cooler (getting some airflow from the CPU cooler) it did not run stable. I replaced the M.2 JMB585 with a normal PCIe JMB585 card and that one worked fine (like the 1st card I bought). I did not try an M.2-to-PCIe-4x adapter with it; it seemed too mechanically unstable and I could not find a good way to fix it in my case (too much leverage with that big card and cables attached to it).

 

Edited by IG-88

On 10/4/2019 at 1:18 AM, IG-88 said:

I did some searching for alternatives to the old LSI 8-port SAS controllers and 4-port AHCI controllers […]

 

Hi,

Do you know if DSM 918+ has a driver for the onboard SATA controller? I have a Supermicro X10SRL-F with two onboard Wellsburg AHCI controllers. They were fine passing through to XPEnology in KVM but not in ESXi. I'm kind of lost as to whether it's a driver issue in XPEnology or something wrong with ESXi. I tried to pass the Wellsburg AHCI controller through to Windows in ESXi and the disks were recognized without problem... which makes the situation even more weird.


I guess it might be something wrong with the reset method I set in passthru.map in ESXi. The Wellsburg AHCI controller was originally grayed out in ESXi; I had to edit the passthru.map file to make it possible to toggle the controller as passthrough. I have tried default, link, d3d0, and bridge, but it seems that only the d3d0 reset method prevents ESXi from graying out the controller.

Edited by sarieri

3 hours ago, sarieri said:

Supermicro X10SRL-F with two onboard Wellsburg AHCI Controllers

If it's in AHCI mode (BIOS setting?) it should work; AHCI (the driver) is part of the Synology kernel.

You can check with lspci -k in the VM whether the device is present and what driver is used.

Maybe this helps?

https://xpenology.com/forum/topic/5455-best-setup-gt-esxi-gt-xpenology-dsm52/?tab=comments#comment-47218
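For example, to pick the SATA controllers out of the full listing (class 0106 is SATA/AHCI; the exact output format depends on the lspci build):

lspci -k | grep -A 2 'Class 0106'
# a "Kernel driver in use: ahci" line under the device means the
# synology kernel bound its ahci driver to that controller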

 

 


4 hours ago, IG-88 said:

If it's in AHCI mode (BIOS setting?) it should work; AHCI (the driver) is part of the Synology kernel.

You can check with lspci -k in the VM whether the device is present and what driver is used.

Maybe this helps?

https://xpenology.com/forum/topic/5455-best-setup-gt-esxi-gt-xpenology-dsm52/?tab=comments#comment-47218

 

 

It's in AHCI mode, 100% sure.


7 hours ago, IG-88 said:

If it's in AHCI mode (BIOS setting?) it should work; AHCI (the driver) is part of the Synology kernel.

You can check with lspci -k in the VM whether the device is present and what driver is used.

Maybe this helps?

https://xpenology.com/forum/topic/5455-best-setup-gt-esxi-gt-xpenology-dsm52/?tab=comments#comment-47218

 

 

Is it possible that XPEnology has the driver for the controller, but I need to edit the grub.cfg and change SataPortMap and DiskIdxMap so that it can recognize it properly?
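For reference, these switches live in the loader's grub.cfg; a hypothetical fragment with illustrative values only (other arguments omitted):

# SataPortMap=6 : one digit per controller, here the first controller exposes 6 ports
# DiskIdxMap=00 : two hex digits per controller, here the first controller's disks start at slot 1
set sata_args='DiskIdxMap=00 SataPortMap=6'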


4 hours ago, sarieri said:

Is it possible that XPEnology has the driver for the controller, but I need to edit the grub.cfg and change SataPortMap and DiskIdxMap so that it can recognize it properly?

The step with lspci was to check that the driver is used for the device; next would be to check /var/log/dmesg for the disk(s) connected to this controller.


19 minutes ago, IG-88 said:

The step with lspci was to check that the driver is used for the device; next would be to check /var/log/dmesg for the disk(s) connected to this controller.

Sarieri@Sarieri:/$ lspci -k
0000:00:00.0 Class 0600: Device 8086:7190 (rev 01)
        Subsystem: Device 15ad:1976
        Kernel driver in use: agpgart-intel
0000:00:01.0 Class 0604: Device 8086:7191 (rev 01)
0000:00:07.0 Class 0601: Device 8086:7110 (rev 08)
        Subsystem: Device 15ad:1976
0000:00:07.1 Class 0101: Device 8086:7111 (rev 01)
        Subsystem: Device 15ad:1976
0000:00:07.3 Class 0680: Device 8086:7113 (rev 08)
        Subsystem: Device 15ad:1976
0000:00:07.7 Class 0880: Device 15ad:0740 (rev 10)
        Subsystem: Device 15ad:0740
0000:00:0f.0 Class 0300: Device 15ad:0405
        Subsystem: Device 15ad:0405
0000:00:11.0 Class 0604: Device 15ad:0790 (rev 02)
0000:00:15.0 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.1 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.2 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.3 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.4 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.5 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.6 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:15.7 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.0 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.1 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.2 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.3 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.4 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.5 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.6 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:16.7 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.0 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.1 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.2 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.3 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.4 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.5 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.6 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:17.7 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.0 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.1 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.2 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.3 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.4 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.5 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.6 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:00:18.7 Class 0604: Device 15ad:07a0 (rev 01)
        Kernel driver in use: pcieport
0000:02:00.0 Class 0c03: Device 15ad:0774
        Subsystem: Device 15ad:1976
        Kernel driver in use: uhci_hcd
0000:02:01.0 Class 0c03: Device 15ad:0770
        Subsystem: Device 15ad:0770
        Kernel driver in use: ehci-pci
0000:02:03.0 Class 0106: Device 15ad:07e0
        Subsystem: Device 15ad:07e0
        Kernel driver in use: ahci
0000:03:00.0 Class 0200: Device 8086:10fb (rev 01)
        Subsystem: Device 8086:000c
        Kernel driver in use: ixgbe
0000:0b:00.0 Class 0200: Device 8086:10fb (rev 01)
        Subsystem: Device 8086:000c
        Kernel driver in use: ixgbe
0000:13:00.0 Class 0106: Device 8086:8d02 (rev 05)
        Subsystem: Device 15d9:0832
        Kernel driver in use: ahci
0001:00:12.0 Class 0000: Device 8086:5ae3 (rev ff)
0001:00:13.0 Class 0000: Device 8086:5ad8 (rev ff)
0001:00:14.0 Class 0000: Device 8086:5ad6 (rev ff)
0001:00:15.0 Class 0000: Device 8086:5aa8 (rev ff)
0001:00:16.0 Class 0000: Device 8086:5aac (rev ff)
0001:00:18.0 Class 0000: Device 8086:5abc (rev ff)
0001:00:19.2 Class 0000: Device 8086:5ac6 (rev ff)
0001:00:1f.1 Class 0000: Device 8086:5ad4 (rev ff)
0001:01:00.0 Class 0000: Device 1b4b:9215 (rev ff)
0001:02:00.0 Class 0000: Device 8086:1539 (rev ff)
0001:03:00.0 Class 0000: Device 8086:1539 (rev ff)

So, the AHCI controller is found by DSM (0000:13:00.0, Device 8086:8d02). And here is the output of /var/log/dmesg:

Sarieri@Sarieri:/$ vi /var/log/dmesg
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.5: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.5: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.6: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.6: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.7: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.7: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.3: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.3: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.4: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.4: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.5: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.5: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.6: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.6: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.7: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.7: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.2: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.2: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.3: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.3: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.4: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.4: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.5: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.5: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.6: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.6: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.7: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.7: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[Sun Jun 28 01:23:04 2020] pci 0000:00:0f.0: BAR 6: assigned [mem 0xc0400000-0xc0407fff pref]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.5: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.5: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.6: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.6: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.7: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:17.7: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.2: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.2: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.3: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.3: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.4: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.4: BAR 13: failed to assign [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.5: BAR 13: no space for [io  size 0x1000]
[Sun Jun 28 01:23:04 2020] pci 0000:00:18.5: BAR 13: failed to assign [io  size 0x1000]

It seems some of the I/O resources failed to assign (I don't really understand what that means, but it seems to be the problem here).

My motherboard is the X10SRL-F with two Wellsburg AHCI controllers. I passed through one of them, which has 6 disks connected, and I also have two PCIe NVMe drives passed through to DSM (RDM passthrough, so DSM won't really see them as actual NVMe disks).

Edited by sarieri
Link to comment
Share on other sites

4 minutes ago, sarieri said:

So, the AHCI controller is found by DSM (0000:13:00.0, Device 8086:8d02).

And the proper driver is loaded/used (ahci).

Next step: dmesg - compare this with the dmesg results later when tuning the switches in grub.cfg.
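For example (the pattern is just an illustration - anything ata/ahci-related is interesting here):

grep -E 'ahci|ata[0-9]' /var/log/dmesg
# disks detected on the controller show up as ataN.00: ... lines,
# followed by the sd device registration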

