
Proxmox: HBA passthrough & second virtual controller not detected


Guest


I'm running DSM 6.2.3-25426 Update 3 / DS3615xs on Proxmox. I have my actual HBA card (a JMB585-based one) connected to the VM via PCI passthrough, and it works perfectly. However, I added another disk through Proxmox:

 

[screenshot: the new virtual disk in the Proxmox VM hardware view]

 

However, no matter what I do, DSM doesn't see that disk in the UI. I tried changing the "SCSI Controller" type:

  • VMware PVSCSI - the disk is visible in the CLI only (fdisk)
  • VirtIO - not detected
  • LSI 53C895A - not detected

 

Some people suggested VirtIO, but DSM doesn't seem to support it. LSI 53C895A was mentioned in multiple tutorials for DSM, but 6.2.3-25426 does not detect disks connected to it. The only one that sort of works is VMware PVSCSI; however, the disk is not visible in Storage Manager. DSM sees it as an eSATA disk, to an extent:

[screenshots: DSM showing the QEMU disk as an eSATA device]

 

I checked my sata_args, but I have no idea if I should change anything in them to make it work:

set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1 SasIdxMap=0'

 

 

Can someone suggest how to get that additional disk working?

 

 


Thank you for the link. That image indeed contains the virtio driver, but it doesn't solve the problem. Now I can use VirtIO as the bus, and SMART shows something (i.e. emulated SMART data) when I use sata0 instead of scsi0. However, the disk is still seen as external and is thus unusable for the array :(


What are your sata_args? No matter what I do with them, there's no change in the UI, and it still shows the QEMU disk as external.

To me it would be logical to have:

DiskIdxMap=0000 SataPortMap=57 SasIdxMap=0

as that is consistent with https://github.com/cake654326/xpenology/blob/master/synoconfigs/Kconfig.devices, but after rebooting nothing changed.

 

 

EDIT

I just realized "sata_args" is NOT used when booting as baremetal (which is also, AFAIK, the recommended mode for KVM). Thus my config from above results in a semi-sensible configuration:

[screenshot: resulting disk numbering in DSM]

As the first bay is currently physically empty in my setup, this makes sense for the disk numbering. However, the QEMU disk is still seen as eSATA by DSM, grrrr...


# Sata0 is dsm port0, Sata1 = dsm Port1, port 2-5 is reserved for sata, scsi drives start at port 6
set extra_args_918='DiskIdxMap=0F00 SataPortMap=15'

BTW, I'm booting synoboot from USB.

Proxmox config line:

args: -device 'nec-usb-xhci,id=usb-bus0,multifunction=on' -drive 'file=/var/lib/vz/images/150/synoboot.img,media=disk,format=raw,if=none,id=synoboot' -device 'usb-storage,bus=usb-bus0.0,port=1,drive=synoboot,id=synoboot,bootindex=999,removable=on'

 


Did you modify synoinfo.conf (in /etc and /etc.defaults) by any chance? I finally managed to "brute-force" DSM into showing that QEMU disk as an internal one. It shows up as the 13th disk (above the 12-disk maximum), which is obviously invalid (for now):

 

[screenshots: the QEMU disk shown as internal "Disk 13" in Storage Manager]

 

 

I forced it by simply telling DSM to treat all storage devices as internal:

# cat /etc.defaults/synoinfo.conf | grep portcfg
esataportcfg="0"
internalportcfg="0xffffffffff"
usbportcfg="0"

 

 

However, the defaults they provide should actually work (but somehow they don't):

# cat /etc.defaults/backup_synoinfo.conf | grep portcfg
esataportcfg="0xff000"
internalportcfg="0xfff"
usbportcfg="0x300000"

 

I saw https://hedichaibi.com/fix-xpenology-problems-viewing-internal-hard-drives-as-esata-hard-drives/ and from my dmesg I can see that the kernel detects 12 ATA ports (7+5) and then 8 USB buses:

0000 0000 0000 0000 0000 0000 1111 1111 0000 0000 0000 ==> Usb ports   => 0xff000
0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 ==> Sata ports  => 0xfff
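
To reproduce that count yourself, standard tools are enough; the greps below are my own quick sketch, so adjust the patterns to whatever your kernel actually prints:

dmesg | grep -o 'ata[0-9]\+' | sort -u | wc -l    # distinct ATA ports enumerated
dmesg | grep -c 'new USB bus registered'          # USB buses registered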

However, setting them like that makes the QEMU disk land as eSATA again. I need to dig deeper into how this actually works...

 

 

 

 

That aside: 

1 hour ago, loomes said:


# Sata0 is dsm port0, Sata1 = dsm Port1, port 2-5 is reserved for sata, scsi drives start at port 6
set extra_args_918='DiskIdxMap=0F00 SataPortMap=15'

BTW, I'm booting synoboot from USB.

I'm booting from USB too and have a similar config for the USB image. However, your map is strange, or I don't get how it works. 0F00 means the first controller will start numbering from index 15 (0Fh = 15d, zero-based), so its first port will be sdp, and the second controller will number from sda. Then SataPortMap=15 indicates that the 1st controller has 1 port and the 2nd controller has 5 ports...


It works!

I think I finally understood how this whole thing comes together and achieved exactly the configuration I wanted:

 

[screenshot: the final disk layout in Storage Manager]

 

No synoinfo.conf modification is needed here. For this to work I had to set the boot config to include:

DiskIdxMap=0C0005 SataPortMap=157
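
For reference, this is roughly where those values land in the loader's grub.cfg; a sketch based on my earlier sata_args line, so keep whatever other flags your image already has (and remember the note above that the baremetal menu entry reads its own args variable instead):

# sketch - my earlier sata_args line with only the two map values changed
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C0005 SataPortMap=157 SasIdxMap=0'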

 

 

 

But HOW did I get these values?

What didn't make sense to me was how "maxdisks" (in synoinfo.conf, patched to 12 by Jun's loader), "SataPortMap", and "DiskIdxMap" play together. Oftentimes posts just list these values as the correct ones without explanation. The kconfig (https://github.com/cake654326/xpenology/blob/master/synoconfigs/Kconfig.devices) is often cited as an explanation. While it does explain things, it has a significant typo in DiskIdxMap and doesn't take the Xpenology synoboot hacking into consideration. It is actually pretty simple:

  • maxdisks tells DSM how many disks the UI should enumerate and show in Storage Manager; it seems to have nothing to do with disk detection by the OS. That's why most configs just default to 12, as having this number higher than the actual number of slots will not cause problems.
  • SataPortMap is a list of digits (one character = one entry) read in order, telling DSM how many ports per controller to initialize (max 9; in my example, 1 port on the 1st controller, 5 ports on the 2nd, and 7 on the 3rd).
  • DiskIdxMap is a list of hex pairs (two characters = one entry) instructing DSM how to map the numbered ports of each controller to sda-sdz devices. The order of values here is the same as in SataPortMap. In my example above it maps like so (the bash sketch after this list replays these rules):
    • 0C (dec: 12)
      • 1st controller starts mapping from 13th position
      • 1st controller had 1 port in SataPortMap
      • Result: [sdm]
    • 00 (dec: 0)
      • 2nd controller starts mapping from 1st position
      • 2nd controller had 5 ports in SataPortMap
      • Result: [sda] [sdb] [sdc] [sdd] [sde]
    • 05 (dec: 5)
      • 3rd controller starts mapping from 6th position
      • 3rd controller had 7 ports in SataPortMap
      • Result: [sdf] [sdg] [sdh] [sdi] [sdj] [sdk] [sdl]
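
If you want to double-check a mapping before rebooting, here is a small bash sketch of my own (not part of any loader, just replaying the rules above); the two variables at the top are the values from this post:

#!/bin/bash
# Simulate how the kernel assigns sdX letters from DiskIdxMap + SataPortMap
DiskIdxMap=0C0005
SataPortMap=157
letters=abcdefghijklmnopqrstuvwxyz
for (( c = 0; c < ${#SataPortMap}; c++ )); do
    ports=${SataPortMap:c:1}              # one digit = port count of controller c+1
    start=$(( 16#${DiskIdxMap:2*c:2} ))   # one hex pair = first device index (0-based)
    devs=""
    for (( p = 0; p < ports; p++ )); do
        devs+=" sd${letters:start+p:1}"
    done
    echo "controller $(( c + 1 )):$devs"
done

Running it prints "controller 1: sdm", "controller 2: sda sdb sdc sdd sde", and "controller 3: sdf sdg sdh sdi sdj sdk sdl", matching the breakdown above.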

With the configuration above (as on the screenshot), the system sees a direct mapping between bays (even empty ones) and sdX devices:

# fdisk -l /dev/sd? | grep 'Disk /'
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdf: 111.8 GiB, 120034123776 bytes, 234441648 sectors

(sda and sde are missing as the ports 1 and 5 of the PCI controller aren't connected; sdf is the 1st port of the QEMU controller)

 

sdm is missing from the list as it is "renamed" to synoboot:

# fdisk -l /dev/synoboot | grep 'Disk /'
Disk /dev/synoboot: 50 MiB, 52428800 bytes, 102400 sectors

 

 

But I said I have two controllers... or do I?

An important missing bit, which doesn't seem to be fully explained anywhere, is that the first controller is (always?) the one DSM boots from. Since USB devices are also SCSI-based (like SATA):

[    5.500034] sd 13:0:0:0: [synoboot] Attached SCSI removable disk

This means they will appear as sdX too. This is exactly why DiskIdxMap maps the first controller to start from the 13th position and why SataPortMap should list a single port for the first controller. In such a configuration the kernel will always put the boot device (residing on the 1st controller) outside of the 12 maxdisks, making it invisible in the UI. Then the 2nd controller (which in my case is the PCI controller) starts normally from sda, making the UI numbering sane (starting from "Disk 1"):

[screenshot: Storage Manager numbering starting from "Disk 1"]
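
You can verify where the boot stick landed with standard tools (nothing loader-specific; the grep pattern matches the kernel line shown above):

dmesg | grep -i 'attached scsi removable disk'
ls -l /dev/synoboot*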

This can easily be confirmed by setting SataPortMap to 057 - it will instantly cause a nice kernel panic :D

[screenshot: kernel panic after booting with SataPortMap=057]

 

 

What about eSATA & synoinfo.conf?

On top of all the above, the "internalportcfg" / "esataportcfg" / "usbportcfg" bitmasks are used to decide which of the sda-sdz devices are treated as internal / eSATA / USB when something is connected to DSM. With the settings above, the defaults for the DS3615xs make perfect sense (there's a small sanity-check sketch below):

  • internalportcfg=0xfff -> first 12 devices (sda-sdl) should be treated as internal

  • esataportcfg=0xff000 -> devices 13-20 (sdm-sdt) should be treated as eSATA

  • usbportcfg=0x300000 -> devices 21-22 (sdu-sdv) should be treated as USB

According to the data sheet, the DS3615xs has 12 bays and 2 USB ports.
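
As a sanity check of that decomposition: each bit of a mask corresponds to one sdX slot (bit 0 = sda), so the three defaults can be rebuilt with plain shell arithmetic (my own sketch):

# internal: bits 0-11, eSATA: bits 12-19, USB: bits 20-21
printf 'internalportcfg: %#x\n' $(( (1 << 12) - 1 ))
printf 'esataportcfg:    %#x\n' $(( ((1 << 20) - 1) & ~((1 << 12) - 1) ))
printf 'usbportcfg:      %#x\n' $(( ((1 << 22) - 1) & ~((1 << 20) - 1) ))
# prints 0xfff, 0xff000 and 0x300000 respectively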

 

Any unmapped devices or controllers will be attached with letters after the mapped boundary. So connecting a USB stick in my configuration results in sdu:

# fdisk -l /dev/sd? | grep 'Disk /'
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdf: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk /dev/sdu: 14.6 GiB, 15682240512 bytes, 30629376 sectors

 

...and a correctly mapped entry in the UI:

[screenshot: the USB stick correctly shown in the UI]


1 month later...
On 5/30/2021 at 1:04 PM, Guest said:

[the full "It works!" post quoted above in its entirety]

Thank you, such a good explanation of how to map SATA devices.


3 months later...

Looks like I need some help on this. I have DSM 6.2 running on Proxmox 7, and everything works OK so far. I now wanted to pass through a SATA controller (Marvell 88SE9230). If I use the GUI to pass the controller through by adding it to the hardware of my VM, DSM always tries to boot from the new disk attached to that controller, and as there is no OS on it, it asks to install fresh :-(

Somewhere else I found the advice to not use the GUI but to add the controller via the CLI (like so: qm set 100 -hostpci0 01:00, instead of adding the controller with its full ID 0000:01:00.0). This way the system boots, and I can see some messages in dmesg regarding the controller as well as the disk attached to it. But the disk doesn't appear in Storage Manager :-(
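
(For anyone following along: as far as I know, the difference between those two forms is that omitting the function passes all functions of the PCI device, while .0 pins a single function; both are plain qm syntax, and VM ID 100 is just the example from above:)

qm set 100 -hostpci0 01:00        # pass every function of the device
qm set 100 -hostpci0 0000:01:00.0 # pass only function 0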

I also tried changing the values for DiskIdxMap and SataPortMap - nothing changed, regardless of the values I used.

Any idea how to solve this issue? Maybe some of you also had this issue and could give me some direction ;-)
Thanks a lot in advance!

 

Edit1: small update. I found the disk attached to the controller showing up as an external disk (which is why it isn't shown in Storage Manager). So the passthrough seems to be OK; I just need to figure out how to make this disk internal.

 

Edit2: I tried several combinations of DiskIdxMap and SataPortMap without success. The mapping is always the same. Am I editing the correct file? (/volumeUSB1/usbshare1-1/grub/grub.cfg)

 

-->  DiskIdxMap=0C0005 SataPortMap=157

root@NAS:~# fdisk -l /dev/sd? | grep 'Disk /'
Disk /dev/sdh: 16 GiB, 17179869184 bytes, 33554432 sectors
Disk /dev/sdp: 232.9 GiB, 250058268160 bytes, 488395055 sectors
Disk /dev/sdu: 50 MiB, 52428800 bytes, 102400 sectors

 

-->  DiskIdxMap=0C0005 SataPortMap=144

root@NAS:~# fdisk -l /dev/sd? | grep 'Disk /'
Disk /dev/sdh: 16 GiB, 17179869184 bytes, 33554432 sectors
Disk /dev/sdp: 232.9 GiB, 250058268160 bytes, 488395055 sectors
Disk /dev/sdu: 50 MiB, 52428800 bytes, 102400 sectors

 

-->  DiskIdxMap=0000 SataPortMap=144

root@NAS:~# fdisk -l /dev/sd? | grep 'Disk /'
Disk /dev/sdh: 16 GiB, 17179869184 bytes, 33554432 sectors
Disk /dev/sdp: 232.9 GiB, 250058268160 bytes, 488395055 sectors
Disk /dev/sdu: 50 MiB, 52428800 bytes, 102400 sectors

 


1 month later...

Is all this information relevant to DSM 7.1 and the RedPill loader too? I am building on Proxmox 7.1-7 (the current version) with 14 SATA ports (6 onboard and 8 on an HBA card); 1 is for the SSD that boots Proxmox, which leaves a lucky 13 empty SATA ports. I am building a DS3622 and want to use at least 12 drives. Obviously I want to pass the entire controller directly to DSM so it can handle the RAID building and maintenance properly. Is this possible?


16 hours ago, phone guy said:

I am building a DS3622 and want to use at least 12 drives. Obviously I want to pass the entire controller directly to DSM so it can handle the RAID building and maintenance properly.

I guess, as it's controller (PCIe device) based, you would only be able to do that for the added 8-port controller. You already used one port of the 6-port onboard controller for the hypervisor, so you can't hand that controller/device to a VM as a whole (it's already claimed by the hypervisor), and you would need to do raw mapping of those disks to the VM (at least that's the way it works in ESXi, and IMHO it will be the same with KVM/Proxmox).

And when doing that already, why bother with LSI SAS drivers in DSM? Just raw map those disks too, be free of that, and choose whatever platform you want (like 918/920?).

If the hypervisor handles the controller, it will also handle disk errors and SMART and inform the user about problems (and you need that anyway for the hypervisor's own disk). In some cases that might even be beneficial, like when using hardware-based RAID: configure the disks as a RAID volume outside DSM and make DSM see only one big disk. The hypervisor has fewer limits than DSM when it comes to added drivers and tools. An example might be an HP Smart Array P410 with a RAID 5 configured: the hypervisor would take care of the disks and the state of the RAID, and DSM would only see one big disk and have no RAID to maintain. That's one way to work around the problems with HPE "non-MicroServer" machines with P4xx onboard and DSM's missing capabilities in that regard.

 

 


4 hours ago, IG-88 said:

[the full reply quoted above in its entirety]

OK, I am a little confused. I thought that was the wrong approach when using PVE (Proxmox) to run DSM as a VM? I was under the impression you wanted PVE to pass the ports or the PCI card through to DSM, so DSM would have all the control over building and maintaining the array. That way DSM gets all the health/SMART data to alert if drives fail, to rebuild the RAID, to use SHR and expand arrays, etc.

 

If you let PVE (Proxmox) virtualize (emulate) the drives, or even an array, all the maintenance would be handled by PVE or by the card in question. Isn't that wrong?

 

I am still learning Proxmox and having some issues with this server board and its 2 NIC ports. It has 2 NICs + 1 IPMI console, and of the 2 NICs only 1 seems to get IPv4 while the other gets IPv6; they switch, and it seems like whichever gets IPv4 first forces the other onto IPv6? Confusing. Plus, the IPMI console works, but they (Supermicro) stopped using admin/admin for the IPMI login in 2020, and now it's a unique password that should be on a sticker on the motherboard, and of course mine is not there...

 

I think it's "simple" to pass the whole PCI device to a VM (DSM in this case). I also thought PVE could pass through individual ports of the onboard SATA (5 of the 6, since port 6 holds its boot drive), but I am still trying to figure all that out.

 

Anyway, with your method above, it wouldn't matter if the HBA/SAS card was in IT mode or not, since (if I am understanding you correctly) the drives would be handled by the controller card and then passed to DSM, so all maintenance would be done there. Even a RAID would be passed over as 1 big volume...? Again, to me that does not seem right or safe? (Please correct me if I am wrong.)

 

Thank you @IG-88 for replying; you always take the time to explain things, where most people here simply post cryptic one-line replies.

Link to comment
Share on other sites

5 hours ago, IG-88 said:

be free of that and choose whatever platform you want (like 918/920?)
 

I thought the 3622 was the most stable? And I am not sure how well the 920 is coming along; it seems like it's complicated to get the SATA map working? I am sure that will get simpler. In the big picture, all I see among the different builds is that the HW transcoding abilities of the 918/920 are not in the 36xx models? (I think you told me that in another thread, actually.)

Any reason to pick one build version over another?
