
Develop and refine the DVA1622 loader



50 minutes ago, cmathias said:

Hello everyone,

 

Is it possible to install DVA1622 bare-metal on a ProLiant 120 G6?

I tried all the loaders, but it always ends in failure when installing the DSM .pat file.

 

To test the ProLiant, I successfully installed a 3622xs.

 

THANKS

Did you try any other DT model?


As I mentioned in another thread...

 

I was able to load the driver (on an i7-10700K running DVA1622) and /sys/kernel/debug/dri/0/i915_frequency_info says:

Video Turbo Mode: yes
HW control enabled: yes
SW control enabled: no
PM IER=0x00000070 IMR=0xffffff8f ISR=0x00000000 IIR=0x00000000, MASK=0x00003fde
pm_intrmsk_mbz: 0x80000000
GT_PERF_STATUS: 0x00000000
Render p-state ratio: 0
Render p-state VID: 0
Render p-state limit: 255
RPSTAT1: 0x0a800000
RPMODECTL: 0x00000d92
RPINCLIMIT: 0x00002c88
RPDECLIMIT: 0x00004fb0
RPNSWREQ: 350MHz
CAGF: 350MHz
RP CUR UP EI: 1990 (2653us)
RP CUR UP: 28 (37us)
RP PREV UP: 0 (0us)
Up threshold: 95%
RP CUR DOWN EI: 20281 (27041us)
RP CUR DOWN: 28 (37us)
RP PREV DOWN: 0 (0us)
Down threshold: 85%
Lowest (RPN) frequency: 350MHz
Nominal (RP1) frequency: 350MHz
Max non-overclocked (RP0) frequency: 1200MHz
Max overclocked frequency: 1200MHz
Current freq: 350 MHz
Actual freq: 350 MHz
Idle freq: 350 MHz
Min freq: 350 MHz
Boost freq: 1200 MHz
Max freq: 1200 MHz
efficient (RPe) frequency: 350 MHz
Current CD clock frequency: 337500 kHz
Max CD clock frequency: 675000 kHz
Max pixel clock frequency: 675000 kHz

and lsmod | grep i915:

i915                 1287546  5 
drm_kms_helper        118265  1 i915
drm                   307793  6 i915,drm_kms_helper
iosf_mbi                4234  1 i915
fb                     34838  2 i915,drm_kms_helper
video                  27049  1 i915
backlight               6309  2 i915,video
button                  5152  1 i915
i2c_algo_bit            5505  1 i915
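
In case anyone wants to run the same check, this is roughly what I did (debugfs may already be mounted on your box):

mount -t debugfs none /sys/kernel/debug 2>/dev/null   # only needed if debugfs is not mounted yet
cat /sys/kernel/debug/dri/0/i915_frequency_info
lsmod | grep i915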

 

But deep learning is still not working in Surveillance Station.


So, as I already answered you, this is already identified.

 

I don't think you will find anything useful from the module/driver perspective.

Unless you are able to fake a 4th to 9th gen iGPU, there is currently something, probably hardcoded, in the SS application.

 

As already stated and discussed, I personally confirmed that simply faking the PCI ID on my 9th gen iGPU, originally not compatible with SS, made it work. But my 9th gen driver was already included in the DVA1622 loader, so it was only a PCI ID issue on my side... That is not the case on 10th gen, unless it is exactly the same iGPU, HD630/HD640 maybe...

 

We already know backported drivers/modules work on the loader. HW transcoding works. The real culprit is inside the SS application. So unless you are able to hack the application, the best option is to consider running it under Proxmox and trying to fake the PCI device ID...

 

What is the device ID for the iGPU when you run:

 

lspci -nnkkvq

 

It must begin with 8086:xxxx
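
If you don't want to scroll through the whole output, a quick filter like this should be enough to spot the [8086:xxxx] pair (just an example, the class name may differ):

lspci -nn | grep -Ei 'vga|display'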

 

It is probably 8086:9BC5

 

Edit: actually, if it is 9BC5, you are lucky; it should work with Proxmox VE, running it under a VM.

 

https://dgpu-docs.intel.com/devices/hardware-table.html

 

[screenshot of the Intel hardware table from the link above, showing 3E98 listed under Gen9 graphics]

 

Mine is 3E98; as you can see, it is still considered Gen9.

 

If you go back to my earlier post:

 

 

 

I confirmed 3E98 is not part of the original kernel, but 3E91 and 3E92 are available.

So I just had to fake the ID in Proxmox VE.
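
For reference, the override ends up looking roughly like this in the VM config (sketch only: VMID, PCI address and the pcie/q35 options depend on your setup):

# /etc/pve/qemu-server/<VMID>.conf
hostpci0: 0000:00:02.0,pcie=1,device-id=0x3e91,vendor-id=0x8086

# same thing from the CLI (pcie=1 needs the q35 machine type):
qm set <VMID> -hostpci0 0000:00:02.0,pcie=1,device-id=0x3e91,vendor-id=0x8086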

 

But you must make PCIe passthrough work with Proxmox.

Your motherboard has to support the VT-d option (enabled in the BIOS).

 

https://pve.proxmox.com/wiki/PCI_Passthrough
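
In short, the usual prerequisites look roughly like this (exact flags depend on your CPU/board, see the wiki above):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

update-grub && update-initramfs -u -k all && reboot
# then check it took effect:
dmesg | grep -e DMAR -e IOMMU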

 

Edit 2: Running bare-metal, I don't know if it is possible to fake PCI IDs... maybe it could be done inside the loader... but @Peter Suh or @pocopico would be better placed than me to answer this... I don't know if it is possible or not.

The goal would be to hard-set the PCI ID to 8086:3E91 (in our case).

Edited by Orphée

@Orphée your answer is super... I'm finally understanding something, and I'm sorry if I'm bothering you here... Yes, I'm already running it on Proxmox, and this is my device ID inside the NAS when running lspci -nnkkvq:

 

0000:06:10.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:9bc5] (rev 05) (prog-if 00 [VGA controller])
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7c76]
	Flags: bus master, fast devsel, latency 0, IRQ 36
	Memory at fb000000 (64-bit, non-prefetchable) [size=16M]
	Memory at c0000000 (64-bit, prefetchable) [size=256M]
	I/O ports at 4040 [size=64]
	Expansion ROM at fc040000 [disabled] [size=128K]
	Capabilities: [40] Vendor Specific Information: Len=0c <?>
	Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
	Capabilities: [d0] Power Management version 2
	Kernel driver in use: i915

 

I'm already running PCIe passthrough; in fact I can also see the surveillance output on my screen (I configured it on my motherboard) :) 

 

So from what you just said, I should hack the PCI ID from 8086:9bc5 to 8086:3E92 in Proxmox?

Something like my screenshot?

 

Thanks thanks thanks

[attached screenshot: Screenshot 2023-04-30 at 22.12.52.png]

Edited by raelix
added a pic

2 minutes ago, raelix said:

So from what you just said, I should hack the PCI ID from 8086:9bc5 to 8086:3E92 in Proxmox?


In the Hardware settings of your VM, just edit the passed-through device and insert 3E91 in the Device ID field:

[screenshot: Proxmox VM Hardware tab, editing the passed-through PCI device with the Device ID field highlighted]


Still having the same error; maybe the problem is the DSM version? DSM 7.1.1-42962 Update 5

 

This is in the dmesg:

[Sun Apr 30 22:23:40 2023] [drm] Initialized i915 1.6.0 20171222 for 0000:06:10.0 on minor 0
[Sun Apr 30 22:23:40 2023] i915 0000:06:10.0: fb0: inteldrmfb frame buffer device
[Sun Apr 30 22:24:30 2023] Module [i915] is removed. 
[Sun Apr 30 22:24:41 2023] i915 0000:06:10.0: Invalid ROM contents
[Sun Apr 30 22:24:41 2023] i915 0000:06:10.0: Direct firmware load for i915/kbl_dmc_ver1_04.bin failed with error -2
[Sun Apr 30 22:24:41 2023] i915 0000:06:10.0: Falling back to user helper
[Sun Apr 30 22:24:41 2023] i915 0000:06:10.0: Failed to load DMC firmware i915/kbl_dmc_ver1_04.bin. Disabling runtime power management.
[Sun Apr 30 22:24:41 2023] i915 0000:06:10.0: DMC firmware homepage: https://01.org/linuxgraphics/downloads/firmware
[Sun Apr 30 22:24:43 2023] i915 0000:06:10.0: Resetting rcs0 after gpu hang
[Sun Apr 30 22:24:43 2023] i915 0000:06:10.0: Resetting bcs0 after gpu hang
[Sun Apr 30 22:24:43 2023] i915 0000:06:10.0: Resetting vcs0 after gpu hang
[Sun Apr 30 22:24:43 2023] i915 0000:06:10.0: Resetting vecs0 after gpu hang
[Sun Apr 30 22:24:45 2023] i915 0000:06:10.0: Resetting chip after gpu hang
[Sun Apr 30 22:24:45 2023] i915 0000:06:10.0: GPU recovery failed
[Sun Apr 30 22:24:45 2023] [drm] Initialized i915 1.6.0 20171222 for 0000:06:10.0 on minor 0
[Sun Apr 30 22:24:45 2023] i915 0000:06:10.0: fb0: inteldrmfb frame buffer device

and this is the lspci output:

0000:06:10.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e91] (rev 05) (prog-if 00 [VGA controller])

 

Thanks again 
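
(Side note on the "Direct firmware load for i915/kbl_dmc_ver1_04.bin failed with error -2" line: -2 is ENOENT, i.e. the blob was not found. A quick way to check whether it is present at all, assuming the usual firmware path:)

ls /lib/firmware/i915/ 2>/dev/null | grep -i dmc
find / -name 'kbl_dmc_ver1_04.bin' 2>/dev/null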


If you are using the 10th gen addon in your loader, try removing it, as the default loader already includes the needed driver for 9th gen.

 

Edit: mine is configured like this:

[screenshot: Proxmox PCI passthrough device settings for my VM]

 

[    2.913701] i915 0000:01:00.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
[    2.917114] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[    2.928062] [drm] Initialized i915 1.6.0 20171222 for 0000:01:00.0 on minor 0
[    3.246783] i915 0000:01:00.0: fb0: inteldrmfb frame buffer device

 

Edited by Orphée

Do you have the display set to Default? Is your BIOS SeaBIOS?

 

I'm using arpl; I downloaded and installed the addon. Then, once you gave me the hints, I entered the loader and clicked "Update Menu"; the addon is gone and I see just i915 in the user config. Then I rebuilt and booted, but no luck 😕 

 

thanks!

 

Edit:

In the dmesg I now have this:

sh-4.4# dmesg -T | grep i915
[Sun Apr 30 22:46:17 2023] i915 0000:01:00.0: Invalid ROM contents
[Sun Apr 30 22:46:17 2023] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting rcs0 after gpu hang
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting bcs0 after gpu hang
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting vcs0 after gpu hang
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting vecs0 after gpu hang
[Sun Apr 30 22:46:21 2023] i915 0000:01:00.0: Resetting chip after gpu hang
[Sun Apr 30 22:46:21 2023] i915 0000:01:00.0: GPU recovery failed
[Sun Apr 30 22:46:21 2023] [drm] Initialized i915 1.6.0 20171222 for 0000:01:00.0 on minor 0
[Sun Apr 30 22:46:21 2023] i915 0000:01:00.0: fb0: inteldrmfb frame buffer device
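
To double-check which i915 binary actually gets loaded after the rebuild, something like this should tell (rough sketch, assuming modinfo is available on DSM and the modules sit flat under /lib/modules):

modinfo -F filename i915
modinfo -F vermagic i915
ls -l /lib/modules/ | grep i915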

 

Edited by raelix
added code

Right... it's not listed anymore; that was in the previous dmesg, where the module was still the patched one. Now the dmesg has:

sh-4.4# dmesg -T | grep i915
[Sun Apr 30 22:46:17 2023] i915 0000:01:00.0: Invalid ROM contents
[Sun Apr 30 22:46:17 2023] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting rcs0 after gpu hang
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting bcs0 after gpu hang
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting vcs0 after gpu hang
[Sun Apr 30 22:46:19 2023] i915 0000:01:00.0: Resetting vecs0 after gpu hang
[Sun Apr 30 22:46:21 2023] i915 0000:01:00.0: Resetting chip after gpu hang
[Sun Apr 30 22:46:21 2023] i915 0000:01:00.0: GPU recovery failed
[Sun Apr 30 22:46:21 2023] [drm] Initialized i915 1.6.0 20171222 for 0000:01:00.0 on minor 0
[Sun Apr 30 22:46:21 2023] i915 0000:01:00.0: fb0: inteldrmfb frame buffer device

Changed the BIOS to UEFI.

 

Do you think I should reimport the img and start from scratch?


Thanks, I added it, but I notice that I don't have any cmdline entries (netif, disk, sata...). Is that a problem?

 

Edit:

Still no luck... dmesg is now a bit different:

sh-4.4# dmesg -T  | grep i915
[Sun Apr 30 22:59:33 2023] i915 0000:01:00.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
[Sun Apr 30 22:59:33 2023] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[Sun Apr 30 22:59:35 2023] i915 0000:01:00.0: Resetting rcs0 after gpu hang
[Sun Apr 30 22:59:35 2023] i915 0000:01:00.0: Resetting bcs0 after gpu hang
[Sun Apr 30 22:59:35 2023] i915 0000:01:00.0: Resetting vcs0 after gpu hang
[Sun Apr 30 22:59:35 2023] i915 0000:01:00.0: Resetting vecs0 after gpu hang
[Sun Apr 30 22:59:37 2023] i915 0000:01:00.0: Resetting chip after gpu hang
[Sun Apr 30 22:59:37 2023] i915 0000:01:00.0: GPU recovery failed
[Sun Apr 30 22:59:37 2023] [drm] Initialized i915 1.6.0 20171222 for 0000:01:00.0 on minor 0
[Sun Apr 30 22:59:37 2023] i915 0000:01:00.0: fb0: inteldrmfb frame buffer device

 

Edited by raelix

Still no luck... Can I kindly ask you, if possible, to share the cmdline of your host running Proxmox? Sorry... maybe there's something there 😕 

 

Edit:

The strange thing I see is the "GPU recovery failed" in dmesg:

sh-4.4# dmesg | grep i915
[    2.194399] i915 0000:01:00.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
[    2.198549] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[    4.738764] i915 0000:01:00.0: Resetting rcs0 after gpu hang
[    4.739231] i915 0000:01:00.0: Resetting bcs0 after gpu hang
[    4.739708] i915 0000:01:00.0: Resetting vcs0 after gpu hang
[    4.740163] i915 0000:01:00.0: Resetting vecs0 after gpu hang
[    6.707634] i915 0000:01:00.0: Resetting chip after gpu hang
[    6.707927] i915 0000:01:00.0: GPU recovery failed
[    6.717950] [drm] Initialized i915 1.6.0 20171222 for 0000:01:00.0 on minor 0
[    6.851644] i915 0000:01:00.0: fb0: inteldrmfb frame buffer device

Do you have the same?

Edited by raelix
added some details

Looks like the problem is on the Proxmox side... even though the screen is visible in Surveillance, on Proxmox I have the following error:

[Mon May  1 01:57:25 2023] DMAR: DRHD: handling fault status reg 3
[Mon May  1 01:57:25 2023] DMAR: [DMA Read] Request device [00:02.0] PASID ffffffff fault addr 9ba58000 [fault reason 05] PTE Write access is not set
[Mon May  1 01:57:25 2023] DMAR: DRHD: handling fault status reg 3

 


You must have an EFI disk when you enable OVMF (UEFI) BIOS; it is clearly stated in the Proxmox GUI.

 

I suggest you ask for help on the Proxmox forum, as this is not related to XPEnology but is a Proxmox issue.

https://forum.proxmox.com/#proxmox-virtual-environment.11
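
(If you need to add the EFI disk from the CLI, it is roughly this; storage name and VMID are only examples:)

qm set <VMID> --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1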

 

Edit :

Do not take the following as exactly what you need; I'm also passing through Nvidia cards to VMs, so not everything in my kernel cmdline and modules list will be necessary for you.

 

# dmesg |egrep -i "dmar|iommu|i915"
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.15.104-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init
[    0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[    0.008425] ACPI: DMAR 0x000000003B2401D8 0000C8 (v01 INTEL  EDK2     00000002      01000013)
[    0.008461] ACPI: Reserving DMAR table memory at [mem 0x3b2401d8-0x3b24029f]
[    0.078376] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.15.104-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init
[    0.078427] DMAR: IOMMU enabled
[    0.221181] DMAR: Host address width 39
[    0.221181] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.221185] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.221188] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.221190] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.221192] DMAR: RMRR base: 0x0000003b699000 end: 0x0000003b8e2fff
[    0.221193] DMAR: RMRR base: 0x0000003d000000 end: 0x0000003f7fffff
[    0.221194] DMAR: RMRR base: 0x0000003acde000 end: 0x0000003ad5dfff
[    0.221195] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.221197] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.221197] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.224696] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.534901] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.644921] DMAR: No ATSR found
[    0.644922] DMAR: No SATC found
[    0.644923] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.644924] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.644925] DMAR: IOMMU feature nwfs inconsistent
[    0.644925] DMAR: IOMMU feature pasid inconsistent
[    0.644926] DMAR: IOMMU feature eafs inconsistent
[    0.644927] DMAR: IOMMU feature prs inconsistent
[    0.644927] DMAR: IOMMU feature nest inconsistent
[    0.644928] DMAR: IOMMU feature mts inconsistent
[    0.644928] DMAR: IOMMU feature sc_support inconsistent
[    0.644929] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.644930] DMAR: dmar0: Using Queued invalidation
[    0.644932] DMAR: dmar1: Using Queued invalidation
[    0.645179] pci 0000:00:00.0: Adding to iommu group 0
[    0.645190] pci 0000:00:01.0: Adding to iommu group 1
[    0.645199] pci 0000:00:01.1: Adding to iommu group 2
[    0.645206] pci 0000:00:02.0: Adding to iommu group 3
[    0.645214] pci 0000:00:08.0: Adding to iommu group 4
[    0.645227] pci 0000:00:12.0: Adding to iommu group 5
[    0.645241] pci 0000:00:14.0: Adding to iommu group 6
[    0.645249] pci 0000:00:14.2: Adding to iommu group 6
[    0.645263] pci 0000:00:15.0: Adding to iommu group 7
[    0.645270] pci 0000:00:15.1: Adding to iommu group 7
[    0.645282] pci 0000:00:16.0: Adding to iommu group 8
[    0.645289] pci 0000:00:17.0: Adding to iommu group 9
[    0.645314] pci 0000:00:1b.0: Adding to iommu group 10
[    0.645335] pci 0000:00:1b.4: Adding to iommu group 11
[    0.645355] pci 0000:00:1c.0: Adding to iommu group 12
[    0.645374] pci 0000:00:1c.2: Adding to iommu group 13
[    0.645389] pci 0000:00:1c.5: Adding to iommu group 14
[    0.645410] pci 0000:00:1c.6: Adding to iommu group 15
[    0.645429] pci 0000:00:1c.7: Adding to iommu group 16
[    0.645440] pci 0000:00:1e.0: Adding to iommu group 17
[    0.645464] pci 0000:00:1f.0: Adding to iommu group 18
[    0.645472] pci 0000:00:1f.3: Adding to iommu group 18
[    0.645481] pci 0000:00:1f.4: Adding to iommu group 18
[    0.645489] pci 0000:00:1f.5: Adding to iommu group 18
[    0.645498] pci 0000:00:1f.6: Adding to iommu group 18
[    0.645511] pci 0000:01:00.0: Adding to iommu group 19
[    0.645521] pci 0000:01:00.1: Adding to iommu group 20
[    0.645531] pci 0000:02:00.0: Adding to iommu group 21
[    0.645542] pci 0000:02:00.1: Adding to iommu group 22
[    0.645562] pci 0000:04:00.0: Adding to iommu group 23
[    0.645582] pci 0000:06:00.0: Adding to iommu group 24
[    0.645603] pci 0000:07:00.0: Adding to iommu group 25
[    0.645625] pci 0000:08:00.0: Adding to iommu group 26
[    0.645628] pci 0000:09:00.0: Adding to iommu group 26
[    0.645649] pci 0000:0a:00.0: Adding to iommu group 27
[    0.645766] DMAR: Intel(R) Virtualization Technology for Directed I/O

 

/etc# cat modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

overlay
aufs
# Chip drivers
adm1021
coretemp
nct6775
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

 

/etc/modprobe.d# cat blacklist.conf 
blacklist amdgpu
blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915

/etc/modprobe.d# cat intel-microcode-blacklist.conf
# The microcode module attempts to apply a microcode update when
# it autoloads.  This is not always safe, so we block it by default.
blacklist microcode

/etc/modprobe.d# cat kvm.conf
options kvm ignore_msrs=1 report_ignored_msrs=0

/etc/modprobe.d# cat pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE 

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb

/etc/modprobe.d# cat vfio.conf
options vfio-pci ids=8086:3e98 disable_vga=1

 

Don't forget to run "update-initramfs -u" after changing a module/modprobe.d configuration, before rebooting.

 

# lspci -s 00:02.0 -nnkkvq
00:02.0 Display controller [0380]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e98]
        Subsystem: Super Micro Computer Inc UHD Graphics 630 (Desktop 9 Series) [15d9:1a1d]
        Flags: fast devsel, IRQ 16, IOMMU group 3
        Memory at 84000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 40000000 (64-bit, prefetchable) [size=256M]
        I/O ports at 7000 [size=64]
        Capabilities: [40] Vendor Specific Information: Len=0c <?>
        Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [ac] MSI: Enable- Count=1/1 Maskable- 64bit-
        Capabilities: [d0] Power Management version 2
        Capabilities: [100] Process Address Space ID (PASID)
        Capabilities: [200] Address Translation Service (ATS)
        Capabilities: [300] Page Request Interface (PRI)
        Kernel driver in use: vfio-pci
        Kernel modules: i915

 

Edited by Orphée

Thank you for your help... I think I'm only making progress thanks to you. I spent the night applying the same configuration. I suspect it could be a kernel/Proxmox version issue.

 

As a last point, may I ask which kernel version and Proxmox version you are using?

 

I'm using Proxmox 7.4-3 with kernel 5.15.102-1-pve.
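
(I checked them like this, in case it helps:)

pveversion -v | grep -E 'proxmox-ve|pve-manager|pve-kernel'
uname -r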

 

Edit: The strange thing is that my Ubuntu VM works without issues. I can see the screen and use it with passthrough.

Edited by raelix
