Re: KVM: GPU passthrough

2021-04-30 Thread Gokan Atmaca
OK, it works now. I reduced the RAM size I had given to the GPU. But I am
seeing errors like the following.


---% kernel_err:

[9.487622] r8169 0000:02:00.0: firmware: failed to load
rtl_nic/rtl8168d-2.fw (-2)
[9.487697] firmware_class: See https://wiki.debian.org/Firmware
for information about missing firmware
[ 1159.047398] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1159.047534] handlers:
[ 1159.047572] [<7029899b>] usb_hcd_irq [usbcore]
[ 1164.024714] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1164.024846] handlers:
[ 1164.024883] [<7029899b>] usb_hcd_irq [usbcore]
[ 1268.843310] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1268.843448] handlers:
[ 1268.843487] [<7029899b>] usb_hcd_irq [usbcore]
[ 1323.645066] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1323.645198] handlers:
[ 1323.645236] [<7029899b>] usb_hcd_irq [usbcore]
root@homeKvm:~#
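
Note: the missing rtl8168d-2.fw ships in Debian's firmware-realtek package
(non-free), and for the "irq 16: nobody cared" messages the kernel itself
suggests the irqpoll boot option. A rough sketch, assuming non-free is
enabled in the APT sources:

   apt update && apt install firmware-realtek

   # for the IRQ warnings, add "irqpoll" to GRUB_CMDLINE_LINUX_DEFAULT
   # in /etc/default/grub, then regenerate the GRUB config and reboot:
   update-grub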

On Fri, Apr 30, 2021 at 7:36 PM Gokan Atmaca  wrote:
>
> The system boots up but then freezes. It just stays like that. I guess
> the problem is with the hardware.
>
>
>
> On Tue, Apr 27, 2021 at 6:14 PM Christian Seiler  wrote:
> >
> > Hi there,
> >
> > On 2021-04-09 00:37, Gokan Atmaca wrote:
> > > error:
> > > pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> > > 0000:01:00.0: group 1 is not viable
> > > Please ensure all devices within the iommu_group are bound to their
> > > vfio bus driver.
> >
> > This is a known issue with PCIe passthrough: depending on your
> > mainboard and CPU, some PCIe devices will be grouped together,
> > and you will either be able to forward _all_ devices in the
> > group to the VM or none at all.
> >
> > (If you have a "server" GPU that supports SR-IOV you'd have
> > additional options, but that doesn't appear to be the case.)
> >
> > This will highly depend on the PCIe slot the card is in, as well
> > as potentially some BIOS/UEFI settings on PCIe lane distribution.
> >
> > First let's find out what devices are in the same IOMMU group.
> > From your kernel log:
> >
> > [0.592011] pci 0000:00:01.0: Adding to iommu group 1
> > [0.594091] pci 0000:01:00.0: Adding to iommu group 1
> > [0.594096] pci 0000:01:00.1: Adding to iommu group 1
> >
> > Could you check with "lspci" what these devices are in your case?
> >
> > If you are comfortable forwarding the other two devices into the
> > VM as well, just add that to the list of passthrough devices,
> > then this should work.
> >
> > If you need the other two devices on the host, then you need to
> > either put the GPU into a different PCIe slot, put the other
> > devices into a different PCIe slot, or find some BIOS/UEFI setting
> > for PCIe lane management that separates the devices in question
> > into different IOMMU groups implicitly. (BIOS/UEFI settings will
> > typically not mention IOMMU groups at all, so look for "lane
> > management" or "lane distribution" or something along those
> > lines. You might need to drop some PCIe lanes from other devices
> > and give them directly to the GPU you want to pass through in
> > order for this to work, or vice-versa, depending on the specific
> > situation.)
> >
> > Note: the GUI tool "lstopo" from the package "hwloc" is _very_
> > useful to identify how the PCIe devices are organized in your
> > system and may give you a clue as to why your system is grouped
> > together in the way it is.
> >
> > Hope that helps.
> >
> > Regards,
> > Christian
> >



Re: KVM: GPU passthrough

2021-04-27 Thread Christian Seiler

Hi there,

On 2021-04-09 00:37, Gokan Atmaca wrote:

error:
pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
0000:01:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their
vfio bus driver.


This is a known issue with PCIe passthrough: depending on your
mainboard and CPU, some PCIe devices will be grouped together,
and you will either be able to forward _all_ devices in the
group to the VM or none at all.

(If you have a "server" GPU that supports SR-IOV you'd have
additional options, but that doesn't appear to be the case.)

This will highly depend on the PCIe slot the card is in, as well
as potentially some BIOS/UEFI settings on PCIe lane distribution.

First let's find out what devices are in the same IOMMU group.
From your kernel log:

[0.592011] pci 0000:00:01.0: Adding to iommu group 1
[0.594091] pci 0000:01:00.0: Adding to iommu group 1
[0.594096] pci 0000:01:00.1: Adding to iommu group 1

Could you check with "lspci" what these devices are in your case?
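
For example, something like the following shows the device names, the
numeric IDs and the driver each one is currently bound to (-nn and -k are
just convenient flags, not required):

   lspci -nnk -s 00:01.0
   lspci -nnk -s 01:00.0
   lspci -nnk -s 01:00.1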

If you are comfortable forwarding the other two devices into the
VM as well, just add that to the list of passthrough devices,
then this should work.
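
With plain QEMU that is one "-device vfio-pci" entry per function, in the
same form as the option shown in the error message (addresses taken from the
log above); the PCIe root port 00:01.0 itself is a bridge and does not get
handed to the guest:

   -device vfio-pci,host=0000:01:00.0
   -device vfio-pci,host=0000:01:00.1

With virt-manager the equivalent is adding each device via
"Add Hardware" -> "PCI Host Device", and libvirt generates these options
itself.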

If you need the other two devices on the host, then you need to
either put the GPU into a different PCIe slot, put the other
devices into a different PCIe slot, or find some BIOS/UEFI setting
for PCIe lane management that separates the devices in question
into different IOMMU groups implicitly. (BIOS/UEFI settings will
typically not mention IOMMU groups at all, so look for "lane
management" or "lane distribution" or something along those
lines. You might need to drop some PCIe lanes from other devices
and give them directly to the GPU you want to pass through in
order for this to work, or vice-versa, depending on the specific
situation.)

Note: the GUI tool "lstopo" from the package "hwloc" is _very_
useful to identify how the PCIe devices are organized in your
system and may give you a clue as to why your system is grouped
together in the way it is.
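
A minimal way to try it (lstopo opens a graphical window, or it can write
the diagram to an image file instead):

   apt install hwloc
   lstopo
   lstopo topology.png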

Hope that helps.

Regards,
Christian



Re: KVM: GPU passthrough

2021-04-27 Thread Gokan Atmaca
Hello

I have two GPUs now; the other video card has arrived, and the error has
changed. What could be the problem?


error:
Error starting domain: internal error: qemu unexpectedly closed the
monitor: 2021-04-27T11:26:00.638521Z qemu-system-x86_64:
-device vfio-pci,host=0000:06:00.0,id=hostdev0,bus=pci.0,addr=0xa:
vfio 0000:06:00.0: failed to setup container for group
18: Failed to set iommu for container: Operation not permitted


-% modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
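
(For reference, on Debian these are typically loaded at boot by listing
them, one per line, in /etc/modules or in a drop-in file such as the
following -- the file name here is just an example:)

   # /etc/modules-load.d/vfio.conf
   vfio
   vfio_iommu_type1
   vfio_pci
   vfio_virqfd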

-% log:
dmesg | grep -E "DMAR|IOMMU"
[0.020358] ACPI: DMAR 0xBFE880C0 90 (v01 AMI
OEMDMAR  0001 MSFT 0097)
[0.052689] DMAR: IOMMU enabled
[0.124828] DMAR: Host address width 36
[0.124829] DMAR: DRHD base: 0x00fed9 flags: 0x1
[0.124834] DMAR: dmar0: reg_base_addr fed9 ver 1:0 cap
c90780106f0462 ecap f020e3
[0.124835] DMAR: RMRR base: 0x0e4000 end: 0x0e7fff
[0.124836] DMAR: RMRR base: 0x00bfeec000 end: 0x00bfef
[0.564105] DMAR: No ATSR found
[0.564226] DMAR: dmar0: Using Queued invalidation
[0.569521] DMAR: Intel(R) Virtualization Technology for Directed I/O

-% gpus:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218
[GeForce 210] [10de:0a65] (rev a2)
01:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio
Controller [10de:0be3] (rev a1)

nvidia_uvm 36864  0
nvidia  10592256  77 nvidia_uvm
drm   552960  11 drm_kms_helper,nvidia,radeon,ttm

-% gpus:
06:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc.
[AMD/ATI] Caicos [Radeon HD 6450/7450/8450 / R5 230 OEM] [1002:6779]
06:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI]
Caicos HDMI Audio [Radeon HD 6450 / 7450/8450/8490 OEM / R5
230/235/235X OEM] [1002:a..

radeon   1466368  2
ttm   102400  1 radeon
drm_kms_helper217088  1 radeon
i2c_algo_bit   16384  1 radeon
drm   552960  11 drm_kms_helper,nvidia,radeon,ttm
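
Note: unlike the kernel log from the other machine further down in this
digest, the DMAR output above has no "DMAR-IR ... Enabled IRQ remapping"
lines. VFIO is known to fail container setup with "Operation not permitted"
when interrupt remapping is unavailable, unless its (less safe) override is
set, so that is one thing worth checking here. A rough sketch, purely as a
diagnostic:

   # the kernel usually logs the exact reason right after the failed start
   dmesg | grep -i vfio

   # if it complains about missing interrupt remapping, the documented
   # vfio_iommu_type1 override can be tried (this weakens guest isolation):
   echo 'options vfio_iommu_type1 allow_unsafe_interrupts=1' \
   > /etc/modprobe.d/vfio-iommu.conf   # example file name
   # then unload/reload vfio_iommu_type1 or reboot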

On Thu, Apr 15, 2021 at 4:22 PM Gokan Atmaca  wrote:
>
> Hello
>
> > Just to confirm: you have at least two graphics cards? One for
> > the host to boot with, one for your guest to take over?
>
> I saw it working in my tests. But of course, since there is only one
> graphics card, the host system's display is gone. :) I am looking
> for a motherboard where I can install two graphics cards.
>
> On Fri, Apr 9, 2021 at 2:51 AM Dan Ritter  wrote:
> >
> > Gokan Atmaca wrote:
> > > Hello
> > >
> > > I want to use the graphics card directly in the virtual machine. IOMMU
> > > seems to be running, but unfortunately it doesn't work when I want to
> > > start the virtual machine.
> > >
> > >
> > > error:
> > > pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> > > 0000:01:00.0: group 1 is not viable
> > > Please ensure all devices within the iommu_group are bound to their
> > > vfio bus driver.
> >
> > Just to confirm: you have at least two graphics cards? One for
> > the host to boot with, one for your guest to take over?
> >
> > And you loaded the vfio mod and configured it with the PCI ids
> > for your second card? There could be several.
> >
> > -dsr-



Re: KVM: GPU passthrough

2021-04-15 Thread Gokan Atmaca
Hello

> Just to confirm: you have at least two graphics cards? One for
> the host to boot with, one for your guest to take over?

I saw it working in my tests. But of course, since there is only one
graphics card, the host system's display is gone. :) I am looking
for a motherboard where I can install two graphics cards.

On Fri, Apr 9, 2021 at 2:51 AM Dan Ritter  wrote:
>
> Gokan Atmaca wrote:
> > Hello
> >
> > I want to use the graphics card directly in the virtual machine. IOMMU
> > seems to be running, but unfortunately it doesn't work when I want to
> > start the virtual machine.
> >
> >
> > error:
> > pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> > 0000:01:00.0: group 1 is not viable
> > Please ensure all devices within the iommu_group are bound to their
> > vfio bus driver.
>
> Just to confirm: you have at least two graphics cards? One for
> the host to boot with, one for your guest to take over?
>
> And you loaded the vfio mod and configured it with the PCI ids
> for your second card? There could be several.
>
> -dsr-



Re: KVM: GPU passthrough

2021-04-08 Thread Dan Ritter
Gokan Atmaca wrote: 
> Hello
> 
> I want to use the graphics card directly in the virtual machine. IOMMU
> seems to be running, but unfortunately it doesn't work when I want to
> start the virtual machine.
> 
> 
> error:
> pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> 0000:01:00.0: group 1 is not viable
> Please ensure all devices within the iommu_group are bound to their
> vfio bus driver.

Just to confirm: you have at least two graphics cards? One for
the host to boot with, one for your guest to take over?

And you loaded the vfio mod and configured it with the PCI ids
for your second card? There could be several.
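
A common way to do the second part on Debian is a modprobe options file with
the vendor:device IDs of the card to be handed to the guest (the IDs below
are the ones lspci reports for the NVIDIA card elsewhere in this thread;
substitute your own):

   # /etc/modprobe.d/vfio.conf  (example file name)
   options vfio-pci ids=10de:0a65,10de:0be3

vfio-pci then has to be loaded at boot (e.g. listed in /etc/modules); if the
host GPU driver still grabs the card first, the module and its options
usually need to go into the initramfs as well.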

-dsr-



KVM: GPU passthrough

2021-04-08 Thread Gokan Atmaca
Hello

I want to use the graphics card directly in the virtual machine. IOMMU
seems to be running, but unfortunately it doesn't work when I want to
start the virtual machine.


pci:

[0.010066] ACPI: DMAR 0x9D8B7000 70 (v01 INTEL  EDK2
  0002  0113)
[0.121392] DMAR: IOMMU enabled
[0.202324] DMAR: Host address width 39
[0.202325] DMAR: DRHD base: 0x00fed91000 flags: 0x1
[0.202331] DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap
d2008c40660462 ecap f050da
[0.202333] DMAR: RMRR base: 0x009e543000 end: 0x009e78cfff
[0.202336] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 0
[0.202338] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[0.202339] DMAR-IR: Queued invalidation will be enabled to support
x2apic and Intr-remapping.
[0.203666] DMAR-IR: Enabled IRQ remapping in x2apic mode
[0.391676] iommu: Default domain type: Translated
[0.591706] DMAR: No ATSR found
[0.591762] DMAR: dmar0: Using Queued invalidation
[0.591942] pci 0000:00:00.0: Adding to iommu group 0
[0.592011] pci 0000:00:01.0: Adding to iommu group 1
[0.592090] pci 0000:00:08.0: Adding to iommu group 2
[0.592367] pci 0000:00:14.0: Adding to iommu group 3
[0.592378] pci 0000:00:14.2: Adding to iommu group 3
[0.592438] pci 0000:00:16.0: Adding to iommu group 4
[0.592519] pci 0000:00:17.0: Adding to iommu group 5
[0.592583] pci 0000:00:1b.0: Adding to iommu group 6
[0.592674] pci 0000:00:1c.0: Adding to iommu group 7
[0.592687] pci 0000:00:1c.3: Adding to iommu group 7
[0.594066] pci 0000:00:1f.0: Adding to iommu group 8
[0.594075] pci 0000:00:1f.2: Adding to iommu group 8
[0.594084] pci 0000:00:1f.4: Adding to iommu group 8
[0.594091] pci 0000:01:00.0: Adding to iommu group 1
[0.594096] pci 0000:01:00.1: Adding to iommu group 1
[0.594104] pci 0000:02:00.0: Adding to iommu group 6
[0.594112] pci 0000:03:00.0: Adding to iommu group 7
[0.594119] pci 0000:04:00.0: Adding to iommu group 7
[0.594122] DMAR: Intel(R) Virtualization Technology for Directed I/O


error:
pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
0000:01:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their
vfio bus driver.
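
Note: a quick way to list every IOMMU group and its members on the host,
assuming sysfs is mounted at /sys as usual:

   for g in /sys/kernel/iommu_groups/*; do
       echo "IOMMU group ${g##*/}:"
       for d in "$g"/devices/*; do
           lspci -nns "${d##*/}"
       done
   done

In the log above, group 1 ends up holding 00:01.0, 01:00.0 and 01:00.1,
which is what the replies earlier in this digest work through.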