On Wednesday, June 22, 2016 at 11:26:50 UTC-4, Marcus at WetwareLabs wrote:
> Hello all,
> 
> I've been tinkering with GPU passthrough for the past couple of weeks and I 
> thought I should now share some of my findings. It's not unlike the earlier 
> report on GPU passthrough here 
> (https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/cmPRMOkxkdA/gIV68O0-CQAJ).
> 
> I started with an Nvidia GTX 980, but I had no luck with ANY of the Xen 
> hypervisor or Qubes versions. Please see my other thread for more 
> information 
> (https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/PuZLWxhTgM0/pWe7LXI-AgAJ).
> 
> However, after I switched to a Radeon 6950, I've had success with all the Xen 
> versions, so I guess it's an issue with Nvidia driver initialization. On a 
> side note, someone should really test this with Nvidia Quadros, which are 
> officially supported for use in VMs. (And of course, there are the hacks 
> to convert older Geforces to Quadros..)
> 
> Anyway, here's a quick and most likely incomplete guide (for most users) to 
> getting GPU passthrough working in a Win 8.1 VM (it works identically on Win 7).
> 
> Enclosed are the VM configuration file and the HCL file with information about 
> my hardware setup (feel free to add this to the HW compatibility list!).
> 
> TUTORIAL
> 
> Check which PCI addresses correspond to your GPU (and optionally, your USB 
> host controller) with lspci. Here's mine:
> ...
> 
> 
> # lspci
> ....
> 03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] 
> Cayman XT [Radeon HD 6970]
> 03:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles 
> HDMI Audio [Radeon HD 6900 Series]
> Note that you have to pass through both of these devices if you have a 
> similar GPU with dual (video + HDMI audio) functionality.
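> 
> If the full lspci listing is too noisy, here is a minimal sketch for filtering 
> it down to the interesting functions (assuming an AMD/ATI card; adjust the 
> pattern for other vendors):
> 
> # lspci -nn | grep -i -E 'vga|audio'   (show VGA and audio functions together 
>                                         with their vendor/device IDs)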
> 
> Edit /etc/default/grub and add the following options (change the PCI 
> addresses if needed):
> 
> GRUB_CMDLINE_LINUX=".... rd.qubes.hide_pci=03:00.0,03:00.1 
> modprobe=xen-pciback.passthrough=1 xen-pciback.permissive"
> GRUB_CMDLINE_XEN_DEFAULT="... dom0_mem=min:1024M dom0_mem=max:4096M"
> 
> For extra logging:
> 
> 
> GRUB_CMDLINE_XEN_DEFAULT="... apic_verbosity=debug loglvl=all 
> guest_loglvl=all iommu=verbose"
> 
> There are many other options available, but I didn't see any difference in 
> success rate. See here:
> http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> http://wiki.xenproject.org/wiki/Xen_PCI_Passthrough
> http://wiki.xenproject.org/wiki/XenVGAPassthrough
> 
> Update grub:
> 
> # grub2-mkconfig -o /boot/grub2/grub.cfg
> Reboot. Check that VT-d is enabled:
> 
> # xl dmesg
> ...
> (XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
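> 
> To pick out just the relevant lines, a quick sketch (the grep pattern is only 
> a guess at what your log will contain):
> 
> # xl dmesg | grep -i -E 'vt-d|iommu|i/o virt'
> 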
> Check that the PCI devices are available for passthrough:
> 
> # xl pci-assignable list
> 0000:03:00.0
> 0000:03:00.1
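> 
> If your devices do not show up here, you can usually make them assignable 
> manually (a sketch; substitute your own PCI addresses):
> 
> # xl pci-assignable-add 03:00.0
> # xl pci-assignable-add 03:00.1
> 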
> Create disk images:
> 
> # dd if=/dev/zero of=win8.img bs=1M count=30000
> # dd if=/dev/zero of=win8-user.img bs=1M count=30000
> Install a VNC server in dom0:
> 
> # qubes-dom0-update vnc
> Modify the win8.hvm file: check that the disk images and the Windows 
> installation CD-ROM image are correct, and that the IP address does not 
> conflict with any other VM (I haven't figured out yet how to set up DHCP). 
> Check that the 'pci = [ .... ]' line is commented out for now (a sketch of 
> the relevant config lines follows the command below). Start the VM (the -V 
> option automatically launches a VNC client):
> 
> # xl create win8.hvm -V
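> 
> For reference, here is a minimal sketch of what the relevant win8.hvm lines 
> might look like (the paths, sizes and vif details are placeholders and my 
> best guess at the syntax; see the enclosed configuration file for the real 
> thing):
> 
> builder = 'hvm'
> name = 'win8'
> memory = 4096
> vcpus = 2
> # disk paths below are placeholders
> disk = [ 'file:/path/to/win8.img,hda,w',
>          'file:/path/to/win8-user.img,hdb,w',
>          'file:/path/to/win8_install.iso,hdc:cdrom,r' ]
> # vif details are a guess; the IP must not collide with another VM
> vif = [ 'type=ioemu, ip=10.137.2.50, backend=sys-firewall' ]
> vnc = 1
> # keep the pci line commented out until Windows and the PV drivers are installed
> # pci = [ '03:00.0,permissive=1', '03:00.1,permissive=1' ]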
> 
> If you happen to close the client (while the VM is still running), start it 
> again with:
> 
> 
> # xl vncviewer win8
> Note that I only had success starting the VM as root. Also, killing the VM 
> with 'xl destroy win8' would leave the qemu process lingering if not done as 
> root (if that occurs, you have to kill the process manually; see the sketch 
> below).
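> 
> A sketch for cleaning up such a lingering process (the exact process name may 
> differ on your system):
> 
> # ps aux | grep qemu      (find the lingering qemu device model for win8)
> # kill <PID>              (use kill -9 if it refuses to die)
> 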
> Install Windows.
> Partition the user image using 'Disk Manager'.
> Download the signed paravirtualized drivers here (the Qubes PV drivers work 
> only in Win 7):
> http://apt.univention.de/download/addons/gplpv-drivers/gplpv_Vista2008x64_signed_0.11.0.373.msi
> Don't mind the name, it works on Win 8.1 as well.
> For more info: 
> http://wiki.univention.com/index.php?title=Installing-signed-GPLPV-drivers
> 
> Move the drivers inside the user image partition (shut down the VM first):
> 
> # losetup                                (check for a free loop device)
> # losetup -P /dev/loop10 win8-user.img   (set up the loop device and scan its 
>                                           partitions, assuming loop10 is free)
> # mount /dev/loop10p1 /mnt/removable     (mount the first partition)
> Copy the driver there and unmount (see the sketch below).
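> 
> The copy/unmount step as a sketch (assuming the .msi was downloaded into the 
> current directory):
> 
> # cp gplpv_Vista2008x64_signed_0.11.0.373.msi /mnt/removable/
> # umount /mnt/removable
> # losetup -d /dev/loop10      (detach the loop device)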
> 
> Reboot the VM, install the paravirtual drivers and reboot again.
> Create this script inside sys-firewall (check that the sys-net VM IP address 
> 10.137.1.1 is correct, though):
> 
> fwcfg.sh:
> #!/bin/bash
> vmip=$1
> 
> iptables -A FORWARD -s $vmip -p udp -d 10.137.1.1   --dport 53 -j ACCEPT
> iptables -A FORWARD -s $vmip -p udp -d 10.137.1.254 --dport 53 -j ACCEPT
> iptables -A FORWARD -s $vmip -p icmp -j ACCEPT
> iptables -A FORWARD -s $vmip -p tcp -d 10.137.255.254 --dport 8082 -j DROP
> iptables -A FORWARD -s $vmip -j ACCEPT
> Then set up the iptables rules:
> 
> 
> # sudo ./fwcfg.sh 10.137.2.50   # substitute with the win8.1 VM ip address
> Note that this has to be done manually EVERY TIME the VM is (re)started, 
> because a new virtual interface is created and the old one is scrapped. If 
> someone knows how to automate this, I'm all ears :) (one possible approach is 
> sketched below)
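> 
> One possible (untested) way to automate it: Qubes executes 
> /rw/config/qubes-firewall-user-script inside the firewall VM when it 
> (re)applies firewall rules, so the call could be hooked in there. Whether 
> that actually fires every time the Windows VM is restarted is an assumption 
> I haven't verified:
> 
> # in sys-firewall, append to /rw/config/qubes-firewall-user-script:
> /rw/config/fwcfg.sh 10.137.2.50    # hypothetical location of the script above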
> 
> Configure VM networking: inside Windows, manually set the IP address, 
> netmask and gateway (10.137.2.50, 255.255.255.0, 10.137.2.1) as well as the 
> DNS server (10.137.1.1) for the 'Xen Net' interface.
> If routing does not work properly at this point, try disabling the other 
> (Realtek) network interface in Windows.
> 
> Uncomment the devices-to-be-passed list ( pci = [ ... ] ) in win8.hvm.
> Download the GPU drivers (ATI Catalyst 15.7.1 for Win 8.1 worked for me with 
> the Radeon 6950).
> Launch the installer, but close it after it has unzipped the drivers to 
> C:\ATI.
> Install the driver manually via Device Manager ( Update driver -> Browse ).
> Cross your fingers and hope for the best!
> Enjoy a beer :)
>  ---------
> 
> If these instructions don't work for you, you could try the following things:
> - enable permissive mode for the PCI device (see the link above)
> - the iommu=workaround_bios_bug boot option
> - enabling/disabling options in the .hvm file: viridian, pae, hpet, acpi, 
>   apic, pci_msitranslate, pci_power_mgmt, xen_platform_pci (see the sketch 
>   below)
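> 
> As a sketch, toggling those options in the .hvm file looks roughly like this 
> (the values below are just examples to flip one by one, not a known-good 
> combination):
> 
> viridian = 0
> pae = 1
> hpet = 1
> acpi = 1
> apic = 1
> pci_msitranslate = 1
> pci_power_mgmt = 1
> xen_platform_pci = 1
> 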
> If you still don't get passthrough working, make sure that it is even 
> possible with your current hardware. Most of the modern (<3 years old) 
> working GPU PT installations seem to be using KVM (I even got my grumpy 
> Nvidia GTX 980 functional there!), so you should at least try creating a 
> bare-metal Arch Linux installation and then following the instructions here: 
> https://bufferoverflow.io/gpu-passthrough/
> or Arch wiki entry here: 
> https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
> or a series of tutorials here: 
> http://vfio.blogspot.se/2015/05/vfio-gpu-how-to-series-part-1-hardware.html
> 
> Most of the instructions are KVM specific, but there's a lot of great 
> non-hypervisor-specific information there as well, especially in the latter 
> blog. Note that all the info about VFIO and IOMMU groups can be misleading, 
> since those are KVM-specific functionality and not part of Xen (don't ask me 
> how much time I spent figuring out why I couldn't seem to find IOMMU group 
> entries in /sys/bus/pci/ under Qubes...)
> 
> One thing about FLReset (Function Level Reset): there's a quite common 
> misconception that FLR is a requirement for GPU passthrough, but this isn't 
> true. As a matter of fact, not even the Nvidia Quadros have FLReset+ in their 
> PCI DevCap, and not many non-GPU PCI devices do either. So even though the 
> how-to here (http://wiki.xen.org/wiki/VTd_HowTo) states otherwise, a missing 
> FLR capability does not necessarily mean that the device can't be used in a 
> VM; it may only make it harder for the device to survive a DomU reboot. In my 
> tests both the Win 7 and Win 8 VMs could in fact be booted several times 
> without having to reboot Dom0 (but hopping BETWEEN the two Windows versions 
> will result in either a BSOD or Code 43). But again, this may vary a lot with 
> GPU models and driver versions. Anyway, if you see this message during VM 
> startup:
> 
> 
> libxl: error: ....  The kernel doesn't support reset from sysfs for PCI device 
> ...... you can safely ignore it.
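> 
> To check whether a device advertises FLR at all, a quick sketch (run it 
> against your own PCI address):
> 
> # lspci -vv -s 03:00.0 | grep FLReset    (FLReset+ means the device supports 
>                                           function level reset, FLReset- 
>                                           means it doesn't)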
> 
> Happy hacking!
> 
> Best regards,
> Marcus

Thank you Marcus, I made it work on my Qubes 3.2 install by following your 
instructions.

GPU: ASUS Radeon 480 4GB

Sound works with the Radeon HDMI sound device, but not with the ASUS XONAR DG 
PCI device that I've been trying to pass through. Although the device is 
recognized and the driver installs, Windows cannot start the device.

Also, Windows boots fine the first time after a dom0 boot, but once the 
Windows VM has been shut down (whether gracefully or by a crash), it will 
invariably crash with a BSOD on the next start. It won't boot again.
It seems, though, that attaching both the Radeon GPU and the HDMI sound device 
to another Qubes VM, then starting and shutting down that VM, will "release" 
the devices and allow the Windows VM to start again without a BSOD.

Now, the next step would be to control the startup and shutdown of the Windows 
VM via the Qubes VM Manager. I have tried to translate the config file into a 
libvirt XML one with virsh, but with no success.
I'm guessing it's because of the use of qemu-xen-traditional (which I hear is 
not so secure), which libvirt doesn't seem to allow.
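
For reference, the kind of virsh conversion I attempted looks roughly like this 
(a sketch; 'xen-xm' is the native format name libvirt uses for xl/xm style 
config files, and the connection URI may differ on Qubes):

virsh -c xen:/// domxml-from-native xen-xm win8.hvm > win8.xml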
