I've been experiencing something that sounds very similar to what has
been described in this issue and want to see if you think it's the
same problem. For me, everything is fine for a while after a cold
boot, and I can restart my VM and such without issue. But after the
system has been up for a long time, or after stressful workloads
(mining or gaming), shutting down my VM makes all of the host
displays go to sleep and the whole system lock up, which I had been
assuming is a display driver crash. I can also sometimes trigger the
exact same lockup just by running lspci. Once such a lockup has
happened, I have to hard reset.

Where this gets even weirder is that, after it happens, I get the
same lockup during the next boot, around the point where Xorg loads.
When that happens I either have to leave the computer alone for
roughly 30 minutes to an hour, or I can get it to boot by disabling
the IOMMU with iommu=off as a kernel parameter; if I then wait
another 30 minutes to an hour, I can reboot and it comes up fine
again with iommu=pt (I get a kernel panic if I don't use iommu=pt).
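
For reference, the bootloader edit for those parameters would look
roughly like this, assuming GRUB (adjust for other bootloaders; the
"..." stands for whatever else is already on the line):

    # /etc/default/grub -- normal boot, passthrough working
    GRUB_CMDLINE_LINUX_DEFAULT="... iommu=pt"
    # temporary recovery boot after one of these lockups
    GRUB_CMDLINE_LINUX_DEFAULT="... iommu=off"

followed by regenerating the config with
grub-mkconfig -o /boot/grub/grub.cfg.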

Hardware:
Ryzen 5 1600
ASRock AB350M Pro4
32 GB RAM
Host GPU: RX 580
Guest GPU: GTX 1070

https://bugs.launchpad.net/bugs/1580459

Title:
  Windows (10?) guest freezes entire host on shutdown if using PCI
  passthrough

Status in libvirt:
  New
Status in QEMU:
  New
Status in Arch Linux:
  New
Status in Debian:
  New
Status in Fedora:
  New

Bug description:
  Problem: after a Windows VM that uses PCI passthrough (as we do for
  gaming graphics cards, sound cards, and in my case a USB card) has
  been running for somewhere between one and two hours (exactly how
  long is not consistent), or for any amount of time beyond that,
  shutting down that guest will, right as it finishes shutting down,
  freeze the host, requiring a hard reboot. Unbinding (or, in the
  other user's case, unbinding and THEN binding) any PCI device in
  sysfs, even one that has nothing to do with the VM, has the same
  effect as shutting down the VM (provided the VM has been running
  long enough). So it's probably an issue related to unbinding and
  binding PCI devices.
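
  For anyone who wants to reproduce the sysfs trigger, the
  unbind/bind sequence in question looks roughly like this (run as
  root; the PCI address 0000:01:00.0 and the snd_hda_intel driver are
  only placeholders, substitute any device present on the host):

      echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
      echo 0000:01:00.0 > /sys/bus/pci/drivers/snd_hda_intel/bind

  Once the host has passed the one-to-two-hour mark described above,
  the unbind (or unbind-then-bind) on even an unrelated device is
  what reproduces the freeze.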

  There's a lot of info on this problem over at
  https://bbs.archlinux.org/viewtopic.php?id=206050
  Here's a better-organized list of the main details:
  - At least two people have confirmed this bug; two of us (including
    me) have provided lots of info in the thread linked above.
  - I'm on Arch Linux and the other user is on Gentoo, so it isn't
    specific to one distro.
  - The issue affects my Windows 10 guest and the other users'
    Windows guests, but not my Arch Linux guest (the others don't
    have non-Windows guests to test with).
  - I'm using libvirt but the other user is not, so it's not an issue
    with libvirt.
  - It also doesn't seem to be tied to a specific version. The oldest
    versions where I've either noticed or reproduced it are Linux 4.1
    and qemu 2.4.0, and it still persists in all releases of both
    since, including the newest ones.
  - I can't track down exactly which package downgrade would fix it,
    because downgrading further back than Linux 4.1 and qemu 2.4.0
    requires Herculean and system-destroying changes such as
    downgrading ncurses, so I don't know whether the bug is in QEMU,
    the Linux kernel, or some seemingly unrelated component.
  - According to the other user, "graphics intensive gameplay (GTA V)
    can cause the crash to happen sooner," as soon as "15 minutes" in.
  - Also, "bringing up a second passthrough VM with separate hardware
    will cause the same crash," while "bringing up another VM before
    the two-hour mark will not result in a crash," further cementing
    that the trigger is the unbinding/binding of PCI devices.
  - This is NOT related to the very similar bug that can be worked
    around by not passing through the HDMI audio device or sound
    card. Even when we removed all traces of any sort of sound device
    from the VM, it still behaved the same way.
