I've been experiencing something that sounds very similar to what has
been described in this issue post and want to see if you guys think it's
the same issue. For me, from a cold boot everything is fine for a while
and I can restart my VM and such just fine, but after a long time or
stressful stuff
I haven't remembered to reset those interrupts in a year, but I also
haven't remembered to update my drivers in about as long, so I could
still be on the right setting. I've also been on AMD for that year, and I
don't remember whether this bug applies to modern AMD cards.
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1580459
Title:
  Windows (10?) guest freezes entire host on shutdown if using PCI
  passthrough
Updating NVIDIA drivers in the guest also seems to disable MSI for some
reason. Oddly enough I did not run into the host hard locking though.
--
Enabling MSI interrupts works for me. One note is that Windows updates
will sometimes revert the changes so if this starts breaking after an
update you may need to re-apply the registry changes.
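Since several replies mention "the registry changes" without spelling them out: the tweak usually being referred to is the widely circulated per-device MSI switch in the Windows guest. This fragment is illustrative only (the device and instance subkeys vary per machine; it is the commonly cited recipe, not something quoted in this thread):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<device>\<instance>
  \Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
    MSISupported (REG_DWORD) = 1
```

After a guest reboot, the device should show a negative IRQ number in Device Manager, which indicates message-signaled interrupts are in use.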
--
Hi guys, not sure if I'm on the right track here but I think I'm
experiencing the same issue. My install might be a bit of a mess
combining bits from the VFIO Tips site and Ubuntu guides on GPU
passthrough, but I *did* have it all working for a few hours at a
stretch before I got this lock up.
The
Oh, that is interesting. Using lspci -v on my computer reveals that
Linux tends to default to enabling MSI on my PCIe devices that support
it (since the common opinion is that it's better for PCIe), including
all my graphics cards, so the fact that vfio-pci and Windows 10 both
default to disabling
(Forgot to clarify: yes, vfio-pci devices disable MSI by default for me
just like for Clif Houck, but all other PCIe devices have it enabled.)
--
On Thu, 07 Jul 2016 20:34:15 -
Clif Houck wrote:
> I was also experiencing the host hard locking when shutting down a
> Windows 10 guest with a Nvidia GPU passed-through, but the issue appears
> to be completely solved after switching the card to MSI mode in the
> Windows guest.
>
> However,
I was also experiencing the host hard locking when shutting down a
Windows 10 guest with a Nvidia GPU passed-through, but the issue appears
to be completely solved after switching the card to MSI mode in the
Windows guest.
However, I would be interested in understanding *why* using the card in
lin
That's good to know; I want to re-enable my Nvidia sound card as well.
Note: when you update the video card driver, it will disable the MSI
interrupt, so you will have to re-enable it.
--
I enabled MSI interrupts, and now for 2 nights in a row I gamed 2 hours
straight and shut down the Windows VM without a freeze. Never in my 7
months of living with this bug have I gotten no freeze twice in a row. I
think the MSI interrupts have fixed it for me, and no, I did not remove
my HDMI soun
Apparently passthrough devices work better when using an MSI interrupt
instead of a traditional interrupt.
See post 32 https://bugs.launchpad.net/qemu/+bug/1580459/comments/32
item 2.
2. I enabled MSI Interrupts on the GPU using this URL as my reference.
http://lime-technology.com/wiki/index.
What are MSI interrupts and how did you configure your card to use them?
--
My system has been behaving well the last couple of weeks. I can reboot
at will with no lockups. I am still not passing the NVIDIA sound card
to the VM and have the GPU configured to use MSI interrupts. I am not
passing the ROM for my GTX 970 GPU.
I know this is not related but I was able to lockup
I managed to fix that issue and properly load the VM with the rom file
(what had gone wrong was it inexplicably acted like it had no hard
drives, until I restored the libvirt XML file from a backup). I got a
good test out of it: played video games in Windows for 2 hours, with the
rom file loaded. I
I got impatient and got the rom file from EVGA and loaded it in, but for
me and my GTX 960, I get no graphical output when it's loaded. I don't
know anything beyond that. I don't get any error messages in dmesg or
anything--just no video output whatsoever. It was also strangely booting
into the Tia
I just added the romfile argument to mine, will report back later
tonight. (Don't want to reboot now, as my machine will hang and I'm at
work)
--
Can someone else please confirm that? I can't test it because nouveau
doesn't support the GTX 960 yet. If it turns out solid, then I could
just ask EVGA support for the rom file.
--
FYI I had a similar issue years ago until I figured out that adding the
VGA ROM file fixes it, e.g.:
-device vfio-
pci,host=04:00.0,bus=root.1,multifunction=on,x-vga=on,addr=0.0,romfile=Sapphire.R7260X.1024.131106.rom
For Radeon, you can look in /sys; e.g. we see
/sys/devices/pci:00/:00
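Rather than asking the vendor for a ROM file, the host can often read it straight off the card: the kernel exposes the expansion ROM through sysfs, gated behind writing 1 to the `rom` attribute first. A sketch (the BDF in the usage comment is a placeholder; the sysfs root is parameterized only so the helper can be exercised against a fake tree):

```shell
#!/bin/sh
# dump_rom BDF OUTFILE: read a PCI device's expansion ROM via sysfs.
# The kernel requires writing 1 to 'rom' before reads are allowed.
dump_rom() {
    dev="${PCI_SYSFS:-/sys/bus/pci/devices}/$1"
    echo 1 > "$dev/rom"      # enable ROM reads
    cat "$dev/rom" > "$2"
    echo 0 > "$dev/rom"      # disable again
}

# As root, e.g.: dump_rom 0000:04:00.0 /tmp/gpu.rom
```

The resulting file can then be handed to the `romfile=` option shown above. Note this may fail if the GPU has already been initialized as the boot display.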
I haven't been using USB host passthrough this whole time, as my PCI USB 3
card covers that need pretty well. Speaking of those cards, for those of
you who also use one, does it work perfectly? If so, I'd like to know
its model so I can go buy it, because while my card works, about 50% of
the time I
So guys, new information.
I was having trouble getting the HTC Vive passed through in host mode.
The thing shows up as 10+ devices! I also have some Logitech webcams that
don't seem to work via USB host passthrough. So I gave Windows my entire
USB controller (only one for all my ports on this mobo). S
SYSLINUX.CFG
default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
kernel /bzimage
append isolcpus=4,16,5,17,6,18,7,19,8,20,9,21,10,22,11,23
pci-stub.ids=1b6f:7052,10de:13c2,10de:0fbb intel_iommu=on iommu=pt
vfio_iommu_type1.allow_unsafe_interrupts=1
Current VM Config
csmccarronwx00
82c5e4f6-6991-cd5f-8207-49db04386cc9
csmccarronwx00 i440fx-2.5 OVMF
10485760
10485760
12
/machine
hvm
/usr/share/qemu/ovmf-x64/OVMF_CODE-p
Well for now my issue is resolved. This morning when I was shutting
down my unRaid server to blacklist the Intel sound module, snd-hda-intel,
I first stopped my Ubuntu VM and my two Dockers, then logged out of
unRaid. I then proceeded to shut down my Windows 10 VM and, like magic,
it shut down nicely
If your Windows VM does and always has a sound card being passed in
(like the .1 address of your video card), then we can't know for sure
that you don't have that other bug. In that other bug, you can fix the
crash by not passing in any sound cards, real or virtual, to the VM.
It's definitely not t
I will try and blacklist the sound module in the unRaid kernel. Waiting
on instructions on how to do it.
Chris
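Since Chris is waiting on instructions, here is the generic shape of the fix as I understand it (an assumption on my part: whether stock unRaid honors modprobe.d at all, or instead needs the blacklist on the syslinux append line, is unRaid-specific and not confirmed in this thread):

```shell
#!/bin/sh
# blacklist_module NAME DIR: drop a modprobe blacklist entry into DIR
# (normally /etc/modprobe.d) so the host never loads the module.
blacklist_module() {
    printf 'blacklist %s\n' "$1" > "$2/blacklist-$1.conf"
}

# As root: blacklist_module snd_hda_intel /etc/modprobe.d
# Rebuild the initramfs if the module is baked into it, e.g.:
#   update-initramfs -u    # Debian/Ubuntu
#   mkinitcpio -P          # Arch
```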
--
I'm not really sure what the other similar bug was, but what I was
experiencing was a Win10 VM locking up the host machine upon shutdown of
the VM after several minutes of gaming (or even several hours of
youtube/netflix). It didn't happen all of the time, but most of the
time after the VM had be
Hm. Sound was the issue in that other bug. Have you already confirmed
that you don't have that other, similar bug? If you undo all the other
fixes you've done, including enabling SND again, does the VM still crash
if you have NO sound device assigned to it at all, whether it be a pass-
thru device
I have been able to stop this from happening by recompiling my kernel
without SND support. If you can live without sound in your host (it is
still there in your guest if you pass through the sound device of your
card) then try removing SND support from your hosts kernel. You can
also try blacklis
I know it didn't with the GTX 660. It worked perfectly fine. But, I went
fully into Steam streaming everything before I got the 960, so the 960
could have that issue for all I know.
--
Jimi, does your HDMI sound lag? I am using a USB sound card and tried
switching to the GTX 970's sound, and I got horrible lag; it sounds like
audio in slow motion. It was completely unusable.
Chris
--
Unsure how to edit a post.
Also wanted to say, I can provide BIOS settings later, and any kernel
logs if anyone wants. I wanted to note, though, that I am using UEFI with
GPT-style partitioning. I'm using btrfs for the host fs, and OVMF for
guests (see package list in my system info for versioning). Gues
I've got the same issue. Pretty much just as it has been described by
everyone else. Same on shutdown or certain events. Same for delay.
Similar setups and hardware/software. (X99, Arch, Qemu, libvirt, pcie
passthrough, windows 10, etc...) I've attached my system info (Hardware,
lscpu, Archlinux pa
Well, that's a bunch more stuff ruled out. My host is a BIOS with MBR
partitioning, using ext4, and the images are all raw. For each guest,
there's an image of the OS (so the C: drive on Windows and the /
partition on Linux) on my SSD, and Windows also has a bigger image on my
HDD (drive D:). I don
Has anyone found a way to shutdown/restart the VM without causing a
system lockup or is this just the way it is until a fix is found?
--
Remember, I think we've done enough testing to know that it isn't
specifically the VM shutting down that causes this, but the binding or
unbinding of PCI devices in sysfs, which is something a VM will do on
shutdown if you're passing hardware into it. It *is* caused by the VM
running for more than
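If the trigger really is the sysfs unbind rather than the shutdown itself, it should be reproducible without stopping a VM at all, by cycling the device manually after the host has been up for a while. A sketch (the BDF in the usage comment is a placeholder; the driver path is a parameter only so the helper can be tested against a dummy directory):

```shell
#!/bin/sh
# rebind BDF DRVDIR: detach a PCI device from its driver and reattach it,
# which is roughly what happens under the hood on VM teardown and startup.
rebind() {
    echo "$1" > "$2/unbind"
    echo "$1" > "$2/bind"
}

# As root, against the passed-through GPU, e.g.:
#   rebind 0000:04:00.0 /sys/bus/pci/drivers/vfio-pci
```

If the host locks up on the unbind alone, that would narrow this down considerably.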
I am not having any issues with my drives during normal operation on the
server. I only see the ata errors when the system locks up.
If there is something I can do please let me know. I have been trying
to figure this out for over a month now but have had no luck.
--
Additional syslog image
** Attachment added: "20160527_182715.jpg"
https://bugs.launchpad.net/qemu/+bug/1580459/+attachment/4672400/+files/20160527_182715.jpg
--
Additional syslog image
** Attachment added: "20160527_182702.jpg"
https://bugs.launchpad.net/qemu/+bug/1580459/+attachment/4672388/+files/20160527_182702.jpg
--
I have tried everything to keep it from happening but have had no
success. The likelihood of an entire system lockup depends on how long
the Win 10 VM has been on. I personally have not timed it, but usually I
can shutdown/restart without problems for about an hour, maybe more.
My Ubuntu VM is not e
Additional syslog image
** Attachment added: "20160527_182718.jpg"
https://bugs.launchpad.net/qemu/+bug/1580459/+attachment/4672401/+files/20160527_182718.jpg
--
Additional syslog image
** Attachment added: "20160527_182710.jpg"
https://bugs.launchpad.net/qemu/+bug/1580459/+attachment/4672389/+files/20160527_182710.jpg
--
I have 2 running virtual machines.
1. Ubuntu Server 16.04 acting as a headless game server
2. Windows 10 Pro used for gaming and other daily activities
I too can start/stop the Win 10 VM for a period of time after a cold boot, but
if it is logged in for a certain period of time, when I go to sh
I am having the exact same issue!
My Setup:
Model: unRaid 6.2 Beta
M/B: ASUSTeK Computer INC. - Z8P(N)E-D12(X)
CPU: Intel® Xeon® CPU X5690 @ 3.47GHz
HVM: Enabled
IOMMU: Enabled
Cache: 384 kB, 1536 kB, 12288 kB
Memory: 32768 MB (max. installable capacity 96 GB)
Network: bond0: fault-tolerance (act
Well, now we finally know that it isn't the i7-5820K's or X99 chipset's
or LGA 2011 socket's faults.
--
OK, I figured out how to delete it.
--
Whoops, I clicked the wrong button and added the wrong thing for Arch
Linux, and I don't know how to delete it. (new to launchpad here)
** Also affects: archlinux-lp
Importance: Undecided
Status: New
** Also affects: archlinux
Importance: Undecided
Status: New
** Changed in:
I see, it's definitely the same issue then.
Could it be something to do with our hardware unbinding and binding pci
devices or something of the sort? I sort of doubt it, but it is strange
that someone else with a different CPU/mobo combo hasn't reported this
problem yet.
That being said, we have a
** Also affects: debian
Importance: Undecided
Status: New
--
I think this is what's happening to me on my windows 8.1 vm although it
might be slightly different.
Just about everything you guys talked about applies, except I don't have
to shut down for it to freeze up in my case (although if it's on for long
enough and I shut it off, it freezes). It freezes up o
I doubt you have a different issue. My VM has randomly hung my computer
without a shutdown a few times during the life of this bug, and there
are two very possible ways it could happen: the VM suddenly crashed,
creating a situation similar to it shutting down, or something in
your host caused some
Also, yeah, the Linux one is called SteamOS, but it is actually just an
almost identical install of Arch. SteamOS wasn't playing nice with most
of my hardware when I tried to install it.
** Attachment added: "SteamOS.xml"
https://bugs.launchpad.net/qemu/+bug/1580459/+attachment/4667053/+files/
I should also post my "scripts" (libvirt XML files in my case):
But, since the Windows VM and Linux VM are completely identical beyond
the OS that's installed, I don't think our VM configurations have
anything to do with this bug. I mean, they aren't completely identical
right now because I remove
Here is my startup script.
#!/bin/bash
echo "Starting virtual machine..."
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
sudo \
qemu-system-x86_64 \
-name "Windows 10" \
-enable-kvm \
-m 12288 \
-cpu host,kvm=off \
-smp threads=2,cores=4,sockets=1 \
-vga none \
Oh, I should post my hardware:
i7-5820K (also) (4/6 cores (8/12 threads) being passed to VMs)
12GB RAM (also) (8GB being passed to VMs)
MSI X99 SLI Plus (though I don't use SLI)
NVidia GTX 960 2GB pass-thru (also had this problem on a GTX 660 before that
died)
GT 740 host card, using nouveau when
** Also affects: fedora
Importance: Undecided
Status: New
--
I am seeing this issue on Arch also. I also tried Fedora 24 to see if it
was an Arch-only issue.
If I start a VM and stop it shortly after everything works fine.
If I start a VM and game for a while, on VM shutdown the host will
totally lock. Tailing the journal to see if anything gets logged sho
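One reason tailing the journal may show nothing: many distros keep the journal in RAM by default, so whatever journald buffered is lost in the hard lock. journald switches to persistent storage automatically once /var/log/journal exists, so a one-time setup helps (assumes a systemd host; the directory argument exists only for testing):

```shell
#!/bin/sh
# ensure_persistent_journal [DIR]: create the directory that makes journald
# use persistent on-disk storage instead of volatile /run.
ensure_persistent_journal() {
    mkdir -p "${1:-/var/log/journal}"
}

# As root: ensure_persistent_journal && systemctl restart systemd-journald
# After the next freeze, read the previous boot's tail:
#   journalctl -b -1 -e
```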