Seeing a problem in multi-hw-thread runs where memory-mapped PCIe device register reads are returning incorrect values using QEMU 4.2

2020-07-13 Thread Mark Wood-Patrick
Background
==========

I have a test environment which runs QEMU 4.2 with a plugin that runs two
copies of a PCIe device simulator on an Ubuntu 18.04 or CentOS 7.5 host, with
an Ubuntu 18.04 guest.

When running with a single QEMU hw thread/CPU using:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on

our tests run fine.

But our tests fail when running with multiple hw threads/CPUs, in any of the
following configurations (a full command-line sketch follows the list):

2 cores, 1 thread per core (2 hw threads/CPUs):

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

1 core, 2 threads per core (2 hw threads/CPUs):

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=1

2 cores, 2 threads per core (4 hw threads/CPUs):

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 4,sockets=1,cores=2
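
For reference, a sketch of what a complete invocation for the 2-core case
might look like; the binary name, memory size and disk image here are
placeholders rather than values taken from the failing setup:

    qemu-system-x86_64 -enable-kvm -m 4096 \
        -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
        -device intel-iommu,intremap=on \
        -smp 2,sockets=1,cores=2 \
        -drive file=ubuntu-18.04.qcow2,if=virtio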

In all of these multi-hw-thread configurations, the values returned are
correct all the way up the call stack, including at KVM_EXIT_MMIO in
kvm_cpu_exec (qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value delivered to
the device driver which initiated the read is 0.
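
For reference, one way to inspect what QEMU is handing back at that point is
to attach gdb to the running qemu-system-x86_64 and look at the kvm_run mmio
payload; this is only a sketch, and it assumes a build with debug symbols and
that the local variable in kvm_cpu_exec is still called run:

    gdb -p $(pidof qemu-system-x86_64)
    (gdb) break kvm-all.c:2365
    (gdb) continue
    (gdb) print /x run->mmio.phys_addr
    (gdb) print run->mmio.len
    (gdb) x/8xb run->mmio.data

With -smp greater than 1 the breakpoint can fire on any vCPU thread, so it is
worth checking which thread stopped (info threads) before trusting the values.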

I'm currently testing this issue on:

Ubuntu 18.04.4 LTS
Kernel: 4.15.0-108-generic
KVM Version: 1:2.11+dfsg-1ubuntu7.28

and on:

CentOS 7.5.1804
Kernel: 4.14.78-7.x86_64
KVM Version: 1.5.3

and I'm seeing the same issue in both cases.

Questions
=========

I have the following questions:

Is anyone else running QEMU 4.2 in multi-hw-thread/CPU mode?

Is anyone getting incorrect reads from memory-mapped device registers when
running in this mode?

Does anyone have any pointers on how best to debug the flow from
KVM_EXIT_MMIO back to the device driver running on the guest?
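
For the last question, the only idea I have so far is to enable QEMU's
built-in trace events on both sides of the dispatch and compare them with
what the driver logs in the guest. A rough sketch; the event names
(kvm_run_exit, memory_region_ops_read) and the default log trace backend are
assumptions about the 4.2 build rather than something I have confirmed:

    qemu-system-x86_64 <options as above> \
        -trace enable=kvm_run_exit \
        -trace enable=memory_region_ops_read \
        -D qemu-mmio-trace.log

If the traced read shows the correct value while the guest driver still sees
0, that would at least narrow the problem to the path between the memory API
and the vCPU.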



RE: Seeing a problem in multi-CPU runs where memory-mapped PCIe device register reads are returning incorrect values

2020-07-12 Thread Mark Wood-Patrick


From: Mark Wood-Patrick 
Sent: Wednesday, July 1, 2020 11:26 AM
To: qemu-devel@nongnu.org
Cc: Mark Wood-Patrick 
Subject: Seeing a problem in multi-CPU runs where memory-mapped PCIe device register reads are returning incorrect values

Background
I have a test environment which runs QEMU 4.2 with a plugin that runs two
copies of a PCIe device simulator on a CentOS 7.5 host with an Ubuntu 18.04
guest. When running with a single QEMU CPU using:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on

our tests run fine. But when running with multiple CPUs:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

the values returned are correct all the way up the call stack, including at
KVM_EXIT_MMIO in kvm_cpu_exec (qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the
value delivered to the device driver which initiated the read is 0.

Question
Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone getting
incorrect reads from memory-mapped device registers when running in this
mode? I would appreciate any pointers on how best to debug the flow from
KVM_EXIT_MMIO back to the device driver running on the guest.



Seeing a problem in multi-CPU runs where memory-mapped PCIe device register reads are returning incorrect values

2020-07-01 Thread Mark Wood-Patrick
Background
I have a test environment which runs QEMU 4.2 with a plugin that runs two
copies of a PCIe device simulator on a CentOS 7.5 host with an Ubuntu 18.04
guest. When running with a single QEMU CPU using:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on

our tests run fine. But when running with multiple CPUs:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

the values returned are correct all the way up the call stack, including at
KVM_EXIT_MMIO in kvm_cpu_exec (qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the
value delivered to the device driver which initiated the read is 0.

Question
Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone getting
incorrect reads from memory-mapped device registers when running in this
mode? I would appreciate any pointers on how best to debug the flow from
KVM_EXIT_MMIO back to the device driver running on the guest.



[Qemu-devel] [Bug 1298442] [NEW] build problem in qemu-2.0.0-rc0: No rule to make target `trace/generated-events.h'

2014-03-27 Thread Mark Wood-Patrick
Public bug reported:

With qemu-2.0.0-rc0 on CentOS release 5.7 (Final) I get:

make: *** No rule to make target `trace/generated-events.h', needed by
`Makefile'.  Stop.
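
The first thing I plan to try is a full clean and reconfigure, in case the
generated tracing headers simply were not regenerated in this tree; this is a
guess at a workaround rather than a confirmed fix, and the target list below
is just an example:

    cd qemu-2.0.0-rc0
    make distclean
    ./configure --target-list=x86_64-softmmu
    make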

** Affects: qemu
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1298442
