Re: Emulating Solaris 10 on SPARC64 sun4u

2020-05-09 Thread jasper.lowell
Good idea.

The ESCC device looks like it's written as a sysbus device. I think the
Ultra 5 has no devices on the root sabre bus. The serial controller is
behind the ebus (essentially isa). I'm guessing I would need to write a
wrapper device around the memory io functions so that it can be used
under the ebus - judging from the serial-isa implementation.

I think it might be possible to leave the ESCC as a sysbus device but
I'm not familiar enough with OpenBIOS to expose the right information
to Solaris and reason confidently about what's happening. I don't
expect writing a wrapper for isa to be difficult so I'll give that a
go. Do you think it would be fine to just choose an arbitrary ioport
address for the device?

Thanks,
Jasper Lowell.


On Fri, 2020-05-08 at 15:45 +0200, Artyom Tarasenko wrote:
> On Fri, May 8, 2020 at 10:51 AM Peter Tribble <
> peter.trib...@gmail.com> wrote:
> > I see the same behaviour as reported here when booting current
> > SPARC illumos
> > (illumos is the ongoing fork of OpenSolaris) under qemu - looks
> > like it's booted
> > up fine, but I can't type anything on the console.
> 
> There is one more option which can be relatively easily tested:
> you can add two more ports (or replace the existing ttya/ttyb) to the
> qemu's ultra5 model: the escc (aka zs) ports.
> They definitely work under Solaris (Ultra-1, Ultra-2, E3000,
> E3500...), I've seen it.
> OpenBIOS already has a driver for them which is used for sun4m (and
> some ppc) targets. So no new devices have to be implemented.
> If your and Jasper's theory proves to be correct, we can think of
> replacing ttya/ttyb su with zs.
> I guess the other sun4u operating systems do support zs as well.
> (This
> has to be tested though)
> 
> 
> > While I'm an illumos developer, and maintain it on SPARC, I'm
> > unfamiliar with
> > most of the internals. We'll try and have a poke around, though.
> > 
> > (The aim would be to use qemu to allow illumos developers to test
> > changes on SPARC
> > without requiring them to have SPARC hardware, hence my interest.)
> 
> I think I managed to boot Tribblix under the QEMU niagara target. It
> has fewer devices than sun4u, but the console was working.
> 
> Regards,
> Artyom
> 
> > On Fri, May 8, 2020 at 3:40 AM  wrote:
> > > There are two different drivers for the 16550A in OpenSolaris.
> > > 
> > > There is a generic driver in /usr/src/uts/common/io/asy.c. This
> > > driver
> > > clearly states in comments that it is assigning the device to
> > > tty[a-d].
> > > It's really obvious to me that there is support in this driver
> > > for
> > > using the device for a tty.
> > > 
> > > There is another driver at /usr/src/uts/sun4/io/su_driver.c.
> > > Judging
> > > from the name this is specific for SuperIO. This driver is more
> > > than
> > > 1000 lines shorter than the other driver and contains a note at
> > > the top
> > > of the file that it is "modified as sparc keyboard/mouse driver."
> > > I
> > > don't have much experience with Solaris internals but I can't see
> > > any
> > > obvious signs that this can be used as a tty. I'd also question
> > > why
> > > there are two drivers if they have the same purpose/capability.
> > > If the
> > > su_driver exists to only add support for the keyboard/mouse I'm
> > > not
> > > sure why it would be shorter rather than longer. It would be
> > > helpful if
> > > someone with Solaris experience, that knows what they're doing (I
> > > do
> > > not), can take a look at this driver and give a better opinion.
> > > 
> > > When emulating Sun4u, the su driver is used. If you misconfigure
> > > the
> > > serial device in QEMU, you can trigger errors that are unique to
> > > the
> > > driver. Also, Solaris attaches the device as su0.
> > > 
> > > Solaris can access the registers, identify/attach the device, but
> > > it
> > > just never attempts to use it. Interrupts are explicitly disabled
> > > and
> > > the device is never polled. I don't think asyopen in su_driver.c
> > > is
> > > ever called. I've thoroughly stepped through every register
> > > access for
> > > the 16550A and nothing seems out of the ordinary to me. Something
> > > I've
> > > considered is that OpenBIOS is not setting properties for the
> > > device in
> > > a way that Solaris expects but I'm not familiar enough with Sun's
> > > OpenBoot to make progress here. I've read through the PROM
> > > functions
> > > that Solaris calls when choosing what device to use as a console
> > > and
> > > nothing seemed inconsistent with what OpenBIOS does.
> > > Unfortunately,
> > > OpenBIOS seems to crash or hang when using the kernel debugger
> > > module,
> > > so dynamic analysis using breakpoints isn't an option.
> > > 
> > > I don't have any concrete explanations for the broken console but
> > > rather than do nothing I figured I'd see what happens by
> > > emulating the
> > > SAB 82532.
> > > 
> > > > At least Fujitsu Siemens PRIMEPOWER250 had a tty attached to
> > > > the
> > > > 16550A UART. I 

[Bug 1876678] Re: Ubuntu 20.04 KVM / QEMU Failure with nested FreeBSD bhyve

2020-05-09 Thread John Hartley
Hi Ubuntu / KVM Maintainers,

I have now done additional diagnostics on this bug and it appears to be
triggered in the nested virtualization case when APIC virtualisation is
available in the Layer 0 HW and then passed forward to the Layer 1 VM via
libvirt:  .

Testing found that the Layer 1 FreeBSD host had this feature; see
"VID,PostIntr" in "VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr"
in the CPU features below:

<>
...
...
CPU: Intel Core Processor (Broadwell, IBRS) (2600.09-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x306d2  Family=0x6  Model=0x3d  Stepping=2
  
Features=0xf83fbff
  
Features2=0xfffa3223
  AMD Features=0x2c100800
  AMD Features2=0x121
  Structured Extended 
Features=0x1c0fbb
  Structured Extended Features2=0x4
  Structured Extended Features3=0xac000400
  XSAVE Features=0x1
  IA32_ARCH_CAPS=0x8
  AMD Extended Feature Extensions ID EBX=0x1001000
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
Hypervisor: Origin = "KVMKVMKVM"
...
...
>

In my case, with the Intel Broadwell chipset, this feature is available.
On a desktop "Core i5-8250U" chip it reports as:

VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID

For that HW, the nested case:
Layer 0 - Ubuntu 20.04, Layer 1 - FreeBSD 12.1 with bhyve, Layer 2 - FreeBSD 
12.1
works.

Workaround is to disable APIC virtual interrupt delivery:

1. Add entry into Layer 1 - FreeBSD Guest / Host: /boot/loader.conf:
hw.vmm.vmx.use_apic_vid=0

2. Reboot

3. Check via sysctl that virtual_interrupt_delivery is disabled:
# sysctl hw.vmm.vmx.cap.virtual_interrupt_delivery
hw.vmm.vmx.cap.virtual_interrupt_delivery: 0  <- should be zero


The question is:

While FreeBSD triggers this bug, is this a KVM issue or a FreeBSD bhyve
one?

In doing some searching on the Web, I see that there is already work
being done in KVM 5.6 around APIC virtualisation and its handling, so I
am not sure whether this is a known problem:
https://events19.linuxfoundation.org/wp-content/uploads/2017/12
/Improving-KVM-x86-Nested-Virtualization-Liran-Alon-Oracle.pdf

APIC virtualisation support was introduced in FreeBSD 11.0, back in Sept
2016:

https://www.freebsd.org/releases/11.0R/relnotes.html#hardware-
virtualization

Thanks to Peter Graham on the FreeBSD virtualization bug tracker for
helping to find the source of the problem.

Should this bug go to KVM / QEMU upstream?

Cheers,

John Hartley.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1876678

Title:
  Ubuntu 20.04 KVM / QEMU Failure with nested FreeBSD bhyve

Status in QEMU:
  New

Bug description:
  BUG:

  Starting a FreeBSD Layer 2 bhyve guest within a Layer 1 FreeBSD VM host on
  a Layer 0 Ubuntu 20.04 KVM / QEMU host results in the Layer 1 Guest / Host
  pausing with "Emulation Failure".

  TESTING:

  My test scenario is nested virtualisation:
  Layer 0 - Ubuntu 20.04 Host
  Layer 1 - FreeBSD 12.1 with OVMF + bhyve hypervisor Guest/Host
  Layer 2 - FreeBSD 12.1 guest

  Layer 0 Host is: Ubuntu 20.04 LTS KVM / QEMU / libvirt

  <>
  $ virsh -c qemu:///system version --daemon
  Compiled against library: libvirt 6.0.0
  Using library: libvirt 6.0.0
  Using API: QEMU 6.0.0
  Running hypervisor: QEMU 4.2.0
  Running against daemon: 6.0.0
  <

  <>
  $ cat /proc/cpuinfo | grep -c vmx
  64
  $ cat /sys/module/kvm_intel/parameters/nested
  Y
  <>


  Layer 1 Guest / Host is: FreeBSD Q35 v4.2 with OVMF:

  Pass Host VMX support to Layer 1 Guest via hvm
  /usr/share/OVMF/OVMF_CODE.fd
  /home/USER/swarm.bhyve.freebsd/OVMF_VARS.fd


  
  
  


  ...
  ...
  >

  Checked that the Layer 1 - FreeBSD Guest / Host has the VMX feature available:

  <>
  # uname -a
  FreeBSD swarm.DOMAIN.HERE 12.1-RELEASE FreeBSD 12.1-RELEASE GENERIC  amd64

  # grep Features /var/run/dmesg.boot 

Features=0xf83fbff

Features2=0xfffa3223
AMD Features=0x2c100800
AMD Features2=0x121
Structured Extended 
Features=0x1c0fbb
Structured Extended Features2=0x4
Structured Extended Features3=0xac000400
XSAVE Features=0x1
  <

  On the Layer 1 FreeBSD Guest / Host, start up the Layer 2 guest...

  <>
  # ls
  FreeBSD-11.2-RELEASE-amd64-bootonly.iso   
FreeBSD-12.1-RELEASE-amd64-dvd1.iso bee-hd1-01.img
  # /usr/sbin/bhyve -c 2 -m 2048 -H -A -s 0:0,hostbridge -s 1:0,lpc -s 
2:0,e1000,tap0 -s 3:0,ahci-hd,bee-hd1-01.img -l com1,stdio -s 
5:0,ahci-cd,./FreeBSD-12.1-RELEASE-amd64-dvd1.iso bee
  <>

  The result is that the Layer 1 - FreeBSD Guest / Host is "paused".

  As the Layer 1 machine freezes, I cannot get any further diagnostics from
  it, so I run tail on the libvirt log from the Layer 0 - Ubuntu Host:

  <>
  char device redirected to /dev/pts/29 (label charserial0)
  2020-05-04T06:09:15.310474Z qemu-system-x86_64: warning: host doesn't support 
requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
  2020-05-04T06:09:15.310531Z qemu-system-x86_64: warning: host doesn't support 
requested feature: 

[Qemu-devel][PATCH v5 4/4] x86/cpu: Add user space access interface for CET MSRs

2020-05-09 Thread Yang Weijiang
Added interface for CET MSR_IA32_{U,S}_CET, MSR_IA32_PL{0,1,2,3}_SSP,
MSR_IA32_INTR_SSP_TBL and MSR_KVM_GUEST_SSP save/restore. Check that the
corresponding CET features are available before accessing the MSRs.

Signed-off-by: Yang Weijiang 
---
 target/i386/cpu.h |  18 +
 target/i386/kvm.c |  73 +++
 target/i386/machine.c | 161 ++
 3 files changed, 252 insertions(+)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index ed03cd1760..51577a04ca 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -480,6 +480,15 @@ typedef enum X86Seg {
 #define MSR_IA32_VMX_TRUE_ENTRY_CTLS 0x0490
 #define MSR_IA32_VMX_VMFUNC 0x0491
 
+#define MSR_IA32_U_CET  0x6a0
+#define MSR_IA32_S_CET  0x6a2
+#define MSR_IA32_PL0_SSP0x6a4
+#define MSR_IA32_PL1_SSP0x6a5
+#define MSR_IA32_PL2_SSP0x6a6
+#define MSR_IA32_PL3_SSP0x6a7
+#define MSR_IA32_SSP_TBL0x6a8
+#define MSR_KVM_GUEST_SSP   0x4b564d06
+
 #define XSTATE_FP_BIT   0
 #define XSTATE_SSE_BIT  1
 #define XSTATE_YMM_BIT  2
@@ -1567,6 +1576,15 @@ typedef struct CPUX86State {
 
 uintptr_t retaddr;
 
+uint64_t u_cet;
+uint64_t s_cet;
+uint64_t pl0_ssp;
+uint64_t pl1_ssp;
+uint64_t pl2_ssp;
+uint64_t pl3_ssp;
+uint64_t ssp_tbl;
+uint64_t guest_ssp;
+
 /* Fields up to this point are cleared by a CPU reset */
 struct {} end_reset_fields;
 
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 4901c6dd74..0735981558 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2979,6 +2979,31 @@ static int kvm_put_msrs(X86CPU *cpu, int level)
 }
 }
 
+if (((env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) ||
+(env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_CET_IBT)) &&
+(env->features[FEAT_XSAVES_LO] & XSTATE_CET_U_MASK)) {
+kvm_msr_entry_add(cpu, MSR_IA32_U_CET, env->u_cet);
+kvm_msr_entry_add(cpu, MSR_IA32_PL3_SSP, env->pl3_ssp);
+}
+
+if (env->features[FEAT_XSAVES_LO] & XSTATE_CET_S_MASK) {
+if (env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) {
+kvm_msr_entry_add(cpu, MSR_IA32_PL0_SSP, env->pl0_ssp);
+kvm_msr_entry_add(cpu, MSR_IA32_PL1_SSP, env->pl1_ssp);
+kvm_msr_entry_add(cpu, MSR_IA32_PL2_SSP, env->pl2_ssp);
+kvm_msr_entry_add(cpu, MSR_IA32_SSP_TBL, env->ssp_tbl);
+}
+
+if (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_CET_IBT) {
+kvm_msr_entry_add(cpu, MSR_IA32_S_CET, env->s_cet);
+}
+}
+
+if ((env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) &&
+(env->features[FEAT_XSAVES_LO] & (XSTATE_CET_U_MASK |
+XSTATE_CET_S_MASK)))
+kvm_msr_entry_add(cpu, MSR_KVM_GUEST_SSP, env->guest_ssp);
+
 return kvm_buf_set_msrs(cpu);
 }
 
@@ -3295,6 +3320,30 @@ static int kvm_get_msrs(X86CPU *cpu)
 }
 }
 
+if (((env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) ||
+(env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_CET_IBT)) &&
+(env->features[FEAT_XSAVES_LO] & XSTATE_CET_U_MASK)) {
+kvm_msr_entry_add(cpu, MSR_IA32_U_CET, 0);
+kvm_msr_entry_add(cpu, MSR_IA32_PL3_SSP, 0);
+}
+
+if (env->features[FEAT_XSAVES_LO] & XSTATE_CET_S_MASK) {
+if (env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) {
+kvm_msr_entry_add(cpu, MSR_IA32_PL0_SSP, 0);
+kvm_msr_entry_add(cpu, MSR_IA32_PL1_SSP, 0);
+kvm_msr_entry_add(cpu, MSR_IA32_PL2_SSP, 0);
+kvm_msr_entry_add(cpu, MSR_IA32_SSP_TBL, 0);
+}
+
+if (env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_CET_IBT) {
+kvm_msr_entry_add(cpu, MSR_IA32_S_CET, 0);
+}
+}
+if ((env->features[FEAT_7_0_ECX] & CPUID_7_0_ECX_CET_SHSTK) &&
+(env->features[FEAT_XSAVES_LO] & (XSTATE_CET_U_MASK |
+XSTATE_CET_S_MASK)))
+kvm_msr_entry_add(cpu, MSR_KVM_GUEST_SSP, 0);
+
 ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, cpu->kvm_msr_buf);
 if (ret < 0) {
 return ret;
@@ -3578,6 +3627,30 @@ static int kvm_get_msrs(X86CPU *cpu)
 case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
 env->msr_rtit_addrs[index - MSR_IA32_RTIT_ADDR0_A] = msrs[i].data;
 break;
+case MSR_IA32_U_CET:
+env->u_cet = msrs[i].data;
+break;
+case MSR_IA32_S_CET:
+env->s_cet = msrs[i].data;
+break;
+case MSR_IA32_PL0_SSP:
+env->pl0_ssp = msrs[i].data;
+break;
+case MSR_IA32_PL1_SSP:
+env->pl1_ssp = msrs[i].data;
+break;
+case MSR_IA32_PL2_SSP:
+env->pl2_ssp = msrs[i].data;
+break;
+case MSR_IA32_PL3_SSP:
+env->pl3_ssp = msrs[i].data;
+   

[Qemu-devel][PATCH v5 2/4] x86/cpuid: Add XSAVES feature words and CET related state bits

2020-05-09 Thread Yang Weijiang
CET SHSTK/IBT MSRs can be saved/restored with XSAVES/XRSTORS, but
currently the related feature words are not supported, so add the
new entries. XSAVES/XRSTORS always use the compacted storage format,
which means the supervisor states' offsets are always 0; ignore them
while calculating the standard-format storage size.

Signed-off-by: Zhang Yi 
Signed-off-by: Yang Weijiang 
---
 target/i386/cpu.c | 38 --
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 90ffc5f3b1..3174e05482 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -965,7 +965,7 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
 .type = CPUID_FEATURE_WORD,
 .feat_names = {
 NULL, "avx512vbmi", "umip", "pku",
-NULL /* ospke */, "waitpkg", "avx512vbmi2", NULL,
+NULL /* ospke */, "waitpkg", "avx512vbmi2", "shstk",
 "gfni", "vaes", "vpclmulqdq", "avx512vnni",
 "avx512bitalg", NULL, "avx512-vpopcntdq", NULL,
 "la57", NULL, NULL, NULL,
@@ -988,7 +988,7 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
 NULL, NULL, "md-clear", NULL,
 NULL, NULL, NULL, NULL,
 NULL, NULL, NULL /* pconfig */, NULL,
-NULL, NULL, NULL, NULL,
+"ibt", NULL, NULL, NULL,
 NULL, NULL, "spec-ctrl", "stibp",
 NULL, "arch-capabilities", "core-capability", "ssbd",
 },
@@ -1069,6 +1069,26 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] 
= {
 },
 .tcg_features = TCG_XSAVE_FEATURES,
 },
+/* Below are xsaves feature words */
+[FEAT_XSAVES_LO] = {
+.type = CPUID_FEATURE_WORD,
+.cpuid = {
+.eax = 0xD,
+.needs_ecx = true,
+.ecx = 1,
+.reg = R_ECX,
+},
+.migratable_flags = XSTATE_CET_U_MASK,
+},
+[FEAT_XSAVES_HI] = {
+.type = CPUID_FEATURE_WORD,
+.cpuid = {
+.eax = 0xD,
+.needs_ecx = true,
+.ecx = 1,
+.reg = R_EDX
+},
+},
 [FEAT_6_EAX] = {
 .type = CPUID_FEATURE_WORD,
 .feat_names = {
@@ -1455,6 +1475,14 @@ static const ExtSaveArea x86_ext_save_areas[] = {
   { .feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_PKU,
 .offset = offsetof(X86XSaveArea, pkru_state),
 .size = sizeof(XSavePKRU) },
+[XSTATE_CET_U_BIT] = {
+.feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_CET_SHSTK,
+.offset = 0 /*supervisor mode component, offset = 0 */,
+.size = sizeof(XSavesCETU) },
+[XSTATE_CET_S_BIT] = {
+.feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_CET_SHSTK,
+.offset = 0 /*supervisor mode component, offset = 0 */,
+.size = sizeof(XSavesCETS) },
 };
 
 static uint32_t xsave_area_size(uint64_t mask)
@@ -1465,6 +1493,9 @@ static uint32_t xsave_area_size(uint64_t mask)
 for (i = 0; i < ARRAY_SIZE(x86_ext_save_areas); i++) {
        const ExtSaveArea *esa = &x86_ext_save_areas[i];
 if ((mask >> i) & 1) {
+if (i >= 2 && !esa->offset) {
+continue;
+}
 ret = MAX(ret, esa->offset + esa->size);
 }
 }
@@ -6008,6 +6039,9 @@ static void x86_cpu_reset(DeviceState *dev)
 }
 for (i = 2; i < ARRAY_SIZE(x86_ext_save_areas); i++) {
        const ExtSaveArea *esa = &x86_ext_save_areas[i];
+if (!esa->offset) {
+continue;
+}
 if (env->features[esa->feature] & esa->bits) {
 xcr0 |= 1ull << i;
 }
-- 
2.17.2




[Qemu-devel][PATCH v5 3/4] x86/cpuid: Add support for XSAVES dependent feature enumeration

2020-05-09 Thread Yang Weijiang
Currently XSAVES dependent features are not supported in CPUID enumeration;
update CPUID(0xD,n>=1) to enable it.

CET XSAVES related enumeration includes:
CPUID(0xD,1):ECX[bit 11]: user mode CET state, controls bit 11 in XSS.
CPUID(0xD,1):ECX[bit 12]: supervisor mode CET state, controls bit 12 in XSS.
CPUID(0xD,11): user mode CET state sub-leaf, reports the state size.
CPUID(0xD,12): supervisor mode CET state sub-leaf, reports the state size.

Signed-off-by: Zhang Yi 
Signed-off-by: Yang Weijiang 
---
 target/i386/cpu.c | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 3174e05482..881c84a3b3 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1513,6 +1513,12 @@ static inline uint64_t x86_cpu_xsave_components(X86CPU 
*cpu)
cpu->env.features[FEAT_XSAVE_COMP_LO];
 }
 
+static inline uint64_t x86_cpu_xsave_sv_components(X86CPU *cpu)
+{
+return ((uint64_t)cpu->env.features[FEAT_XSAVES_HI]) << 32 |
+   cpu->env.features[FEAT_XSAVES_LO];
+}
+
 const char *get_register_name_32(unsigned int reg)
 {
 if (reg >= CPU_NB_REGS32) {
@@ -5722,13 +5728,22 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, 
uint32_t count,
  */
 *ebx = kvm_enabled() ? *ecx : xsave_area_size(env->xcr0);
 } else if (count == 1) {
+/* ebx is updated in kvm.*/
 *eax = env->features[FEAT_XSAVE];
+*ecx = env->features[FEAT_XSAVES_LO];
+*edx = env->features[FEAT_XSAVES_HI];
 } else if (count < ARRAY_SIZE(x86_ext_save_areas)) {
 if ((x86_cpu_xsave_components(cpu) >> count) & 1) {
            const ExtSaveArea *esa = &x86_ext_save_areas[count];
 *eax = esa->size;
 *ebx = esa->offset;
 }
+if ((x86_cpu_xsave_sv_components(cpu) >> count) & 1) {
+            const ExtSaveArea *esa_sv = &x86_ext_save_areas[count];
+*eax = esa_sv->size;
+*ebx = 0;
+*ecx = 1;
+}
 }
 break;
 }
@@ -6280,8 +6295,10 @@ static void x86_cpu_enable_xsave_components(X86CPU *cpu)
 }
 }
 
-env->features[FEAT_XSAVE_COMP_LO] = mask;
+env->features[FEAT_XSAVE_COMP_LO] = mask & CPUID_XSTATE_USER_MASK;
 env->features[FEAT_XSAVE_COMP_HI] = mask >> 32;
+env->features[FEAT_XSAVES_LO] = mask & CPUID_XSTATE_KERNEL_MASK;
+env->features[FEAT_XSAVES_HI] = mask >> 32;
 }
 
 /* Steps involved on loading and filtering CPUID data
-- 
2.17.2




[Qemu-devel][PATCH v5 0/4] Enable CET support for guest

2020-05-09 Thread Yang Weijiang
Control-flow Enforcement Technology (CET) provides protection against
return/jump-oriented programming (ROP/JOP). It includes two
sub-features: Shadow Stack (SHSTK) and Indirect Branch Tracking (IBT).
This patchset enables CET-related CPUID reporting, XSAVES/XRSTORS
support, MSR access etc. for the guest.

CET KVM patches:
https://lkml.kernel.org/r/20200506082110.25441-1-weijiang.y...@intel.com

CET kernel patches:
https://lkml.kernel.org/r/20200429220732.31602-1-yu-cheng...@intel.com

v5:
  - Checked CET states before access related MSRs.
  - Added new MSR MSR_KVM_GUEST_SSP for live-migration.
  - Refactored patches to make them more structured.

v4:
  - Added MSR read/write interface for PL1_SSP/PL2_SSP.
  - Removed CET structures from X86XSaveArea.
  - Cleared ebx in return of CPUID.(EAX=d, ECX=1).
 
v3:
  - Add CET MSR save/restore support for live-migration.
 
v2:
  - In CPUID.(EAX=d, ECX=1), set return ECX[n] = 0 if bit n corresponds
to a bit in MSR_IA32_XSS.
  - In CPUID.(EAX=d, ECX=n), set return ECX = 1 if bit n corresponds
to a bit in MSR_IA32_XSS.
  - Skip supervisor mode xsave components when calculating the user mode
xsave component size in xsave_area_size() and x86_cpu_reset().

Yang Weijiang (4):
  x86/cpu: Add CET CPUID/XSAVES flags and data structures
  x86/cpuid: Add XSAVES feature words and CET related state bits
  x86/cpuid: Add support for XSAVES dependent feature enumeration
  x86/cpu: Add user space access interface for CET MSRs

 target/i386/cpu.c |  57 ++-
 target/i386/cpu.h |  53 ++
 target/i386/kvm.c |  73 +++
 target/i386/machine.c | 161 ++
 4 files changed, 341 insertions(+), 3 deletions(-)

-- 
2.17.2




[Qemu-devel][PATCH v5 1/4] x86/cpu: Add CET CPUID/XSAVES flags and data structures

2020-05-09 Thread Yang Weijiang
CET features SHSTK and IBT are enumerated via CPUID(EAX=0x7,0):ECX[bit 7]
and EDX[bit 20] respectively. Two CET bits (bit 11 and 12) are defined in
MSR_IA32_XSS to support XSAVES/XRSTORS. CPUID(EAX=0xd, 1):ECX[bit 11] and
ECX[bit 12] correspond to CET states in user and supervisor mode respectively.

Signed-off-by: Zhang Yi 
Signed-off-by: Yang Weijiang 
---
 target/i386/cpu.h | 35 +++
 1 file changed, 35 insertions(+)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index e818fc712a..ed03cd1760 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -489,6 +489,9 @@ typedef enum X86Seg {
 #define XSTATE_ZMM_Hi256_BIT6
 #define XSTATE_Hi16_ZMM_BIT 7
 #define XSTATE_PKRU_BIT 9
+#define XSTATE_RESERVED_BIT 10
+#define XSTATE_CET_U_BIT11
+#define XSTATE_CET_S_BIT12
 
 #define XSTATE_FP_MASK  (1ULL << XSTATE_FP_BIT)
 #define XSTATE_SSE_MASK (1ULL << XSTATE_SSE_BIT)
@@ -499,6 +502,19 @@ typedef enum X86Seg {
 #define XSTATE_ZMM_Hi256_MASK   (1ULL << XSTATE_ZMM_Hi256_BIT)
 #define XSTATE_Hi16_ZMM_MASK(1ULL << XSTATE_Hi16_ZMM_BIT)
 #define XSTATE_PKRU_MASK(1ULL << XSTATE_PKRU_BIT)
+#define XSTATE_RESERVED_MASK(1ULL << XSTATE_RESERVED_BIT)
+#define XSTATE_CET_U_MASK   (1ULL << XSTATE_CET_U_BIT)
+#define XSTATE_CET_S_MASK   (1ULL << XSTATE_CET_S_BIT)
+
+/* CPUID feature bits available in XCR0 */
+#define CPUID_XSTATE_USER_MASK  (XSTATE_FP_MASK | XSTATE_SSE_MASK | \
+ XSTATE_YMM_MASK | XSTATE_BNDREGS_MASK | \
+ XSTATE_BNDCSR_MASK | XSTATE_OPMASK_MASK | \
+ XSTATE_ZMM_Hi256_MASK | \
+ XSTATE_Hi16_ZMM_MASK | XSTATE_PKRU_MASK)
+
+/* CPUID feature bits available in XSS */
+#define CPUID_XSTATE_KERNEL_MASK(XSTATE_CET_U_MASK)
 
 /* CPUID feature words */
 typedef enum FeatureWord {
@@ -536,6 +552,8 @@ typedef enum FeatureWord {
 FEAT_VMX_EPT_VPID_CAPS,
 FEAT_VMX_BASIC,
 FEAT_VMX_VMFUNC,
+FEAT_XSAVES_LO, /* CPUID[EAX=0xd,ECX=1].ECX */
+FEAT_XSAVES_HI, /* CPUID[EAX=0xd,ECX=1].EDX */
 FEATURE_WORDS,
 } FeatureWord;
 
@@ -743,6 +761,8 @@ typedef uint64_t FeatureWordArray[FEATURE_WORDS];
 #define CPUID_7_0_ECX_WAITPKG   (1U << 5)
 /* Additional AVX-512 Vector Byte Manipulation Instruction */
 #define CPUID_7_0_ECX_AVX512_VBMI2  (1U << 6)
+/* CET SHSTK feature */
+#define CPUID_7_0_ECX_CET_SHSTK (1U << 7)
 /* Galois Field New Instructions */
 #define CPUID_7_0_ECX_GFNI  (1U << 8)
 /* Vector AES Instructions */
@@ -770,6 +790,8 @@ typedef uint64_t FeatureWordArray[FEATURE_WORDS];
 #define CPUID_7_0_EDX_AVX512_4VNNIW (1U << 2)
 /* AVX512 Multiply Accumulation Single Precision */
 #define CPUID_7_0_EDX_AVX512_4FMAPS (1U << 3)
+/* CET IBT feature */
+#define CPUID_7_0_EDX_CET_IBT   (1U << 20)
 /* Speculation Control */
 #define CPUID_7_0_EDX_SPEC_CTRL (1U << 26)
 /* Single Thread Indirect Branch Predictors */
@@ -1260,6 +1282,19 @@ typedef struct XSavePKRU {
 uint32_t padding;
 } XSavePKRU;
 
+/* Ext. save area 11: User mode CET state */
+typedef struct XSavesCETU {
+uint64_t u_cet;
+uint64_t user_ssp;
+} XSavesCETU;
+
+/* Ext. save area 12: Supervisor mode CET state */
+typedef struct XSavesCETS {
+uint64_t kernel_ssp;
+uint64_t pl1_ssp;
+uint64_t pl2_ssp;
+} XSavesCETS;
+
 typedef struct X86XSaveArea {
 X86LegacyXSaveArea legacy;
 X86XSaveHeader header;
-- 
2.17.2




[PATCH] qom: remove index from object_resolve_abs_path()

2020-05-09 Thread Masahiro Yamada
You can advance 'parts' to track the current path fragment.
The 'index' parameter is unneeded.

Signed-off-by: Masahiro Yamada 
---

 qom/object.c | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/qom/object.c b/qom/object.c
index aa8a3f24e6..a3ee968b12 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -2012,25 +2012,24 @@ Object *object_resolve_path_component(Object *parent, 
const gchar *part)
 
 static Object *object_resolve_abs_path(Object *parent,
   gchar **parts,
-  const char *typename,
-  int index)
+  const char *typename)
 {
 Object *child;
 
-if (parts[index] == NULL) {
+if (*parts == NULL) {
 return object_dynamic_cast(parent, typename);
 }
 
-if (strcmp(parts[index], "") == 0) {
-return object_resolve_abs_path(parent, parts, typename, index + 1);
+if (strcmp(*parts, "") == 0) {
+return object_resolve_abs_path(parent, parts + 1, typename);
 }
 
-child = object_resolve_path_component(parent, parts[index]);
+child = object_resolve_path_component(parent, *parts);
 if (!child) {
 return NULL;
 }
 
-return object_resolve_abs_path(child, parts, typename, index + 1);
+return object_resolve_abs_path(child, parts + 1, typename);
 }
 
 static Object *object_resolve_partial_path(Object *parent,
@@ -2042,7 +2041,7 @@ static Object *object_resolve_partial_path(Object *parent,
 GHashTableIter iter;
 ObjectProperty *prop;
 
-obj = object_resolve_abs_path(parent, parts, typename, 0);
+obj = object_resolve_abs_path(parent, parts, typename);
 
    g_hash_table_iter_init(&iter, parent->properties);
    while (g_hash_table_iter_next(&iter, NULL, (gpointer *)&prop)) {
@@ -2087,7 +2086,7 @@ Object *object_resolve_path_type(const char *path, const 
char *typename,
 *ambiguousp = ambiguous;
 }
 } else {
-obj = object_resolve_abs_path(object_get_root(), parts, typename, 1);
+obj = object_resolve_abs_path(object_get_root(), parts + 1, typename);
 }
 
 g_strfreev(parts);
-- 
2.25.1




Re: [PATCH 3/3] plugins: avoid failing plugin when CPU is inited several times

2020-05-09 Thread Emilio G. Cota
On Mon, Apr 20, 2020 at 13:04:51 +0300, Nikolay Igotti wrote:
> In linux-user multithreaded scenarios a CPU could be inited many times with
> the same id,
> so avoid assertions on already present hashtable entry.
> 
> Signed-off-by: Nikolay Igotti 
> ---
>  plugins/core.c | 5 +
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/plugins/core.c b/plugins/core.c
> index 51bfc94787..889cc6441a 100644
> --- a/plugins/core.c
> +++ b/plugins/core.c
> @@ -196,13 +196,10 @@ plugin_register_cb_udata(qemu_plugin_id_t id, enum
> qemu_plugin_event ev,
> 
>  void qemu_plugin_vcpu_init_hook(CPUState *cpu)
>  {
> -bool success;
> -
>  qemu_rec_mutex_lock(&plugin.lock);
>  plugin_cpu_update__locked(&cpu->cpu_index, NULL, NULL);
> -success = g_hash_table_insert(plugin.cpu_ht, &cpu->cpu_index,
> +g_hash_table_insert(plugin.cpu_ht, &cpu->cpu_index,
>    &cpu->cpu_index);
> -g_assert(success);
>  qemu_rec_mutex_unlock(&plugin.lock);

Do you have a reproducer for this? I'd expect (1) the g_hash_table_remove
call in qemu_plugin_vcpu_exit_hook to clear this entry upon CPU exit,
and (2) no two live CPUs to have the same cpu_index. But maybe assumption
(2) is wrong, or simply (1) does not get called for some exiting CPUs,
in which case the right fix would be to make sure that it does get called
on CPU exit.

Thanks,

Emilio



[Bug 1877794] [NEW] Constant Folding on 64-bit Subtraction causes SIGILL on linux-user glxgears ppc64le to x86_64 by way of generating bad shift instruction with c=-1

2020-05-09 Thread Catherine A. Frederick
Public bug reported:

Hello, I've been recently working on my own little branch of QEMU
implementing the drm IOCTLs, when I discovered that glxgears seems to
crash in GLXSwapBuffers(); with a SIGILL. I investigated this for about
2 weeks, manually trying to trace the call stack, only to find that we
seemingly crash in a bad shift instruction. Originally intended to be an
shr_i64 generated to an RLDICL, we end up with an all ones(-1) c value,
which gets thrown to the macro for generating the MB, and replaces the
instruction with mostly ones. This new instruction, FFE0 is invalid
on ppc64le, and crashes in a host SIGILL in codegen_buffer. I tried to
see if the output of translate.c had this bad instruction, but all I got
were two (shr eax, cl) instructions, and upon creating a test program
with shr (eax, cl) in it, nothing happened. Then figuring that there was
nothing actually wrong with the instruction in the first place, I turned
my eye to the optimizer, and completely disabled constant folding for
arithmetic instructions.  This seemed to actually resolve the issue, and
then I slowly enabled constant folding again on various instructions
only to find that enabling not on the shifts, but on subtraction seemed
to cause the bug to reappear. I am bewildered and frankly at this point
I'm not sure I have a chance in hell of figuring out what causes it, so
I'm throwing it here.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877794

Title:
  Constant Folding on 64-bit Subtraction causes SIGILL on linux-user
  glxgears ppc64le to x86_64 by way of generating bad shift instruction
  with c=-1

Status in QEMU:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877794/+subscriptions



Re: [PATCH 09/11] target/cpu: Restrict handlers using hwaddr type to system-mode

2020-05-09 Thread Paolo Bonzini
On Sat, 9 May 2020 at 22:01, Philippe Mathieu-Daudé  wrote:

> > I forgot once Peter Maydell told me we can't do that for some reason I
> > don't remember.
> >
> > At least this changes the sizeof(CPUClass), so we get:
> >
> > qom/object.c:315:type_initialize: assertion failed: (parent->class_size
> > <= ti->class_size)
> >
> > So we can't poison the hwaddr type? (final patch of this series).
>
> Well, this works...:
>
> -- >8 --
> diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> --- a/include/hw/core/cpu.h
> +++ b/include/hw/core/cpu.h
> @@ -167,6 +167,7 @@ typedef struct CPUClass {
>  int reset_dump_flags;
>  bool (*has_work)(CPUState *cpu);
>  void (*do_interrupt)(CPUState *cpu);
> +#ifndef CONFIG_USER_ONLY
>  void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
>  MMUAccessType access_type,
>  int mmu_idx, uintptr_t retaddr);
> @@ -174,6 +175,12 @@ typedef struct CPUClass {
>unsigned size, MMUAccessType
> access_type,
>int mmu_idx, MemTxAttrs attrs,
>MemTxResult response, uintptr_t
> retaddr);
> +hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
> +hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
> +MemTxAttrs *attrs);
> +#else
> +void (*reserved[4])(CPUState *cpu, ...);
> +#endif /* CONFIG_USER_ONLY */
>  bool (*virtio_is_big_endian)(CPUState *cpu);
>  int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
> uint8_t *buf, int len, bool is_write);
> @@ -189,9 +196,6 @@ typedef struct CPUClass {
>  bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
>   MMUAccessType access_type, int mmu_idx,
>   bool probe, uintptr_t retaddr);
> -hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
> -hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
> -MemTxAttrs *attrs);
>  int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
>  int (*gdb_read_register)(CPUState *cpu, GByteArray *buf, int reg);
>  int (*gdb_write_register)(CPUState *cpu, uint8_t *buf, int reg);
> ---
>
> Ugly?
>

More. :-) And hwaddr is only a small part; there are several other methods
that only make sense for system emulation. Let me review the rest of the
series; it may not be good enough to stop here while we figure out a way.

Paolo


> >
> > >   void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
> > >   MMUAccessType access_type,
> > >   int mmu_idx, uintptr_t retaddr);
> > > @@ -174,6 +175,10 @@ typedef struct CPUClass {
> > > unsigned size, MMUAccessType
> access_type,
> > > int mmu_idx, MemTxAttrs attrs,
> > > MemTxResult response, uintptr_t
> retaddr);
> > > +hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
> > > +hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
> > > +MemTxAttrs *attrs);
> > > +#endif /* CONFIG_USER_ONLY */
> > >   bool (*virtio_is_big_endian)(CPUState *cpu);
> > >   int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
> > >  uint8_t *buf, int len, bool is_write);
> > > @@ -189,9 +194,6 @@ typedef struct CPUClass {
> > >   bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
> > >MMUAccessType access_type, int mmu_idx,
> > >bool probe, uintptr_t retaddr);
> > > -hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
> > > -hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
> > > -MemTxAttrs *attrs);
> > >   int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
> > >   int (*gdb_read_register)(CPUState *cpu, GByteArray *buf, int
> reg);
> > >   int (*gdb_write_register)(CPUState *cpu, uint8_t *buf, int reg);
> > [...]
>
>


[Bug 1877781] [NEW] TCG does not support x2APIC emulation

2020-05-09 Thread Dan Cross
Public bug reported:

This is not a bug so much as a feature request.

It would be great if there were a pure-software emulation of the x2APIC
on x86_64, so that it could be used on systems that can't provide such a
thing via a host-based solution (e.g., KVM). KVM provides this, but that
doesn't help if you're working on a machine that doesn't support KVM.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877781

Title:
  TCG does not support x2APIC emulation

Status in QEMU:
  New

Bug description:
  This is not a bug so much as a feature request.

  It would be great if there were a pure-software emulation of the
  x2APIC on x86_64, so that it could be used on systems that can't
  provide such a thing via a host-based solution (e.g., KVM). KVM
  provides this, but that doesn't help if you're working on a machine
  that doesn't support KVM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877781/+subscriptions



Re: [PATCH 09/11] target/cpu: Restrict handlers using hwaddr type to system-mode

2020-05-09 Thread Philippe Mathieu-Daudé
On Sat, May 9, 2020 at 6:08 PM Philippe Mathieu-Daudé  wrote:
> On 5/9/20 3:09 PM, Philippe Mathieu-Daudé wrote:
> > Restrict the following handlers to system-mode:
> > - do_unaligned_access
> > - do_transaction_failed
> > - get_phys_page_debug
> > - get_phys_page_attrs_debug
> >
> > Signed-off-by: Philippe Mathieu-Daudé 
> > ---
> >   include/hw/core/cpu.h   |  8 +---
> >   target/alpha/cpu.h  |  4 +++-
> >   target/arm/cpu.h|  6 +++---
> >   target/arm/internals.h  |  4 
> >   target/cris/cpu.h   |  2 ++
> >   target/hppa/cpu.h   |  2 +-
> >   target/i386/cpu.h   |  2 ++
> >   target/m68k/cpu.h   |  7 ++-
> >   target/microblaze/cpu.h |  5 -
> >   target/mips/internal.h  |  2 +-
> >   target/nios2/cpu.h  |  5 -
> >   target/openrisc/cpu.h   |  3 ++-
> >   target/ppc/cpu.h|  2 +-
> >   target/riscv/cpu.h  | 20 ++--
> >   target/sh4/cpu.h|  2 +-
> >   target/sparc/cpu.h  |  2 ++
> >   target/xtensa/cpu.h | 12 +++-
> >   target/hppa/cpu.c   |  4 +++-
> >   target/ppc/translate_init.inc.c |  2 +-
> >   19 files changed, 62 insertions(+), 32 deletions(-)
> >
> > diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> > index 5bf94d28cf..ed09d056d1 100644
> > --- a/include/hw/core/cpu.h
> > +++ b/include/hw/core/cpu.h
> > @@ -167,6 +167,7 @@ typedef struct CPUClass {
> >   int reset_dump_flags;
> >   bool (*has_work)(CPUState *cpu);
> >   void (*do_interrupt)(CPUState *cpu);
> > +#ifndef CONFIG_USER_ONLY
>
> I forgot once Peter Maydell told me we can't do that for some reason I
> don't remember.
>
> At least this changes the sizeof(CPUClass), so we get:
>
> qom/object.c:315:type_initialize: assertion failed: (parent->class_size
> <= ti->class_size)
>
> So we can't poison the hwaddr type? (final patch of this series).

Well, this works...:

-- >8 --
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -167,6 +167,7 @@ typedef struct CPUClass {
 int reset_dump_flags;
 bool (*has_work)(CPUState *cpu);
 void (*do_interrupt)(CPUState *cpu);
+#ifndef CONFIG_USER_ONLY
 void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
 MMUAccessType access_type,
 int mmu_idx, uintptr_t retaddr);
@@ -174,6 +175,12 @@ typedef struct CPUClass {
   unsigned size, MMUAccessType access_type,
   int mmu_idx, MemTxAttrs attrs,
   MemTxResult response, uintptr_t retaddr);
+hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
+hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
+MemTxAttrs *attrs);
+#else
+void (*reserved[4])(CPUState *cpu, ...);
+#endif /* CONFIG_USER_ONLY */
 bool (*virtio_is_big_endian)(CPUState *cpu);
 int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
uint8_t *buf, int len, bool is_write);
@@ -189,9 +196,6 @@ typedef struct CPUClass {
 bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
  MMUAccessType access_type, int mmu_idx,
  bool probe, uintptr_t retaddr);
-hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
-hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
-MemTxAttrs *attrs);
 int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
 int (*gdb_read_register)(CPUState *cpu, GByteArray *buf, int reg);
 int (*gdb_write_register)(CPUState *cpu, uint8_t *buf, int reg);
---

Ugly?

>
> >   void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
> >   MMUAccessType access_type,
> >   int mmu_idx, uintptr_t retaddr);
> > @@ -174,6 +175,10 @@ typedef struct CPUClass {
> > unsigned size, MMUAccessType 
> > access_type,
> > int mmu_idx, MemTxAttrs attrs,
> > MemTxResult response, uintptr_t 
> > retaddr);
> > +hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
> > +hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
> > +MemTxAttrs *attrs);
> > +#endif /* CONFIG_USER_ONLY */
> >   bool (*virtio_is_big_endian)(CPUState *cpu);
> >   int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
> >  uint8_t *buf, int len, bool is_write);
> > @@ -189,9 +194,6 @@ typedef struct CPUClass {
> >   bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
> >MMUAccessType access_type, int mmu_idx,
> >   

[Bug 1877688] Re: 9p virtfs device reports error when opening certain files

2020-05-09 Thread A A
Thanks, it works.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877688

Title:
  9p virtfs device reports error when opening certain files

Status in QEMU:
  In Progress

Bug description:
  Reading certain files on a 9p mounted FS produces this error message:

  qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, buffer has
  12, less than minimum

  After this error message is generated, further accesses to the 9p FS
  hang whatever tries to access it. The Arch Linux guest system is
  otherwise usable. This happens with QEMU 5.0.0 and guest kernel
  version 5.6.11, hosted on an Arch Linux distro. I use the following
  command to launch QEMU:

  exec qemu-system-x86_64 -enable-kvm -display gtk -vga virtio -cpu host
  -m 4G -netdev tap,ifname=vmtap0,id=vn0,script=no,downscript=no -device
  virtio-net-pci,netdev=vn0 -kernel kernel.img -drive
  file=file.img,format=raw,if=virtio -virtfs
  local,path=mnt,mount_tag=host0,security_model=passthrough,id=host0
  -append "console=ttyS0 root=/dev/vda rw"

  There's nothing relevant in the guest kernel logs as far as I'm aware
  of with loglevel set to 7.

  I tracked down the issue to readv() with a small buffer (<= 12 bytes)
  and then a large buffer (>= 1024 bytes). A C program is provided to
  trigger this behavior.
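That reproduction recipe could look roughly like the sketch below. This is a stand-in, not the C program attached to the bug; `two_phase_read` is a made-up helper name, and you would point it at a file on the affected 9p mount:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

/* Read `path` first with a small (<= 12 byte) buffer, then with a
 * large (>= 1024 byte) one.  On an affected 9p mount the second
 * readv() is expected to trip the VirtFS error; on a local
 * filesystem both reads simply succeed in sequence. */
int two_phase_read(const char *path, ssize_t *n_small, ssize_t *n_large)
{
    char small[12], large[4096];
    struct iovec iov;
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        return -1;
    }
    iov.iov_base = small;
    iov.iov_len = sizeof(small);
    *n_small = readv(fd, &iov, 1);      /* small read first */
    iov.iov_base = large;
    iov.iov_len = sizeof(large);
    *n_large = readv(fd, &iov, 1);      /* then the large read */
    close(fd);
    return (*n_small < 0 || *n_large < 0) ? -1 : 0;
}
```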

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877688/+subscriptions



[Bug 1877688] Re: 9p virtfs device reports error when opening certain files

2020-05-09 Thread Christian Schoenebeck
The following patch should fix this bug for the kvm backend (not for the
XEN backend yet).

Please let me know if it fixes this bug for you.

** Patch added: "bug1877688_kvm_fix.patch"
   
https://bugs.launchpad.net/qemu/+bug/1877688/+attachment/5369130/+files/bug1877688_kvm_fix.patch

** Changed in: qemu
   Status: New => In Progress

** Changed in: qemu
 Assignee: (unassigned) => Christian Schoenebeck (schoenebeck)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877688

Title:
  9p virtfs device reports error when opening certain files

Status in QEMU:
  In Progress

Bug description:
  Reading certain files on a 9p mounted FS produces this error message:

  qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, buffer has
  12, less than minimum

  After this error message is generated, further accesses to the 9p FS
  hang whatever tries to access it. The Arch Linux guest system is
  otherwise usable. This happens with QEMU 5.0.0 and guest kernel
  version 5.6.11, hosted on an Arch Linux distro. I use the following
  command to launch QEMU:

  exec qemu-system-x86_64 -enable-kvm -display gtk -vga virtio -cpu host
  -m 4G -netdev tap,ifname=vmtap0,id=vn0,script=no,downscript=no -device
  virtio-net-pci,netdev=vn0 -kernel kernel.img -drive
  file=file.img,format=raw,if=virtio -virtfs
  local,path=mnt,mount_tag=host0,security_model=passthrough,id=host0
  -append "console=ttyS0 root=/dev/vda rw"

  There's nothing relevant in the guest kernel logs as far as I'm aware
  of with loglevel set to 7.

  I tracked down the issue to readv() with a small buffer (<= 12 bytes)
  and then a large buffer (>= 1024 bytes). A C program is provided to
  trigger this behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877688/+subscriptions



Re: [INFO] Some preliminary performance data

2020-05-09 Thread Alex Bennée


Laurent Desnogues  writes:

> On Sat, May 9, 2020 at 2:38 PM Aleksandar Markovic
>  wrote:
>>
>> суб, 9. мај 2020. у 13:37 Laurent Desnogues
>>  је написао/ла:
>> >
>> > On Sat, May 9, 2020 at 12:17 PM Aleksandar Markovic
>> >  wrote:
>> > >  сре, 6. мај 2020. у 13:26 Alex Bennée  је 
>> > > написао/ла:
>> > >
>> > > > This is very much driven by how much code generation vs running you 
>> > > > see.
>> > > > In most of my personal benchmarks I never really notice code generation
>> > > > because I give my machines large amounts of RAM so code tends to stay
>> > > > resident so not need to be re-translated. When the optimiser shows up
>> > > > it's usually accompanied by high TB flush and invalidate counts in 
>> > > > "info
>> > > > jit" because we are doing more translation that we usually do.
>> > > >
>> > >
>> > > Yes, I think the machine was setup with only 128MB RAM.
>> > >
>> > > That would be an interesting experiment for Ahmed actually - to
>> > > measure impact of given RAM memory to performance.
>> > >
>> > > But it looks that at least for machines with small RAM, translation
>> > > phase will take significant percentage.
>> > >
>> > > I am attaching call graph for translation phase for "Hello World" built
>> > > for mips, and emulated by QEMU: *tb_gen_code() and its calees)
>> >
>>
>> Hi, Laurent,
>>
>> "Hello world" was taken as an example where code generation is
>> dominant. It was taken to illustrate how the code generation
>> overhead is distributed performance-wise (the dominance of a
>> single function).
>>
>> While "Hello world" by itself is not a significant benchmark, it
>> conveys useful information: it indicates the overhead of QEMU
>> linux-user executable initialization, plus the code generation
>> spent emulating the loading of the target executable and the
>> printing of a simple message. This can then be roughly deducted
>> from the result of a meaningful benchmark.
>>
>> Booting of a virtual machine is a legitimate scenario for measuring
>> performance, and perhaps even attempting improving it.
>>
>> Everything should be measured - code generation, JIT-ed code
>> execution, and helpers execution - in all cases, and checked
>> whether it departs from expected behavior.
>>
>> Let's say that we emulate a benchmark that basically runs some
>> code in a loop, or an algorithm - one would expect that after a
>> while, while increasing number of iterations of the loop, or the
>> size of data in the algorithm, code generation becomes less and
>> less significant, converging to zero. Well, this should be confirmed
>> with an experiment, and not taken for granted.
>>
>> I think limiting measurements to only, let's say, the execution of
>> JIT-ed code (if that is what you implied) is a logical mistake.
>> The right conclusions should be drawn from the complete
>> picture, shouldn't they?
>
> I explicitly wrote that you should consider a wide spectrum of
> programs so I think we're in violent agreement ;-)

If you want a good example of a real-world use case where we could
improve things, then I suggest looking at compilers.

They are frequently instantiated once per compilation unit, and once done
all the JIT translations are thrown away. While the code path taken by a
compiler may be different for every unit it compiles, I bet there are
savings we could make by caching compilation. The first step would be
identifying how similar the profiles of the generated code are.

-- 
Alex Bennée



Re: [PATCH 09/11] target/cpu: Restrict handlers using hwaddr type to system-mode

2020-05-09 Thread Philippe Mathieu-Daudé

On 5/9/20 3:09 PM, Philippe Mathieu-Daudé wrote:

Restrict the following handlers to system-mode:
- do_unaligned_access
- do_transaction_failed
- get_phys_page_debug
- get_phys_page_attrs_debug

Signed-off-by: Philippe Mathieu-Daudé 
---
  include/hw/core/cpu.h   |  8 +---
  target/alpha/cpu.h  |  4 +++-
  target/arm/cpu.h|  6 +++---
  target/arm/internals.h  |  4 
  target/cris/cpu.h   |  2 ++
  target/hppa/cpu.h   |  2 +-
  target/i386/cpu.h   |  2 ++
  target/m68k/cpu.h   |  7 ++-
  target/microblaze/cpu.h |  5 -
  target/mips/internal.h  |  2 +-
  target/nios2/cpu.h  |  5 -
  target/openrisc/cpu.h   |  3 ++-
  target/ppc/cpu.h|  2 +-
  target/riscv/cpu.h  | 20 ++--
  target/sh4/cpu.h|  2 +-
  target/sparc/cpu.h  |  2 ++
  target/xtensa/cpu.h | 12 +++-
  target/hppa/cpu.c   |  4 +++-
  target/ppc/translate_init.inc.c |  2 +-
  19 files changed, 62 insertions(+), 32 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 5bf94d28cf..ed09d056d1 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -167,6 +167,7 @@ typedef struct CPUClass {
  int reset_dump_flags;
  bool (*has_work)(CPUState *cpu);
  void (*do_interrupt)(CPUState *cpu);
+#ifndef CONFIG_USER_ONLY


I forgot once Peter Maydell told me we can't do that for some reason I 
don't remember.


At least this changes the sizeof(CPUClass), so we get:

qom/object.c:315:type_initialize: assertion failed: (parent->class_size 
<= ti->class_size)


So we can't poison the hwaddr type? (final patch of this series).


  void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
  MMUAccessType access_type,
  int mmu_idx, uintptr_t retaddr);
@@ -174,6 +175,10 @@ typedef struct CPUClass {
unsigned size, MMUAccessType access_type,
int mmu_idx, MemTxAttrs attrs,
MemTxResult response, uintptr_t retaddr);
+hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
+hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
+MemTxAttrs *attrs);
+#endif /* CONFIG_USER_ONLY */
  bool (*virtio_is_big_endian)(CPUState *cpu);
  int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
 uint8_t *buf, int len, bool is_write);
@@ -189,9 +194,6 @@ typedef struct CPUClass {
  bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
   MMUAccessType access_type, int mmu_idx,
   bool probe, uintptr_t retaddr);
-hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
-hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
-MemTxAttrs *attrs);
  int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
  int (*gdb_read_register)(CPUState *cpu, GByteArray *buf, int reg);
  int (*gdb_write_register)(CPUState *cpu, uint8_t *buf, int reg);

[...]



[Bug 1877688] Re: 9p virtfs device reports error when opening certain files

2020-05-09 Thread Christian Schoenebeck
Looks like it was introduced by this change:
https://patchwork.kernel.org/patch/11319993/

More specifically this one exactly:

-if (buf_size  < size) {
+if (buf_size  < P9_IOHDRSZ) {

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877688

Title:
  9p virtfs device reports error when opening certain files

Status in QEMU:
  New

Bug description:
  Reading certain files on a 9p mounted FS produces this error message:

  qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, buffer has
  12, less than minimum

  After this error message is generated, further accesses to the 9p FS
  hang whatever tries to access it. The Arch Linux guest system is
  otherwise usable. This happens with QEMU 5.0.0 and guest kernel
  version 5.6.11, hosted on an Arch Linux distro. I use the following
  command to launch QEMU:

  exec qemu-system-x86_64 -enable-kvm -display gtk -vga virtio -cpu host
  -m 4G -netdev tap,ifname=vmtap0,id=vn0,script=no,downscript=no -device
  virtio-net-pci,netdev=vn0 -kernel kernel.img -drive
  file=file.img,format=raw,if=virtio -virtfs
  local,path=mnt,mount_tag=host0,security_model=passthrough,id=host0
  -append "console=ttyS0 root=/dev/vda rw"

  There's nothing relevant in the guest kernel logs as far as I'm aware
  of with loglevel set to 7.

  I tracked down the issue to readv() with a small buffer (<= 12 bytes)
  and then a large buffer (>= 1024 bytes). A C program is provided to
  trigger this behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877688/+subscriptions



Fwd: Qemu Support for Virtio Video V4L2 driver

2020-05-09 Thread Saket Sinha
Hi,

As suggested on the #qemu-devel IRC channel, I am including virtio-dev,
Gerd, and Michael to point me in the right direction on how to move
forward with QEMU support for the Virtio Video V4L2 driver posted in [1].

[1]: https://patchwork.linuxtv.org/patch/61717/

Regards,
Saket Sinha

On Sat, May 9, 2020 at 1:09 AM Saket Sinha  wrote:
>
> Hi ,
>
> This is to inquire about Qemu support for Virtio Video V4L2 driver
> posted in [1].
> I am currently not aware of any upstream effort for Qemu reference
> implementation and would like to discuss how to proceed with the same.
>
> [1]: https://patchwork.linuxtv.org/patch/61717/
>
> Regards,
> Saket Sinha



Re: Qemu Support for Virtio Video V4L2 driver

2020-05-09 Thread Saket Sinha
Hi,

As suggested on the #qemu-devel IRC channel, I am including Gerd and
Michael to point me in the right direction on how to move forward with
QEMU support for the Virtio Video V4L2 driver posted in [1].

[1]: https://patchwork.linuxtv.org/patch/61717/

Regards,
Saket Sinha

On Sat, May 9, 2020 at 1:09 AM Saket Sinha  wrote:
>
> Hi ,
>
> This is to inquire about Qemu support for Virtio Video V4L2 driver
> posted in [1].
> I am currently not aware of any upstream effort for Qemu reference
> implementation and would like to discuss how to proceed with the same.
>
> [1]: https://patchwork.linuxtv.org/patch/61717/
>
> Regards,
> Saket Sinha



[Bug 1877384] Re: 9pfs file create with mapped-xattr can fail on overlayfs

2020-05-09 Thread Christian Schoenebeck
Since the report is about overlayfs being involved, could you please check
whether the following patch makes a difference?

https://github.com/gkurz/qemu/commit/f7f5a1b01307af1c7b6c94672f2ce75c36f10565

It's not yet on master, but will be soon.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877384

Title:
  9pfs file create with mapped-xattr can fail on overlayfs

Status in QEMU:
  New

Bug description:
  QEMU Version: 3.1.0 as packaged in debian buster, but the code appears to do 
the same in master.
  qemu command-line: qemu-system-x86_64 -m 1G -nographic -nic 
"user,model=virtio-net-pci,tftp=$(pwd),net=10.0.2.0/24,host=10.0.2.2" -fsdev 
local,id=fs,path=$thisdir/..,security_model=mapped-xattr -device 
virtio-9p-pci,fsdev=fs,mount_tag=fs -drive 
"file=$rootdisk,if=virtio,format=raw" -kernel "$kernel" -initrd "$initrd" 
-append "$append"

  
  I'm using CI that runs in a Docker container and runs a qemu VM with code and 
results shared via virtio 9p.
  The 9p fsdev is configured with security_model=mapped-xattr
  When the test code attempts to create a log file in an existing directory, 
open with O_CREAT fails with -ENOENT.

  The relevant strace excerpt is:

  28791 openat(11, ".", O_RDONLY|O_NOFOLLOW|O_PATH|O_DIRECTORY) = 20
  28791 openat(20, "src", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_DIRECTORY) 
= 21
  28791 fcntl(21, F_SETFL, O_RDONLY|O_DIRECTORY) = 0
  28791 close(20) = 0
  28791 openat(21, "client.log", 
O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW, 0600) = 20
  28791 fcntl(20, F_SETFL, O_WRONLY|O_CREAT|O_NONBLOCK|O_NOFOLLOW) = 0
  28791 lsetxattr("/proc/self/fd/21/client.log", "user.virtfs.uid", "\0\0\0", 
4, 0) = -1 ENOENT (No such file or directory)

  My hypothesis for what's going wrong: since the Docker container's
  overlayfs copies up on write, opening the file has created a new
  version of the `src` directory containing a `client.log`, but this
  new `src` directory isn't accessible by file descriptor 20, so the
  lsetxattr call is instead attempting to set attributes on the path in
  the old `src` directory.
  Looking at the code, a fix would be to change `local_open2` in
  `hw/9pfs/9p-local.c` so that, instead of calling `local_set_xattrat`
  to set the xattrs by directory file descriptor and file name, it uses
  a version of `local_set_xattrat` which calls `fsetxattr` to set the
  virtfs attributes rather than the `fsetxattrat_nofollow` helper.

  This reliably happened for me in CI, but I don't have access to the CI
  host or the time to strip the test down to make a minimal test case,
  and had difficulty reproducing the error on other machines.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877384/+subscriptions



Re: [PATCH] tests/acceptance/boot_linux: Skip slow Aarch64 'virt' machine TCG test

2020-05-09 Thread Lukas Straub
On Thu,  7 May 2020 18:22:35 +0200
Philippe Mathieu-Daudé  wrote:

> The BootLinuxAarch64.test_virt_tcg is reported to take >7min to run.
> Add a possibility to users to skip this particular test, by setting
> the AVOCADO_SKIP_SLOW_TESTS environment variable:
> 
>   $ AVOCADO_SKIP_SLOW_TESTS=please make check-acceptance
>   ...
> (05/88) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg: 
> SKIP: Test takes >7min
>   ...
> 
> Reported-by: Peter Maydell 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  tests/acceptance/boot_linux.py | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/tests/acceptance/boot_linux.py b/tests/acceptance/boot_linux.py
> index 075a386300..a8df50d769 100644
> --- a/tests/acceptance/boot_linux.py
> +++ b/tests/acceptance/boot_linux.py
> @@ -15,6 +15,7 @@
>  from qemu.accel import kvm_available
>  from qemu.accel import tcg_available
>  
> +from avocado import skipIf
>  from avocado.utils import cloudinit
>  from avocado.utils import network
>  from avocado.utils import vmimage
> @@ -159,6 +160,7 @@ def add_common_args(self):
>  self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
>  self.vm.add_args('-object', 
> 'rng-random,id=rng0,filename=/dev/urandom')
>  
> +@skipIf(os.getenv('AVOCADO_SKIP_SLOW_TESTS'), 'Test takes >7min')
>  def test_virt_tcg(self):
>  """
>  :avocado: tags=accel:tcg

Hi,
Why not simply add a slow tag to the test? Like:
:avocado: tags=slow

The slow tests can then be skipped with
$ make check-acceptance AVOCADO_TAGS='-t -slow'

Regards,
Lukas Straub




[PATCH 11/11] exec/cpu-common: Poison hwaddr type in user-mode emulation

2020-05-09 Thread Philippe Mathieu-Daudé
The 'hwaddr' type is restricted to system-mode.
Declare it poisoned on user-mode emulation.

Signed-off-by: Philippe Mathieu-Daudé 
---
Checkpatch complains:

 WARNING: architecture specific defines should be avoided
 #10: FILE: include/exec/cpu-common.h:7:
 +#ifdef __GNUC__
---
 include/exec/cpu-common.h | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index b47e5630e7..56cfce8153 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -3,9 +3,13 @@
 
 /* CPU interfaces that are target independent.  */
 
-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_USER_ONLY
+#ifdef __GNUC__
+#pragma GCC poison hwaddr
+#endif /* __GNUC__ */
+#else
 #include "exec/hwaddr.h"
-#endif
+#endif /* CONFIG_USER_ONLY */
 
 /* The CPU list lock nests outside page_(un)lock or mmap_(un)lock */
 void qemu_init_cpu_list(void);
-- 
2.21.3
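As a standalone illustration of the mechanism this patch relies on (this is not QEMU code, and `hwaddr_like` is a made-up name for the demo), GCC's poison pragma turns any later use of the named identifier into a hard compile error:

```c
/* Once an identifier is poisoned, any later mention of it in the
 * translation unit is a compile-time error, so user-mode code
 * physically cannot reference it. */
#pragma GCC poison hwaddr_like

/* unsigned long hwaddr_like;   <- uncommenting this line breaks the
 *                                 build: "attempt to use poisoned ..." */

int poison_demo_ok(void)
{
    return 1;   /* compiles fine while the poisoned name stays unused */
}
```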




[PATCH 09/11] target/cpu: Restrict handlers using hwaddr type to system-mode

2020-05-09 Thread Philippe Mathieu-Daudé
Restrict the following handlers to system-mode:
- do_unaligned_access
- do_transaction_failed
- get_phys_page_debug
- get_phys_page_attrs_debug

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/hw/core/cpu.h   |  8 +---
 target/alpha/cpu.h  |  4 +++-
 target/arm/cpu.h|  6 +++---
 target/arm/internals.h  |  4 
 target/cris/cpu.h   |  2 ++
 target/hppa/cpu.h   |  2 +-
 target/i386/cpu.h   |  2 ++
 target/m68k/cpu.h   |  7 ++-
 target/microblaze/cpu.h |  5 -
 target/mips/internal.h  |  2 +-
 target/nios2/cpu.h  |  5 -
 target/openrisc/cpu.h   |  3 ++-
 target/ppc/cpu.h|  2 +-
 target/riscv/cpu.h  | 20 ++--
 target/sh4/cpu.h|  2 +-
 target/sparc/cpu.h  |  2 ++
 target/xtensa/cpu.h | 12 +++-
 target/hppa/cpu.c   |  4 +++-
 target/ppc/translate_init.inc.c |  2 +-
 19 files changed, 62 insertions(+), 32 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 5bf94d28cf..ed09d056d1 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -167,6 +167,7 @@ typedef struct CPUClass {
 int reset_dump_flags;
 bool (*has_work)(CPUState *cpu);
 void (*do_interrupt)(CPUState *cpu);
+#ifndef CONFIG_USER_ONLY
 void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
 MMUAccessType access_type,
 int mmu_idx, uintptr_t retaddr);
@@ -174,6 +175,10 @@ typedef struct CPUClass {
   unsigned size, MMUAccessType access_type,
   int mmu_idx, MemTxAttrs attrs,
   MemTxResult response, uintptr_t retaddr);
+hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
+hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
+MemTxAttrs *attrs);
+#endif /* CONFIG_USER_ONLY */
 bool (*virtio_is_big_endian)(CPUState *cpu);
 int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
uint8_t *buf, int len, bool is_write);
@@ -189,9 +194,6 @@ typedef struct CPUClass {
 bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
  MMUAccessType access_type, int mmu_idx,
  bool probe, uintptr_t retaddr);
-hwaddr (*get_phys_page_debug)(CPUState *cpu, vaddr addr);
-hwaddr (*get_phys_page_attrs_debug)(CPUState *cpu, vaddr addr,
-MemTxAttrs *attrs);
 int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
 int (*gdb_read_register)(CPUState *cpu, GByteArray *buf, int reg);
 int (*gdb_write_register)(CPUState *cpu, uint8_t *buf, int reg);
diff --git a/target/alpha/cpu.h b/target/alpha/cpu.h
index be29bdd530..b12021803e 100644
--- a/target/alpha/cpu.h
+++ b/target/alpha/cpu.h
@@ -279,12 +279,14 @@ extern const VMStateDescription vmstate_alpha_cpu;
 void alpha_cpu_do_interrupt(CPUState *cpu);
 bool alpha_cpu_exec_interrupt(CPUState *cpu, int int_req);
 void alpha_cpu_dump_state(CPUState *cs, FILE *f, int flags);
-hwaddr alpha_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
 int alpha_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
 int alpha_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
+#ifndef CONFIG_USER_ONLY
+hwaddr alpha_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
 void alpha_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
MMUAccessType access_type,
int mmu_idx, uintptr_t retaddr);
+#endif
 
 #define cpu_list alpha_cpu_list
 #define cpu_signal_handler cpu_alpha_signal_handler
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 8608da6b6f..d333036bb4 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -966,15 +966,15 @@ uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz);
 
 #ifndef CONFIG_USER_ONLY
 extern const VMStateDescription vmstate_arm_cpu;
+
+hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
+ MemTxAttrs *attrs);
 #endif
 
 void arm_cpu_do_interrupt(CPUState *cpu);
 void arm_v7m_cpu_do_interrupt(CPUState *cpu);
 bool arm_cpu_exec_interrupt(CPUState *cpu, int int_req);
 
-hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
- MemTxAttrs *attrs);
-
 int arm_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
 int arm_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index e633aff36e..c0a2a5df71 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -812,6 +812,8 @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, 
bool secstate);
  * tables */
 bool 

[PATCH 10/11] exec: Use 'cpu-common.h' instead of system-mode specific 'hwaddr.h'

2020-05-09 Thread Philippe Mathieu-Daudé
The "exec/hwaddr.h" header is restricted to system-mode emulation.
Instead, use "exec/cpu-common.h", which is meant for all modes.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/disas/disas.h  | 2 +-
 include/hw/core/cpu.h  | 2 +-
 include/sysemu/accel.h | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/disas/disas.h b/include/disas/disas.h
index 36c33f6f19..531b42e002 100644
--- a/include/disas/disas.h
+++ b/include/disas/disas.h
@@ -1,7 +1,7 @@
 #ifndef QEMU_DISAS_H
 #define QEMU_DISAS_H
 
-#include "exec/hwaddr.h"
+#include "exec/cpu-common.h"
 
 #ifdef NEED_CPU_H
 #include "cpu.h"
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index ed09d056d1..a215ae451d 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -22,7 +22,7 @@
 
 #include "hw/qdev-core.h"
 #include "disas/dis-asm.h"
-#include "exec/hwaddr.h"
+#include "exec/cpu-common.h"
 #include "exec/memattrs.h"
 #include "qapi/qapi-types-run-state.h"
 #include "qemu/bitmap.h"
diff --git a/include/sysemu/accel.h b/include/sysemu/accel.h
index e08b8ab8fa..e223a1d87b 100644
--- a/include/sysemu/accel.h
+++ b/include/sysemu/accel.h
@@ -24,7 +24,7 @@
 #define HW_ACCEL_H
 
 #include "qom/object.h"
-#include "exec/hwaddr.h"
+#include "exec/cpu-common.h"
 
 typedef struct AccelState {
 /*< private >*/
-- 
2.21.3




[PATCH 07/11] target/s390x/helper: Clean ifdef'ry

2020-05-09 Thread Philippe Mathieu-Daudé
All this code is guarded by checks on the CONFIG_USER_ONLY definition.
Drop the duplicated checks.

Signed-off-by: Philippe Mathieu-Daudé 
---
Suspicious ifdef'ry in s390_handle_wait() from commit 83f7f32901c.
---
 target/s390x/helper.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/target/s390x/helper.c b/target/s390x/helper.c
index 09f60406aa..26e3b366e8 100644
--- a/target/s390x/helper.c
+++ b/target/s390x/helper.c
@@ -33,6 +33,7 @@
 #endif
 
 #ifndef CONFIG_USER_ONLY
+
 void s390x_tod_timer(void *opaque)
 {
 cpu_inject_clock_comparator((S390CPU *) opaque);
@@ -42,9 +43,6 @@ void s390x_cpu_timer(void *opaque)
 {
 cpu_inject_cpu_timer((S390CPU *) opaque);
 }
-#endif
-
-#ifndef CONFIG_USER_ONLY
 
 hwaddr s390_cpu_get_phys_page_debug(CPUState *cs, vaddr vaddr)
 {
@@ -98,14 +96,12 @@ void s390_handle_wait(S390CPU *cpu)
 CPUState *cs = CPU(cpu);
 
 if (s390_cpu_halt(cpu) == 0) {
-#ifndef CONFIG_USER_ONLY
 if (is_special_wait_psw(cpu->env.psw.addr)) {
 qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
 } else {
 cpu->env.crash_reason = S390_CRASH_REASON_DISABLED_WAIT;
 qemu_system_guest_panicked(cpu_get_crash_info(cs));
 }
-#endif
 }
 }
 
@@ -328,6 +324,7 @@ int s390_store_adtl_status(S390CPU *cpu, hwaddr addr, 
hwaddr len)
 cpu_physical_memory_unmap(sa, len, 1, len);
 return 0;
 }
+
 #endif /* CONFIG_USER_ONLY */
 
 void s390_cpu_dump_state(CPUState *cs, FILE *f, int flags)
-- 
2.21.3




[PATCH 04/11] sysemu/hvf: Only declare hvf_allowed when HVF is available

2020-05-09 Thread Philippe Mathieu-Daudé
When HVF is not available, the hvf_allowed variable does not exist.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/sysemu/hvf.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h
index d211e808e9..fe95743124 100644
--- a/include/sysemu/hvf.h
+++ b/include/sysemu/hvf.h
@@ -18,7 +18,6 @@
 #include "exec/memory.h"
 #include "sysemu/accel.h"
 
-extern bool hvf_allowed;
 #ifdef CONFIG_HVF
 #include 
 #include 
@@ -26,11 +25,12 @@ extern bool hvf_allowed;
 #include "target/i386/cpu.h"
 uint32_t hvf_get_supported_cpuid(uint32_t func, uint32_t idx,
  int reg);
+extern bool hvf_allowed;
 #define hvf_enabled() (hvf_allowed)
-#else
+#else /* !CONFIG_HVF */
 #define hvf_enabled() 0
 #define hvf_get_supported_cpuid(func, idx, reg) 0
-#endif
+#endif /* !CONFIG_HVF */
 
 /* hvf_slot flags */
 #define HVF_SLOT_LOG (1 << 0)
-- 
2.21.3




[PATCH 08/11] target/s390x: Restrict system-mode declarations

2020-05-09 Thread Philippe Mathieu-Daudé
As these declarations are restricted to !CONFIG_USER_ONLY in
helper.c, only declare them when system-mode emulation is used.

Signed-off-by: Philippe Mathieu-Daudé 
---
 target/s390x/internal.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/target/s390x/internal.h b/target/s390x/internal.h
index c1678dc6bc..ddc276cdf4 100644
--- a/target/s390x/internal.h
+++ b/target/s390x/internal.h
@@ -236,7 +236,6 @@ int s390_cpu_write_elf64_note(WriteCoreDumpFunction f, 
CPUState *cs,
 
 /* cc_helper.c */
 const char *cc_name(enum cc_op cc_op);
-void load_psw(CPUS390XState *env, uint64_t mask, uint64_t addr);
 uint32_t calc_cc(CPUS390XState *env, uint32_t cc_op, uint64_t src, uint64_t 
dst,
  uint64_t vr);
 
@@ -303,18 +302,20 @@ void s390_cpu_gdb_init(CPUState *cs);
 
 /* helper.c */
 void s390_cpu_dump_state(CPUState *cpu, FILE *f, int flags);
-hwaddr s390_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
-hwaddr s390_cpu_get_phys_addr_debug(CPUState *cpu, vaddr addr);
+void do_restart_interrupt(CPUS390XState *env);
+
+#ifndef CONFIG_USER_ONLY
+void load_psw(CPUS390XState *env, uint64_t mask, uint64_t addr);
 uint64_t get_psw_mask(CPUS390XState *env);
 void s390_cpu_recompute_watchpoints(CPUState *cs);
 void s390x_tod_timer(void *opaque);
 void s390x_cpu_timer(void *opaque);
-void do_restart_interrupt(CPUS390XState *env);
 void s390_handle_wait(S390CPU *cpu);
+hwaddr s390_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
+hwaddr s390_cpu_get_phys_addr_debug(CPUState *cpu, vaddr addr);
 #define S390_STORE_STATUS_DEF_ADDR offsetof(LowCore, floating_pt_save_area)
 int s390_store_status(S390CPU *cpu, hwaddr addr, bool store_arch);
 int s390_store_adtl_status(S390CPU *cpu, hwaddr addr, hwaddr len);
-#ifndef CONFIG_USER_ONLY
 LowCore *cpu_map_lowcore(CPUS390XState *env);
 void cpu_unmap_lowcore(LowCore *lowcore);
 #endif /* CONFIG_USER_ONLY */
-- 
2.21.3




[PATCH 06/11] target/s390x: Only compile decode_basedisp() on system-mode

2020-05-09 Thread Philippe Mathieu-Daudé
The decode_basedisp*() methods are only used in ioinst.c,
which is only built in system-mode emulation.

Signed-off-by: Philippe Mathieu-Daudé 
---
 target/s390x/internal.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/s390x/internal.h b/target/s390x/internal.h
index 8c95c734db..c1678dc6bc 100644
--- a/target/s390x/internal.h
+++ b/target/s390x/internal.h
@@ -204,6 +204,8 @@ enum cc_op {
 CC_OP_MAX
 };
 
+#ifndef CONFIG_USER_ONLY
+
 static inline hwaddr decode_basedisp_s(CPUS390XState *env, uint32_t ipb,
uint8_t *ar)
 {
@@ -225,6 +227,8 @@ static inline hwaddr decode_basedisp_s(CPUS390XState *env, 
uint32_t ipb,
 /* Base/displacement are at the same locations. */
 #define decode_basedisp_rs decode_basedisp_s
 
+#endif /* CONFIG_USER_ONLY */
+
 /* arch_dump.c */
 int s390_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
   int cpuid, void *opaque);
-- 
2.21.3




[PATCH 01/11] plugins: Restrict functions handling hwaddr to system-mode

2020-05-09 Thread Philippe Mathieu-Daudé
Restrict qemu_plugin_hwaddr_is_io() and
qemu_plugin_hwaddr_device_offset() to system-mode.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/qemu/qemu-plugin.h |  2 ++
 plugins/api.c  | 17 ++---------------
 2 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/include/qemu/qemu-plugin.h b/include/qemu/qemu-plugin.h
index 5502e112c8..06c271a107 100644
--- a/include/qemu/qemu-plugin.h
+++ b/include/qemu/qemu-plugin.h
@@ -326,6 +326,7 @@ bool qemu_plugin_mem_is_store(qemu_plugin_meminfo_t info);
 struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
   uint64_t vaddr);
 
+#ifndef CONFIG_USER_ONLY
 /*
  * The following additional queries can be run on the hwaddr structure
  * to return information about it. For non-IO accesses the device
@@ -333,6 +334,7 @@ struct qemu_plugin_hwaddr 
*qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
  */
 bool qemu_plugin_hwaddr_is_io(struct qemu_plugin_hwaddr *hwaddr);
 uint64_t qemu_plugin_hwaddr_device_offset(const struct qemu_plugin_hwaddr 
*haddr);
+#endif /* CONFIG_USER_ONLY */
 
 typedef void
 (*qemu_plugin_vcpu_mem_cb_t)(unsigned int vcpu_index,
diff --git a/plugins/api.c b/plugins/api.c
index 53c8a73582..785ad2e45e 100644
--- a/plugins/api.c
+++ b/plugins/api.c
@@ -249,7 +249,8 @@ bool qemu_plugin_mem_is_store(qemu_plugin_meminfo_t info)
  * Virtual Memory queries
  */
 
-#ifdef CONFIG_SOFTMMU
+#ifndef CONFIG_USER_ONLY
+
 static __thread struct qemu_plugin_hwaddr hwaddr_info;
 
 struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
@@ -267,26 +268,14 @@ struct qemu_plugin_hwaddr 
*qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
 
 return &hwaddr_info;
 }
-#else
-struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
-  uint64_t vaddr)
-{
-return NULL;
-}
-#endif
 
 bool qemu_plugin_hwaddr_is_io(struct qemu_plugin_hwaddr *hwaddr)
 {
-#ifdef CONFIG_SOFTMMU
 return hwaddr->is_io;
-#else
-return false;
-#endif
 }
 
 uint64_t qemu_plugin_hwaddr_device_offset(const struct qemu_plugin_hwaddr 
*haddr)
 {
-#ifdef CONFIG_SOFTMMU
 if (haddr) {
 if (!haddr->is_io) {
 ram_addr_t ram_addr = qemu_ram_addr_from_host((void *) 
haddr->v.ram.hostaddr);
@@ -299,7 +288,6 @@ uint64_t qemu_plugin_hwaddr_device_offset(const struct 
qemu_plugin_hwaddr *haddr
 return haddr->v.io.offset;
 }
 }
-#endif
 return 0;
 }
 
@@ -308,7 +296,6 @@ uint64_t qemu_plugin_hwaddr_device_offset(const struct 
qemu_plugin_hwaddr *haddr
  * will be. This helps the plugin dimension per-vcpu arrays.
  */
 
-#ifndef CONFIG_USER_ONLY
 static MachineState * get_ms(void)
 {
 return MACHINE(qdev_get_machine());
-- 
2.21.3




[PATCH 00/11] exec/cpu: Poison 'hwaddr' type in user-mode emulation

2020-05-09 Thread Philippe Mathieu-Daudé
The 'hwaddr' type declared in "exec/hwaddr.h" is meant for
system-mode emulation only. Poison it in user-mode code.

Philippe Mathieu-Daudé (11):
  plugins: Restrict functions handling hwaddr to system-mode
  sysemu/accel: Restrict machine methods to system-mode
  sysemu/tcg: Only declare tcg_allowed when TCG is available
  sysemu/hvf: Only declare hvf_allowed when HVF is available
  target/ppc: Restrict PPCVirtualHypervisorClass to system-mode
  target/s390x: Only compile decode_basedisp() on system-mode
  target/s390x/helper: Clean ifdef'ry
  target/s390x: Restrict system-mode declarations
  target/cpu: Restrict handlers using hwaddr type to system-mode
  exec: Use 'cpu-common.h' instead of system-mode specific 'hwaddr.h'
  exec/cpu-common: Poison hwaddr type in user-mode emulation

 include/disas/disas.h   |  2 +-
 include/exec/cpu-common.h   |  8 ++--
 include/hw/core/cpu.h   | 10 ++
 include/qemu/qemu-plugin.h  |  2 ++
 include/sysemu/accel.h  |  4 +++-
 include/sysemu/hvf.h|  6 +++---
 include/sysemu/tcg.h|  2 +-
 target/alpha/cpu.h  |  4 +++-
 target/arm/cpu.h|  6 +++---
 target/arm/internals.h  |  4 
 target/cris/cpu.h   |  2 ++
 target/hppa/cpu.h   |  2 +-
 target/i386/cpu.h   |  2 ++
 target/m68k/cpu.h   |  7 ++-
 target/microblaze/cpu.h |  5 -
 target/mips/internal.h  |  2 +-
 target/nios2/cpu.h  |  5 -
 target/openrisc/cpu.h   |  3 ++-
 target/ppc/cpu.h|  6 +++---
 target/ppc/kvm_ppc.h| 22 +++---
 target/riscv/cpu.h  | 20 ++--
 target/s390x/internal.h | 15 ++-
 target/sh4/cpu.h|  2 +-
 target/sparc/cpu.h  |  2 ++
 target/xtensa/cpu.h | 12 +++-
 plugins/api.c   | 17 ++---
 target/hppa/cpu.c   |  4 +++-
 target/ppc/translate_init.inc.c |  6 +-
 target/s390x/helper.c   |  7 ++-
 29 files changed, 110 insertions(+), 79 deletions(-)

-- 
2.21.3




[PATCH 02/11] sysemu/accel: Restrict machine methods to system-mode

2020-05-09 Thread Philippe Mathieu-Daudé
Restrict init_machine(), setup_post() and has_memory()
to system-mode.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/sysemu/accel.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/sysemu/accel.h b/include/sysemu/accel.h
index 47e5788530..e08b8ab8fa 100644
--- a/include/sysemu/accel.h
+++ b/include/sysemu/accel.h
@@ -37,10 +37,12 @@ typedef struct AccelClass {
 /*< public >*/
 
 const char *name;
+#ifndef CONFIG_USER_ONLY
 int (*init_machine)(MachineState *ms);
 void (*setup_post)(MachineState *ms, AccelState *accel);
 bool (*has_memory)(MachineState *ms, AddressSpace *as,
hwaddr start_addr, hwaddr size);
+#endif
 bool *allowed;
 /*
  * Array of global properties that would be applied when specific
-- 
2.21.3




[PATCH 05/11] target/ppc: Restrict PPCVirtualHypervisorClass to system-mode

2020-05-09 Thread Philippe Mathieu-Daudé
The code related to PPC Virtual Hypervisor is pointless in user-mode.

Signed-off-by: Philippe Mathieu-Daudé 
---
 target/ppc/cpu.h|  4 ++--
 target/ppc/kvm_ppc.h| 22 +++++++++++-----------
 target/ppc/translate_init.inc.c |  4 ++++
 3 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 6b6dd7e483..73920a9cac 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1176,6 +1176,7 @@ PowerPCCPUClass *ppc_cpu_class_by_pvr(uint32_t pvr);
 PowerPCCPUClass *ppc_cpu_class_by_pvr_mask(uint32_t pvr);
 PowerPCCPUClass *ppc_cpu_get_family_class(PowerPCCPUClass *pcc);
 
+#ifndef CONFIG_USER_ONLY
 struct PPCVirtualHypervisorClass {
 InterfaceClass parent;
 void (*hypercall)(PPCVirtualHypervisor *vhyp, PowerPCCPU *cpu);
@@ -1189,10 +1190,8 @@ struct PPCVirtualHypervisorClass {
 void (*hpte_set_r)(PPCVirtualHypervisor *vhyp, hwaddr ptex, uint64_t pte1);
 void (*get_pate)(PPCVirtualHypervisor *vhyp, ppc_v3_pate_t *entry);
 target_ulong (*encode_hpt_for_kvm_pr)(PPCVirtualHypervisor *vhyp);
-#ifndef CONFIG_USER_ONLY
 void (*cpu_exec_enter)(PPCVirtualHypervisor *vhyp, PowerPCCPU *cpu);
 void (*cpu_exec_exit)(PPCVirtualHypervisor *vhyp, PowerPCCPU *cpu);
-#endif
 };
 
 #define TYPE_PPC_VIRTUAL_HYPERVISOR "ppc-virtual-hypervisor"
@@ -1204,6 +1203,7 @@ struct PPCVirtualHypervisorClass {
 #define PPC_VIRTUAL_HYPERVISOR_GET_CLASS(obj) \
 OBJECT_GET_CLASS(PPCVirtualHypervisorClass, (obj), \
  TYPE_PPC_VIRTUAL_HYPERVISOR)
+#endif /* CONFIG_USER_ONLY */
 
 void ppc_cpu_do_interrupt(CPUState *cpu);
 bool ppc_cpu_exec_interrupt(CPUState *cpu, int int_req);
diff --git a/target/ppc/kvm_ppc.h b/target/ppc/kvm_ppc.h
index fcaf745516..701c0c262b 100644
--- a/target/ppc/kvm_ppc.h
+++ b/target/ppc/kvm_ppc.h
@@ -280,6 +280,17 @@ static inline bool kvmppc_has_cap_spapr_vfio(void)
 return false;
 }
 
+static inline void kvmppc_read_hptes(ppc_hash_pte64_t *hptes,
+ hwaddr ptex, int n)
+{
+abort();
+}
+
+static inline void kvmppc_write_hpte(hwaddr ptex, uint64_t pte0, uint64_t pte1)
+{
+abort();
+}
+
 #endif /* !CONFIG_USER_ONLY */
 
 static inline bool kvmppc_has_cap_epr(void)
@@ -310,17 +321,6 @@ static inline int kvmppc_load_htab_chunk(QEMUFile *f, int 
fd, uint32_t index,
 abort();
 }
 
-static inline void kvmppc_read_hptes(ppc_hash_pte64_t *hptes,
- hwaddr ptex, int n)
-{
-abort();
-}
-
-static inline void kvmppc_write_hpte(hwaddr ptex, uint64_t pte0, uint64_t pte1)
-{
-abort();
-}
-
 static inline bool kvmppc_has_cap_fixup_hcalls(void)
 {
 abort();
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 2b6e832c4c..4ea0cc501b 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -10946,16 +10946,20 @@ static const TypeInfo ppc_cpu_type_info = {
 .class_init = ppc_cpu_class_init,
 };
 
+#ifndef CONFIG_USER_ONLY
 static const TypeInfo ppc_vhyp_type_info = {
 .name = TYPE_PPC_VIRTUAL_HYPERVISOR,
 .parent = TYPE_INTERFACE,
 .class_size = sizeof(PPCVirtualHypervisorClass),
 };
+#endif
 
 static void ppc_cpu_register_types(void)
 {
 type_register_static(&ppc_cpu_type_info);
+#ifndef CONFIG_USER_ONLY
 type_register_static(&ppc_vhyp_type_info);
+#endif
 }
 
 type_init(ppc_cpu_register_types)
-- 
2.21.3




[PATCH 03/11] sysemu/tcg: Only declare tcg_allowed when TCG is available

2020-05-09 Thread Philippe Mathieu-Daudé
When TCG is not available, the tcg_allowed variable does not exist.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/sysemu/tcg.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/sysemu/tcg.h b/include/sysemu/tcg.h
index 7d116d2e80..d9d3ca8559 100644
--- a/include/sysemu/tcg.h
+++ b/include/sysemu/tcg.h
@@ -8,9 +8,9 @@
 #ifndef SYSEMU_TCG_H
 #define SYSEMU_TCG_H
 
-extern bool tcg_allowed;
 void tcg_exec_init(unsigned long tb_size);
 #ifdef CONFIG_TCG
+extern bool tcg_allowed;
 #define tcg_enabled() (tcg_allowed)
 #else
 #define tcg_enabled() 0
-- 
2.21.3




Re: [INFO] Some preliminary performance data

2020-05-09 Thread Aleksandar Markovic
On Sat, 9 May 2020 at 14:50, Laurent Desnogues wrote:
>
> On Sat, May 9, 2020 at 2:38 PM Aleksandar Markovic
>  wrote:
> >
> > On Sat, 9 May 2020 at 13:37, Laurent Desnogues wrote:
> > >
> > > On Sat, May 9, 2020 at 12:17 PM Aleksandar Markovic
> > >  wrote:
> > > > On Wed, 6 May 2020 at 13:26, Alex Bennée wrote:
> > > >
> > > > > This is very much driven by how much code generation vs running you 
> > > > > see.
> > > > > In most of my personal benchmarks I never really notice code 
> > > > > generation
> > > > > because I give my machines large amounts of RAM so code tends to stay
> > > > > resident and does not need to be re-translated. When the optimiser shows up
> > > > > it's usually accompanied by high TB flush and invalidate counts in 
> > > > > "info
> > > > > jit" because we are doing more translation than we usually do.
> > > > >
> > > >
> > > > Yes, I think the machine was set up with only 128MB RAM.
> > > >
> > > > That would be an interesting experiment for Ahmed actually - to
> > > > measure the impact of the given RAM size on performance.
> > > >
> > > > But it looks like, at least for machines with small RAM, the translation
> > > > phase will take a significant percentage.
> > > >
> > > > I am attaching the call graph for the translation phase of "Hello World" built
> > > > for mips and emulated by QEMU (tb_gen_code() and its callees).
> > >
> >
> > Hi, Laurent,
> >
> > "Hello world" was taken as an example where code generation is
> > dominant. It was taken to illustrate how performance-wise code
> > generation overhead is distributed (illustrating dominance of a
> > single function).
> >
> > While "Hello world" by itself is not a significant example, it conveys
> > useful information: it shows the overhead of QEMU
> > linux-user executable initialization, and the code generation spent on
> > emulating the loading of the target executable and printing a simple
> > message. This can be roughly subtracted from the result for
> > a meaningful benchmark.
> >
> > Booting of a virtual machine is a legitimate scenario for measuring
> > performance, and perhaps even attempting improving it.
> >
> > Everything should be measured - code generation, JIT-ed code
> > execution, and helpers execution - in all cases, and checked
> > whether it departs from expected behavior.
> >
> > Let's say that we emulate a benchmark that basically runs some
> > code in a loop, or an algorithm - one would expect that after a
> > while, while increasing number of iterations of the loop, or the
> > size of data in the algorithm, code generation becomes less and
> > less significant, converging to zero. Well, this should be confirmed
> > with an experiment, and not taken for granted.
> >
> > I think limiting measurements to only, let's say, the execution of
> > JIT-ed code (if that is what you implied) is a logical mistake.
> > The right conclusions should be drawn from the complete
> > picture, shouldn't they?
>
> I explicitly wrote that you should consider a wide spectrum of
> programs so I think we're in violent agreement ;-)
>

lol, I will write down "violent agreement" in my mental notebook
of useful phrases. :))

Yours,
Aleksandar

> Thanks,
>
> Laurent
>
> > Yours,
> > Aleksandar
> >
> > > Sorry if I'm stating the obvious but both "Hello World" and a
> > > Linux boot will exhibit similar behaviors with low reuse of
> > > translated blocks, which means translation will show up in
> > > profiles as a lot of time is spent in translating blocks that
> > > will run once.  If you push in that direction you might reach
> > > the conclusion that a non-JIT simulator is faster than QEMU.
> > >
> > > You will have to carefully select the tests you run:  you need
> > > a large spectrum from Linux boot, "Hello World" up to synthetic
> > > benchmarks.
> > >
> > > Again sorry if that was too trivial :-)
> > >
> > > Laurent



Re: [INFO] Some preliminary performance data

2020-05-09 Thread Laurent Desnogues
On Sat, May 9, 2020 at 2:38 PM Aleksandar Markovic
 wrote:
>
> On Sat, 9 May 2020 at 13:37, Laurent Desnogues wrote:
> >
> > On Sat, May 9, 2020 at 12:17 PM Aleksandar Markovic
> >  wrote:
> > > On Wed, 6 May 2020 at 13:26, Alex Bennée wrote:
> > >
> > > > This is very much driven by how much code generation vs running you see.
> > > > In most of my personal benchmarks I never really notice code generation
> > > > because I give my machines large amounts of RAM so code tends to stay
> > > > resident and does not need to be re-translated. When the optimiser shows up
> > > > it's usually accompanied by high TB flush and invalidate counts in "info
> > > > jit" because we are doing more translation than we usually do.
> > > >
> > >
> > > Yes, I think the machine was set up with only 128MB RAM.
> > >
> > > That would be an interesting experiment for Ahmed actually - to
> > > measure the impact of the given RAM size on performance.
> > >
> > > But it looks like, at least for machines with small RAM, the translation
> > > phase will take a significant percentage.
> > >
> > > I am attaching the call graph for the translation phase of "Hello World" built
> > > for mips and emulated by QEMU (tb_gen_code() and its callees).
> >
>
> Hi, Laurent,
>
> "Hello world" was taken as an example where code generation is
> dominant. It was taken to illustrate how performance-wise code
> generation overhead is distributed (illustrating dominance of a
> single function).
>
> While "Hello world" by itself is not a significant example, it conveys
> useful information: it shows the overhead of QEMU
> linux-user executable initialization, and the code generation spent on
> emulating the loading of the target executable and printing a simple
> message. This can be roughly subtracted from the result for
> a meaningful benchmark.
>
> Booting of a virtual machine is a legitimate scenario for measuring
> performance, and perhaps even attempting improving it.
>
> Everything should be measured - code generation, JIT-ed code
> execution, and helpers execution - in all cases, and checked
> whether it departs from expected behavior.
>
> Let's say that we emulate a benchmark that basically runs some
> code in a loop, or an algorithm - one would expect that after a
> while, while increasing number of iterations of the loop, or the
> size of data in the algorithm, code generation becomes less and
> less significant, converging to zero. Well, this should be confirmed
> with an experiment, and not taken for granted.
>
> I think limiting measurements to only, let's say, the execution of
> JIT-ed code (if that is what you implied) is a logical mistake.
> The right conclusions should be drawn from the complete
> picture, shouldn't they?

I explicitly wrote that you should consider a wide spectrum of
programs so I think we're in violent agreement ;-)

Thanks,

Laurent

> Yours,
> Aleksandar
>
> > Sorry if I'm stating the obvious but both "Hello World" and a
> > Linux boot will exhibit similar behaviors with low reuse of
> > translated blocks, which means translation will show up in
> > profiles as a lot of time is spent in translating blocks that
> > will run once.  If you push in that direction you might reach
> > the conclusion that a non-JIT simulator is faster than QEMU.
> >
> > You will have to carefully select the tests you run:  you need
> > a large spectrum from Linux boot, "Hello World" up to synthetic
> > benchmarks.
> >
> > Again sorry if that was too trivial :-)
> >
> > Laurent



Re: [INFO] Some preliminary performance data

2020-05-09 Thread Aleksandar Markovic
On Sat, 9 May 2020 at 13:37, Laurent Desnogues wrote:
>
> On Sat, May 9, 2020 at 12:17 PM Aleksandar Markovic
>  wrote:
> > On Wed, 6 May 2020 at 13:26, Alex Bennée wrote:
> >
> > > This is very much driven by how much code generation vs running you see.
> > > In most of my personal benchmarks I never really notice code generation
> > > because I give my machines large amounts of RAM so code tends to stay
> > > resident and does not need to be re-translated. When the optimiser shows up
> > > it's usually accompanied by high TB flush and invalidate counts in "info
> > > jit" because we are doing more translation than we usually do.
> > >
> >
> > Yes, I think the machine was set up with only 128MB RAM.
> >
> > That would be an interesting experiment for Ahmed actually - to
> > measure the impact of the given RAM size on performance.
> >
> > But it looks like, at least for machines with small RAM, the translation
> > phase will take a significant percentage.
> >
> > I am attaching the call graph for the translation phase of "Hello World" built
> > for mips and emulated by QEMU (tb_gen_code() and its callees).
>

Hi, Laurent,

"Hello world" was taken as an example where code generation is
dominant. It was taken to illustrate how performance-wise code
generation overhead is distributed (illustrating dominance of a
single function).

While "Hello world" by itself is not a significant example, it conveys
useful information: it shows the overhead of QEMU
linux-user executable initialization, and the code generation spent on
emulating the loading of the target executable and printing a simple
message. This can be roughly subtracted from the result for
a meaningful benchmark.

Booting of a virtual machine is a legitimate scenario for measuring
performance, and perhaps even attempting improving it.

Everything should be measured - code generation, JIT-ed code
execution, and helpers execution - in all cases, and checked
whether it departs from expected behavior.

Let's say that we emulate a benchmark that basically runs some
code in a loop, or an algorithm - one would expect that after a
while, while increasing number of iterations of the loop, or the
size of data in the algorithm, code generation becomes less and
less significant, converging to zero. Well, this should be confirmed
with an experiment, and not taken for granted.

I think limiting measurements to only, let's say, the execution of
JIT-ed code (if that is what you implied) is a logical mistake.
The right conclusions should be drawn from the complete
picture, shouldn't they?

Yours,
Aleksandar

> Sorry if I'm stating the obvious but both "Hello World" and a
> Linux boot will exhibit similar behaviors with low reuse of
> translated blocks, which means translation will show up in
> profiles as a lot of time is spent in translating blocks that
> will run once.  If you push in that direction you might reach
> the conclusion that a non-JIT simulator is faster than QEMU.
>
> You will have to carefully select the tests you run:  you need
> a large spectrum from Linux boot, "Hello World" up to synthetic
> benchmarks.
>
> Again sorry if that was too trivial :-)
>
> Laurent



Re: [PATCH v4 5/6] net/colo-compare.c, softmmu/vl.c: Check that colo-compare is active

2020-05-09 Thread Lukas Straub
On Fri, 8 May 2020 06:50:39 +
"Zhang, Chen"  wrote:

> > -Original Message-
> > From: Lukas Straub 
> > Sent: Friday, May 8, 2020 2:11 PM
> > To: Zhang, Chen 
> > Cc: qemu-devel ; Li Zhijian
> > ; Jason Wang ; Marc-
> > André Lureau ; Paolo Bonzini
> > 
> > Subject: Re: [PATCH v4 5/6] net/colo-compare.c, softmmu/vl.c: Check that
> > colo-compare is active
> > 
> > On Fri, 8 May 2020 02:26:21 +
> > "Zhang, Chen"  wrote:
> >   
> > > > -Original Message-
> > > > From: Lukas Straub 
> > > > Sent: Thursday, May 7, 2020 11:54 PM
> > > > To: Zhang, Chen 
> > > > Cc: qemu-devel ; Li Zhijian
> > > > ; Jason Wang ; Marc-
> > > > André Lureau ; Paolo Bonzini
> > > > 
> > > > Subject: Re: [PATCH v4 5/6] net/colo-compare.c, softmmu/vl.c: Check
> > > > that colo-compare is active
> > > >
> > > > On Thu, 7 May 2020 11:38:04 +
> > > > "Zhang, Chen"  wrote:
> > > >  
> > > > > > -Original Message-
> > > > > > From: Lukas Straub 
> > > > > > Sent: Monday, May 4, 2020 6:28 PM
> > > > > > To: qemu-devel 
> > > > > > Cc: Zhang, Chen ; Li Zhijian
> > > > > > ; Jason Wang ;
> > > > > > Marc- André Lureau ; Paolo Bonzini
> > > > > > 
> > > > > > Subject: [PATCH v4 5/6] net/colo-compare.c, softmmu/vl.c: Check
> > > > > > that
> > > > > > colo- compare is active
> > > > > >
> > > > > > If the colo-compare object is removed before failover and a
> > > > > > checkpoint happens, qemu crashes because it tries to lock the
> > > > > > destroyed event_mtx in colo_notify_compares_event.
> > > > > >
> > > > > > Fix this by checking if everything is initialized by introducing
> > > > > > a new variable colo_compare_active which is protected by a new
> > > > > > mutex colo_compare_mutex. The new mutex also protects against
> > > > > > concurrent access of the net_compares list and makes sure that
> > > > > > colo_notify_compares_event isn't active while we destroy
> > > > > > event_mtx and event_complete_cond.
> > > > > >
> > > > > > With this it also is again possible to use colo without
> > > > > > colo-compare (periodic
> > > > > > mode) and to use multiple colo-compare for multiple network  
> > interfaces.  
> > > > > >  
> > > > >
> > > > > Hi Lukas,
> > > > >
> > > > > For this case I think we don't need to touch vl.c code, we can
> > > > > solve this  
> > > > issue from another perspective:  
> > > > > How to remove colo-compare?
> > > > > The user will use the qemu-monitor or a QMP command to disable an object, so
> > > > > we just need to return an operation failure when the user tries to remove
> > > > > the colo-compare
> > > > object while COLO is running.
> > > >
> > > > Yeah, but that still leaves the other problem that colo can't be
> > > > used without colo-compare (qemu crashes then).  
> > >
> > > Yes, COLO-compare is a necessary module in the original COLO design.
> > > In most cases, the user needs it to do dynamic sync.
> > > For rare cases, maybe we can add a new colo-compare parameter to
> > bypass all the network workload.
> > 
> > I think such an parameter would only be a workaround instead of a real
> > solution like this patch.  
> 
> The root problem is why COLO-compare is necessary.
> Yes, maybe someone wants to use a pure periodic synchronization mode,
> but that means losing all guest network support (without
> colo-compare/filter-mirror/filter-redirector/filter-rewriter).
> The secondary guest is just a solid backup for the primary guest; when
> failover occurs, newly established stateful connections (like TCP)
> will crash and userspace needs to handle this. That defeats the original
> purpose of the COLO FT/HA solution (no need to do HA in the application
> layer); it looks like a normal/remote periodic VM snapshot here.

Sure, but maybe the user doesn't need (reliable) network on failover. Also 
proper network support with periodic mode can easily be implemented by 
modifying filter-buffer to buffer packets until checkpoint. Being able to use 
colo without colo-compare gives more flexibility to the user.

Regards,
Lukas Straub

> Dave or Jason have any comments here? 
> 
> Thanks
> Zhang Chen
> 
> > 
> > Regards,
> > Lukas Straub
> >   
> > > Thanks
> > > Zhang Chen
> > >  
> > > >
> > > > Regards,
> > > > Lukas Straub
> > > >  
> > > > > Thanks
> > > > > Zhang Chen
> > > > >  
> > > > > > Signed-off-by: Lukas Straub 
> > > > > > ---
> > > > > >  net/colo-compare.c | 35 +--
> > > > > >  net/colo-compare.h |  1 +
> > > > > >  softmmu/vl.c   |  2 ++
> > > > > >  3 files changed, 32 insertions(+), 6 deletions(-)
> > > > > >
> > > > > > diff --git a/net/colo-compare.c b/net/colo-compare.c index
> > > > > > 56db3d3bfc..c7572d75e9 100644
> > > > > > --- a/net/colo-compare.c
> > > > > > +++ b/net/colo-compare.c
> > > > > > @@ -54,6 +54,8 @@ static NotifierList colo_compare_notifiers =
> > > > > >  #define REGULAR_PACKET_CHECK_MS 3000
> > > > > >  #define DEFAULT_TIME_OUT_MS 3000
> > > > > >
> > > > > > +static QemuMutex colo_compare_mutex;
> > > > > > +static bool
> > > > > > 

Re: [RFC v2 7/9] virito-pci: implement queue_enabled method

2020-05-09 Thread Philippe Mathieu-Daudé

Typo "virtio-pci" in patch subject.

On 5/8/20 6:32 PM, Cindy Lu wrote:

From: Jason Wang 

With version 1, we can detect whether a queue is enabled via
queue_enabled.

Signed-off-by: Jason Wang 
---
  hw/virtio/virtio-pci.c | 13 +
  1 file changed, 13 insertions(+)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index c6b47a9c73..4aaf5d953e 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1103,6 +1103,18 @@ static AddressSpace *virtio_pci_get_dma_as(DeviceState *d)
  return pci_get_address_space(dev);
  }
  
+static bool virtio_pci_queue_enabled(DeviceState *d, int n)
+{
+VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
+VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+
+if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
+return proxy->vqs[vdev->queue_sel].enabled;
+}
+
+return virtio_queue_get_desc_addr(vdev, n) != 0;
+}
+
  static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
 struct virtio_pci_cap *cap)
  {
@@ -2053,6 +2065,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
  k->ioeventfd_enabled = virtio_pci_ioeventfd_enabled;
  k->ioeventfd_assign = virtio_pci_ioeventfd_assign;
  k->get_dma_as = virtio_pci_get_dma_as;
+k->queue_enabled = virtio_pci_queue_enabled;
  }
  
  static const TypeInfo virtio_pci_bus_info = {







Re: [PATCH v5 02/19] exec: Fix cpu_watchpoint_address_matches address length

2020-05-09 Thread Philippe Mathieu-Daudé

On 5/8/20 5:43 PM, Richard Henderson wrote:

The only caller of cpu_watchpoint_address_matches passes
TARGET_PAGE_SIZE, so the bug is not currently visible.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
  exec.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index 2874bb5088..5162f0d12f 100644
--- a/exec.c
+++ b/exec.c
@@ -1127,7 +1127,7 @@ int cpu_watchpoint_address_matches(CPUState *cpu, vaddr addr, vaddr len)
  int ret = 0;
  
  QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {

-if (watchpoint_address_matches(wp, addr, TARGET_PAGE_SIZE)) {
+if (watchpoint_address_matches(wp, addr, len)) {
  ret |= wp->flags;
  }
  }



Reviewed-by: Philippe Mathieu-Daudé 



Re: [INFO] Some preliminary performance data

2020-05-09 Thread Laurent Desnogues
On Sat, May 9, 2020 at 12:17 PM Aleksandar Markovic
 wrote:
>  On Wed, 6 May 2020 at 13:26, Alex Bennée wrote:
>
> > This is very much driven by how much code generation vs running you see.
> > In most of my personal benchmarks I never really notice code generation
> > because I give my machines large amounts of RAM so code tends to stay
> > resident so not need to be re-translated. When the optimiser shows up
> > it's usually accompanied by high TB flush and invalidate counts in "info
> > jit" because we are doing more translation that we usually do.
> >
>
> Yes, I think the machine was set up with only 128MB RAM.
>
> That would be an interesting experiment for Ahmed actually - to
> measure the impact of the amount of RAM on performance.
>
> But it looks like, at least for machines with small RAM, the translation
> phase will take a significant percentage.
>
> I am attaching the call graph for the translation phase of "Hello World"
> built for mips and emulated by QEMU (tb_gen_code() and its callees).

Sorry if I'm stating the obvious but both "Hello World" and a
Linux boot will exhibit similar behaviors with low reuse of
translated blocks, which means translation will show up in
profiles as a lot of time is spent in translating blocks that
will run once.  If you push in that direction you might reach
the conclusion that a non-JIT simulator is faster than QEMU.

You will have to carefully select the tests you run:  you need
a large spectrum from Linux boot, "Hello World" up to synthetic
benchmarks.

Again sorry if that was too trivial :-)

Laurent



[Bug 1877716] Re: Win10 guest unusable after a few minutes

2020-05-09 Thread zkrx
I can also replicate the problem on current master. I can solve it by
building master with --disable-linux-io-uring.

I also tried building Linux 5.4.39 where the issue happens too:
Linux cc 5.4.39qemu #1 SMP PREEMPT Sat May 9 12:11:38 CEST 2020 x86_64 GNU/Linux

I attached the logs of that latest run.

** Attachment added: "win10_5.4.39.log"
   
https://bugs.launchpad.net/qemu/+bug/1877716/+attachment/5368934/+files/win10_5.4.39.log

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877716

Title:
  Win10 guest unusable after a few minutes

Status in QEMU:
  New

Bug description:
  On Arch Linux, the recent qemu package update seems to misbehave on
  some systems. In my case, my Windows 10 guest runs fine for around 5
  minutes and then starts to get really sluggish, even unresponsive. It
  needs to be forced off. I could reproduce this on a minimal VM with no
  passthrough, although my current testing setup involves an nvme pcie
  passthrough.

  I bisected it to the following commit which rapidly starts to run sluggishly 
on my setup:
  https://github.com/qemu/qemu/commit/73fd282e7b6dd4e4ea1c3bbb3d302c8db51e4ccf

  I've run the previous commit (
  https://github.com/qemu/qemu/commit/b321051cf48ccc2d3d832af111d688f2282f089b
  ) for the entire night without an issue so far.

  I believe this might be a duplicate of
  https://bugs.launchpad.net/qemu/+bug/1873032 , although I'm not sure.

  Linux cc 5.6.10-arch1-1 #1 SMP PREEMPT Sat, 02 May 2020 19:11:54 + x86_64 
GNU/Linux
  AMD Ryzen 7 2700X Eight-Core Processor

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877716/+subscriptions



Re: [INFO] Some preliminary performance data

2020-05-09 Thread Aleksandar Markovic
On Sat, 9 May 2020 at 12:16, Aleksandar Markovic <
aleksandar.qemu.de...@gmail.com> wrote:
>
>
>
> On Wed, 6 May 2020 at 13:26, Alex Bennée wrote:
> >
> >
> > Aleksandar Markovic  writes:
> >
> > Some preliminary thoughts
> >
>
> Alex, many thanks for all your thoughts and hints that are truly helpful!
>
> I will most likely respond to all of them in some future mail, but for now
> I will comment just one.
>

It looks like right-click and "View Image" works for HTML mails with
embedded images - it displays the image in its original resolution.
So, no need for attachments. Good to know for Ahmed's potential
reports with images.

Aleksandar

> > >> Hi, all.
> > >>
> > >> I just want to share with you some bits and pieces of data that I got
> > >> while doing some preliminary experimentation for the GSoC project
> > >> "TCG Continuous Benchmarking", which Ahmed Karaman, a student in the
> > >> fourth (final) year of the Electrical Engineering Faculty in Cairo,
> > >> will execute.
> > >>
> > >> *User Mode*
> > >>
> > >>* As expected, for any program dealing with any substantial
> > >> floating-point calculation, the softfloat library will be the
> > >> heaviest consumer of CPU cycles.
> > >>* We plan to examine the performance behaviour of non-FP programs
> > >> (integer arithmetic), or even non-numeric programs (sorting strings,
> > >> for example).
> >
> > Emilio was the last person to do extensive bench-marking on TCG and he
> > used a mild fork of the venerable nbench:
> >
> >   https://github.com/cota/dbt-bench
> >
> > as the hot code is fairly small it offers a good way of testing quality
> > of the output. Larger programs will differ as they can involve more code
> > generation.
> >
> > >>
> > >> *System Mode*
> > >>
> > >>* I did profiling of booting several machines using a tool called
> > >> callgrind (a part of valgrind). The tool offers a plethora of
> > >> information; however, it seems to be a little confused by the usage of
> > >> coroutines, and that makes some of its reports look very illogical, or
> > >> plain ugly.
> >
> > Doesn't running through valgrind inherently serialise execution anyway?
> > If you are looking for latency caused by locks we have support for the
> > QEMU sync profiler built into the code. See "help sync-profile" on
> > the HMP.
> >
> > >> Still, it seems valid data can be extracted from it. Without going
> > >> into details, here is what it says for one machine (bear in mind that
> > >> results may vary to a great extent between machines):
> >
> > You can also use perf to use sampling to find hot points in the code.
> > One of last years GSoC student wrote some patches that included the
> > ability to dump a jit info file for perf to consume. We never got it
> > merged in the end but it might be worth having a go at pulling the
> > relevant bits out from:
> >
> >   Subject: [PATCH v9 00/13] TCG code quality tracking and perf integration
> >   Date: Mon,  7 Oct 2019 16:28:26 +0100
> >   Message-Id: <20191007152839.30804-1-alex.ben...@linaro.org>
> >
> > >>  ** The booting involved six threads: one for display handling, one
> > >> for emulation, and four more. The last four did almost nothing during
> > >> boot, sitting idle almost the entire time, waiting for something. As
> > >> far as "Total Instruction Fetch Count" (the main measure used in
> > >> callgrind) goes, it was distributed in a 1:3 proportion between the
> > >> display thread and the emulation thread (the rest of the threads were
> > >> negligible) (but, interestingly enough, for another machine that
> > >> proportion was 1:20).
> > >>  ** The display thread is dominated by the vga_update_display()
> > >> function (21.5% "self" time, and 51.6% "self + callees" time, called
> > >> almost 4 times). Other functions worth mentioning are
> > >> cpu_physical_memory_snapshot_get_dirty() and
> > >> memory_region_snapshot_get_dirty(), which are very small functions,
> > >> but are both invoked over 26 000 000 times, and together contribute
> > >> over 20% of the display thread's instruction fetch count.
> >
> > The memory region tracking code will end up forcing the slow path for a
> > lot of memory accesses to video memory via softmmu. You may want to
> > measure if there is a difference using one of the virtio based graphics
> > displays.
> >
> > >>  ** Focusing now on the emulation thread, "Total Instruction Fetch
> > >> Counts" were roughly distributed this way:
> > >>- 15.7% is execution of JIT-ed code from the translation block
> > >> buffer
> > >>- 39.9% is execution of helpers
> > >>- 44.4% is the code translation stage, including some coroutine
> > >> activities
> > >> Top two among helpers:
> > >>   - helper_le_stl_memory()
> >
> > I assume that is the MMU slow-path being called from the generated code.
> >
> > >>   - helper_lookup_tb_ptr() (this one is invoked a whopping
> > >> 36 000 000 times)
> >
> > This is an optimisation to avoid exiting the run-loop to find the 

Re: [PATCH v5 00/19] target/arm: sve load/store improvements

2020-05-09 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20200508154359.7494-1-richard.hender...@linaro.org/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Message-id: 20200508154359.7494-1-richard.hender...@linaro.org
Subject: [PATCH v5 00/19] target/arm: sve load/store improvements
Type: series

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
6459e7a target/arm: Remove sve_memopidx
7205242 target/arm: Reuse sve_probe_page for gather loads
edbaab9 target/arm: Reuse sve_probe_page for scatter stores
6fc3734 target/arm: Reuse sve_probe_page for gather first-fault loads
4a49d05 target/arm: Use SVEContLdSt for contiguous stores
c3adb6a target/arm: Update contiguous first-fault and no-fault loads
8a8b770 target/arm: Use SVEContLdSt for multi-register contiguous loads
389d29b target/arm: Handle watchpoints in sve_ld1_r
63d2628 target/arm: Use SVEContLdSt in sve_ld1_r
469f92b target/arm: Adjust interface of sve_ld1_host_fn
12bbd07 target/arm: Add sve infrastructure for page lookup
14e0e49 target/arm: Drop manual handling of set/clear_helper_retaddr
62e4351 target/arm: Use cpu_*_data_ra for sve_ldst_tlb_fn
d904801 accel/tcg: Add endian-specific cpu_{ld, st}* operations
0551425 accel/tcg: Add probe_access_flags
64d0017 accel/tcg: Adjust probe_access call to page_check_range
35fd2ff accel/tcg: Add block comment for probe_access
7edaa91 exec: Fix cpu_watchpoint_address_matches address length
80f9aea exec: Add block comments for watchpoint routines

=== OUTPUT BEGIN ===
1/19 Checking commit 80f9aea15f56 (exec: Add block comments for watchpoint 
routines)
2/19 Checking commit 7edaa91b2b6b (exec: Fix cpu_watchpoint_address_matches 
address length)
3/19 Checking commit 35fd2ff72cf8 (accel/tcg: Add block comment for 
probe_access)
4/19 Checking commit 64d0017d5acf (accel/tcg: Adjust probe_access call to 
page_check_range)
5/19 Checking commit 055142569318 (accel/tcg: Add probe_access_flags)
6/19 Checking commit d90480155016 (accel/tcg: Add endian-specific cpu_{ld, st}* 
operations)
7/19 Checking commit 62e4351c804c (target/arm: Use cpu_*_data_ra for 
sve_ldst_tlb_fn)
ERROR: spaces required around that '*' (ctx:VxV)
#63: FILE: target/arm/sve_helper.c:4029:
+TLB(env, addr, (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
   ^

ERROR: spaces required around that '*' (ctx:WxV)
#153: FILE: target/arm/sve_helper.c:4162:
+  sve_ldst1_tlb_fn *tlb_fn)
^

total: 2 errors, 0 warnings, 455 lines checked

Patch 7/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

8/19 Checking commit 14e0e496b39d (target/arm: Drop manual handling of 
set/clear_helper_retaddr)
9/19 Checking commit 12bbd07bd501 (target/arm: Add sve infrastructure for page 
lookup)
WARNING: Block comments use a leading /* on a separate line
#32: FILE: target/arm/sve_helper.c:1633:
+/* Big-endian hosts need to frob the byte indices.  If the copy

total: 0 errors, 1 warnings, 281 lines checked

Patch 9/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
10/19 Checking commit 469f92b781ac (target/arm: Adjust interface of 
sve_ld1_host_fn)
11/19 Checking commit 63d26285f1a3 (target/arm: Use SVEContLdSt in sve_ld1_r)
12/19 Checking commit 389d29b65ff2 (target/arm: Handle watchpoints in sve_ld1_r)
13/19 Checking commit 8a8b7701adba (target/arm: Use SVEContLdSt for 
multi-register contiguous loads)
14/19 Checking commit c3adb6af934c (target/arm: Update contiguous first-fault 
and no-fault loads)
15/19 Checking commit 4a49d058ce11 (target/arm: Use SVEContLdSt for contiguous 
stores)
16/19 Checking commit 6fc3734e3350 (target/arm: Reuse sve_probe_page for gather 
first-fault loads)
17/19 Checking commit edbaab96ac2b (target/arm: Reuse sve_probe_page for 
scatter stores)
18/19 Checking commit 72052427669a (target/arm: Reuse sve_probe_page for gather 
loads)
19/19 Checking commit 6459e7a099d7 (target/arm: Remove sve_memopidx)
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20200508154359.7494-1-richard.hender...@linaro.org/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

[Bug 1877716] Re: Win10 guest unusable after a few minutes

2020-05-09 Thread Xavier
** Attachment added: "libvirt/qemu log of a 73fd272 run reproducing the issue"
   
https://bugs.launchpad.net/qemu/+bug/1877716/+attachment/5368818/+files/win10.log




[Bug 1877716] Re: Win10 guest unusable after a few minutes

2020-05-09 Thread Xavier
Note that bisecting is difficult due to the nature of the bug (does not
appear before 5 to 10 minutes on my machine).




[Bug 1877716] Re: Win10 guest unusable after a few minutes

2020-05-09 Thread Xavier
** Attachment added: "libvirt description of the machine used to debug"
   
https://bugs.launchpad.net/qemu/+bug/1877716/+attachment/5368819/+files/win10.xml




[Bug 1877716] [NEW] Win10 guest unusable after a few minutes

2020-05-09 Thread Xavier
Public bug reported:

On Arch Linux, the recent qemu package update seems to misbehave on some
systems. In my case, my Windows 10 guest runs fine for around 5 minutes
and then starts to get really sluggish, even unresponsive. It needs to be
forced off. I could reproduce this on a minimal VM with no passthrough,
although my current testing setup involves an nvme pcie passthrough.

I bisected it to the following commit which rapidly starts to run sluggishly on 
my setup:
https://github.com/qemu/qemu/commit/73fd282e7b6dd4e4ea1c3bbb3d302c8db51e4ccf

I've run the previous commit (
https://github.com/qemu/qemu/commit/b321051cf48ccc2d3d832af111d688f2282f089b
) for the entire night without an issue so far.

I believe this might be a duplicate of
https://bugs.launchpad.net/qemu/+bug/1873032 , although I'm not sure.

Linux cc 5.6.10-arch1-1 #1 SMP PREEMPT Sat, 02 May 2020 19:11:54 + x86_64 
GNU/Linux
AMD Ryzen 7 2700X Eight-Core Processor

** Affects: qemu
 Importance: Undecided
 Status: New




Re: [RFC v2 5/9] vhost-vdpa: implement vhost-vdpa backend

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 11:00 AM Jason Wang  wrote:
>
>
> On 2020/5/9 12:32 AM, Cindy Lu wrote:
> > From: Tiwei Bie 
> >
> > Currently we have 2 types of vhost backends in QEMU: vhost kernel and
> > vhost-user. The above patch provides a generic device for vDPA purpose,
> > this vDPA device exposes to user space a non-vendor-specific configuration
> > interface for setting up a vhost HW accelerator, this patch set introduces
> > a third vhost backend called vhost-vdpa based on the vDPA interface.
> >
> > Vhost-vdpa usage:
> >
> >qemu-system-x86_64 -cpu host -enable-kvm \
> >  ..
> >-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-id,id=vhost-vdpa0 \
> >-device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on \
> >
> > Co-Authored-By: Lingshan zhu 
> > Signed-off-by: Cindy Lu 
>
>
> Signed-off-by: Jason Wang 
>
>
> > ---
> >   hw/net/vhost_net.c|  39 ++-
> >   hw/virtio/Makefile.objs   |   1 +
> >   hw/virtio/vhost-backend.c |   5 +
> >   hw/virtio/vhost-vdpa.c| 447 ++
> >   hw/virtio/vhost.c |  14 +
> >   include/hw/virtio/vhost-backend.h |   8 +-
> >   include/hw/virtio/vhost-vdpa.h|  25 ++
> >   include/hw/virtio/vhost.h |   1 +
> >   8 files changed, 538 insertions(+), 2 deletions(-)
> >   create mode 100644 hw/virtio/vhost-vdpa.c
> >   create mode 100644 include/hw/virtio/vhost-vdpa.h
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 63b2a85d6e..1af39abaf3 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -17,8 +17,10 @@
> >   #include "net/net.h"
> >   #include "net/tap.h"
> >   #include "net/vhost-user.h"
> > +#include "net/vhost-vdpa.h"
> >
> >   #include "standard-headers/linux/vhost_types.h"
> > +#include "linux-headers/linux/vhost.h"
> >   #include "hw/virtio/virtio-net.h"
> >   #include "net/vhost_net.h"
> >   #include "qemu/error-report.h"
> > @@ -85,6 +87,29 @@ static const int user_feature_bits[] = {
> >   VHOST_INVALID_FEATURE_BIT
> >   };
> >
> > +static const int vdpa_feature_bits[] = {
> > +VIRTIO_F_NOTIFY_ON_EMPTY,
> > +VIRTIO_RING_F_INDIRECT_DESC,
> > +VIRTIO_RING_F_EVENT_IDX,
> > +VIRTIO_F_ANY_LAYOUT,
> > +VIRTIO_F_VERSION_1,
> > +VIRTIO_NET_F_CSUM,
> > +VIRTIO_NET_F_GUEST_CSUM,
> > +VIRTIO_NET_F_GSO,
> > +VIRTIO_NET_F_GUEST_TSO4,
> > +VIRTIO_NET_F_GUEST_TSO6,
> > +VIRTIO_NET_F_GUEST_ECN,
> > +VIRTIO_NET_F_GUEST_UFO,
> > +VIRTIO_NET_F_HOST_TSO4,
> > +VIRTIO_NET_F_HOST_TSO6,
> > +VIRTIO_NET_F_HOST_ECN,
> > +VIRTIO_NET_F_HOST_UFO,
> > +VIRTIO_NET_F_MRG_RXBUF,
> > +VIRTIO_NET_F_MTU,
> > +VIRTIO_F_IOMMU_PLATFORM,
> > +VIRTIO_NET_F_GUEST_ANNOUNCE,
> > +VHOST_INVALID_FEATURE_BIT
> > +};
>
>
> The framework should be ready for packed ring, let's add that.
>
Will fix this
>
> >   static const int *vhost_net_get_feature_bits(struct vhost_net *net)
> >   {
> >   const int *feature_bits = 0;
> > @@ -96,6 +121,9 @@ static const int *vhost_net_get_feature_bits(struct 
> > vhost_net *net)
> >   case NET_CLIENT_DRIVER_VHOST_USER:
> >   feature_bits = user_feature_bits;
> >   break;
> > +case NET_CLIENT_DRIVER_VHOST_VDPA:
> > +feature_bits = vdpa_feature_bits;
> > +break;
> >   default:
> >   error_report("Feature bits not defined for this type: %d",
> >   net->nc->info->type);
> > @@ -110,7 +138,10 @@ uint64_t vhost_net_get_features(struct vhost_net *net, 
> > uint64_t features)
> > >   return vhost_get_features(&net->dev, vhost_net_get_feature_bits(net),
> >   features);
> >   }
> > -
> > +int vhost_net_get_device_id(struct vhost_net *net, uint32_t * device_id)
> > +{
> > > +return vhost_dev_get_device_id(&net->dev, device_id);
> > +}
> >   void vhost_net_ack_features(struct vhost_net *net, uint64_t features)
> >   {
> >   net->dev.acked_features = net->dev.backend_features;
> > @@ -433,6 +464,12 @@ VHostNetState *get_vhost_net(NetClientState *nc)
> >   vhost_net = vhost_user_get_vhost_net(nc);
> >   assert(vhost_net);
> >   break;
> > +#endif
> > +#ifdef CONFIG_VHOST_NET_VDPA
> > +case NET_CLIENT_DRIVER_VHOST_VDPA:
> > +vhost_net = vhost_vdpa_get_vhost_net(nc);
> > +assert(vhost_net);
> > +break;
> >   #endif
> >   default:
> >   break;
> > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > index e2f70fbb89..e7c5d4a862 100644
> > --- a/hw/virtio/Makefile.objs
> > +++ b/hw/virtio/Makefile.objs
> > @@ -5,6 +5,7 @@ obj-y += virtio.o
> >   obj-$(call lor,$(CONFIG_VHOST_USER),$(CONFIG_VHOST_KERNEL)) += vhost.o 
> > vhost-backend.o
> >   common-obj-$(call lnot,$(call 
> > lor,$(CONFIG_VHOST_USER),$(CONFIG_VHOST_KERNEL))) += vhost-stub.o
> >   obj-$(CONFIG_VHOST_USER) += vhost-user.o
> > +obj-$(CONFIG_VHOST_VDPA) += vhost-vdpa.o
> >
> >   common-obj-$(CONFIG_VIRTIO_RNG) += virtio-rng.o
> >   

Re: [PATCH v1 0/8] s390: Extended-Length SCCB & DIAGNOSE 0x318

2020-05-09 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20200508230823.22956-1-wall...@linux.ibm.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Message-id: 20200508230823.22956-1-wall...@linux.ibm.com
Subject: [PATCH v1 0/8] s390: Extended-Length SCCB & DIAGNOSE 0x318
Type: series

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
101f5c5 s390: diagnose 318 info reset and migration support
07dbec2 s390/kvm: header sync for diag318
9985457 s390/sclp: add extended-length sccb support for kvm guest
8d13a8a s390/sclp: use cpu offset to locate cpu entries
4d17516 s390/sclp: read sccb from mem based on sccb length
4a1b469 s390/sclp: rework sclp boundary and length checks
699f978 s390/sclp: check sccb len before filling in data
e4476cc s390/sclp: remove SCLPDevice param from prepare_cpu_entries

=== OUTPUT BEGIN ===
1/8 Checking commit e4476ccd2e2b (s390/sclp: remove SCLPDevice param from 
prepare_cpu_entries)
2/8 Checking commit 699f978f26e8 (s390/sclp: check sccb len before filling in 
data)
WARNING: line over 80 characters
#22: FILE: hw/s390x/sclp.c:79:
+if (be16_to_cpu(sccb->h.length) < (sizeof(ReadInfo) + cpu_count * 
sizeof(CPUEntry))) {

ERROR: line over 90 characters
#47: FILE: hw/s390x/sclp.c:137:
+if (be16_to_cpu(sccb->h.length) < (sizeof(ReadCpuInfo) + cpu_count * 
sizeof(CPUEntry))) {

WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#67: 
new file mode 100644

total: 1 errors, 2 warnings, 45 lines checked

Patch 2/8 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

3/8 Checking commit 4a1b469bbbcb (s390/sclp: rework sclp boundary and length 
checks)
4/8 Checking commit 4d17516859c5 (s390/sclp: read sccb from mem based on sccb 
length)
5/8 Checking commit 8d13a8a15966 (s390/sclp: use cpu offset to locate cpu 
entries)
6/8 Checking commit 9985457a4f23 (s390/sclp: add extended-length sccb support 
for kvm guest)
WARNING: line over 80 characters
#87: FILE: hw/s390x/sclp.c:128:
+warn_report("insufficient sccb size to store full read scp info 
response");

WARNING: line over 80 characters
#111: FILE: target/s390x/cpu_features_def.inc.h:100:
+DEF_FEAT(EXTENDED_LENGTH_SCCB, "els", STFL, 140, "Extended-length SCCB 
facility")

total: 0 errors, 2 warnings, 74 lines checked

Patch 6/8 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
7/8 Checking commit 07dbec216007 (s390/kvm: header sync for diag318)
8/8 Checking commit 101f5c56e483 (s390: diagnose 318 info reset and migration 
support)
WARNING: line over 80 characters
#219: FILE: target/s390x/cpu_features_def.inc.h:126:
+DEF_FEAT(DIAG318, "diag318", SCLP_BYTE_134, 0, "Control program name and 
version codes")

total: 0 errors, 1 warnings, 255 lines checked

Patch 8/8 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20200508230823.22956-1-wall...@linux.ibm.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [PATCH V3 10/14] KVM: MIPS: Add Loongson-3 Virtual IPI interrupt support

2020-05-09 Thread Huacai Chen
Hi, Aleksandar,

On Fri, May 8, 2020 at 7:23 PM Aleksandar Markovic
 wrote:
>
> On Sun, 3 May 2020 at 12:14, Huacai Chen wrote:
> >
> > This patch adds Loongson-3 Virtual IPI interrupt support in the kernel,
> > because emulating it in QEMU is too expensive for performance.
> >
>
> Huacai, hi!
>
> While, in principle, I support this patch, could you please, just for
> the sake of everybody being on the same page, describe the
> hypothetical alternative implementation in QEMU, and explain why it
> would be more expensive than the solution proposed in this patch.
>
> No need to go into details, I would like to see just the general idea
> you had in your mind.
The current implementation of IPI emulation in QEMU is based on the GIC for
MIPS, but Loongson-3 doesn't use a GIC. Furthermore, IPI emulation in QEMU
is too expensive performance-wise (because of too many context switches
between host and guest). With the current solution, the IPI delay may even
cause RCU stall warnings in a multi-core guest. So we designed a faster
solution that emulates the IPI interrupt in the kernel (currently only used
by Loongson-3).
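Just to illustrate the register model (this is my reading of the ipi_state
fields in the patch below, not code from the patch — the helper names are
made up):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Tiny model of the set/clear register pair in the ipi_state struct
 * from the patch below.  The semantics here are an assumption (a write
 * to SET raises status bits, a write to CLEAR drops them, EN masks
 * delivery), shown only to illustrate why this is cheap to handle in
 * the kernel: each IPI is just a couple of register-sized MMIO writes.
 */
struct ipi_state_model {
    uint32_t status;    /* pending IPI bits */
    uint32_t en;        /* enabled IPI bits */
};

static void ipi_write_set(struct ipi_state_model *s, uint32_t val)
{
    s->status |= val;
}

static void ipi_write_clear(struct ipi_state_model *s, uint32_t val)
{
    s->status &= ~val;
}

/* An IPI is delivered only when a pending bit is also enabled. */
static int ipi_pending(const struct ipi_state_model *s)
{
    return (s->status & s->en) != 0;
}
```

Emulating each of those MMIO writes in QEMU means a VM exit out to
userspace per access, while an in-kernel device handles them without
leaving KVM.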


Huacai


>
> Yours,
> Aleksandar
>
>
> > Signed-off-by: Huacai Chen 
> > Co-developed-by: Jiaxun Yang 
> > ---
> >  arch/mips/include/asm/kvm_host.h |  32 ++
> >  arch/mips/kvm/Makefile   |   3 +
> >  arch/mips/kvm/emulate.c  |  23 -
> >  arch/mips/kvm/loongson_ipi.c | 214 
> > +++
> >  arch/mips/kvm/mips.c |   6 ++
> >  5 files changed, 277 insertions(+), 1 deletion(-)
> >  create mode 100644 arch/mips/kvm/loongson_ipi.c
> >
> > diff --git a/arch/mips/include/asm/kvm_host.h 
> > b/arch/mips/include/asm/kvm_host.h
> > index a7758c0..f165902 100644
> > --- a/arch/mips/include/asm/kvm_host.h
> > +++ b/arch/mips/include/asm/kvm_host.h
> > @@ -23,6 +23,8 @@
> >  #include 
> >  #include 
> >
> > +#include 
> > +
> >  /* MIPS KVM register ids */
> >  #define MIPS_CP0_32(_R, _S)\
> > (KVM_REG_MIPS_CP0 | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
> > @@ -181,11 +183,39 @@ struct kvm_vcpu_stat {
> >  struct kvm_arch_memory_slot {
> >  };
> >
> > +#ifdef CONFIG_CPU_LOONGSON64
> > +struct ipi_state {
> > +   uint32_t status;
> > +   uint32_t en;
> > +   uint32_t set;
> > +   uint32_t clear;
> > +   uint64_t buf[4];
> > +};
> > +
> > +struct loongson_kvm_ipi;
> > +
> > +struct ipi_io_device {
> > +   int node_id;
> > +   struct loongson_kvm_ipi *ipi;
> > +   struct kvm_io_device device;
> > +};
> > +
> > +struct loongson_kvm_ipi {
> > +   spinlock_t lock;
> > +   struct kvm *kvm;
> > +   struct ipi_state ipistate[16];
> > +   struct ipi_io_device dev_ipi[4];
> > +};
> > +#endif
> > +
> >  struct kvm_arch {
> > /* Guest physical mm */
> > struct mm_struct gpa_mm;
> > /* Mask of CPUs needing GPA ASID flush */
> > cpumask_t asid_flush_mask;
> > +#ifdef CONFIG_CPU_LOONGSON64
> > +   struct loongson_kvm_ipi ipi;
> > +#endif
> >  };
> >
> >  #define N_MIPS_COPROC_REGS 32
> > @@ -1133,6 +1163,8 @@ extern int kvm_mips_trans_mtc0(union mips_instruction 
> > inst, u32 *opc,
> >  /* Misc */
> >  extern void kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
> >  extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
> > +extern int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
> > +struct kvm_mips_interrupt *irq);
> >
> >  static inline void kvm_arch_hardware_unsetup(void) {}
> >  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> > diff --git a/arch/mips/kvm/Makefile b/arch/mips/kvm/Makefile
> > index 0a3cef6..506c4ac 100644
> > --- a/arch/mips/kvm/Makefile
> > +++ b/arch/mips/kvm/Makefile
> > @@ -13,6 +13,9 @@ kvm-objs := $(common-objs-y) mips.o emulate.o entry.o \
> > fpu.o
> >  kvm-objs += hypcall.o
> >  kvm-objs += mmu.o
> > +ifdef CONFIG_CPU_LOONGSON64
> > +kvm-objs += loongson_ipi.o
> > +endif
> >
> >  ifdef CONFIG_KVM_MIPS_VZ
> >  kvm-objs   += vz.o
> > diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
> > index 754094b..3946499 100644
> > --- a/arch/mips/kvm/emulate.c
> > +++ b/arch/mips/kvm/emulate.c
> > @@ -1600,6 +1600,7 @@ enum emulation_result kvm_mips_emulate_store(union 
> > mips_instruction inst,
> >  struct kvm_run *run,
> >  struct kvm_vcpu *vcpu)
> >  {
> > +   int r;
> > enum emulation_result er;
> > u32 rt;
> > void *data = run->mmio.data;
> > @@ -1666,9 +1667,18 @@ enum emulation_result kvm_mips_emulate_store(union 
> > mips_instruction inst,
> > goto out_fail;
> > }
> >
> > -   run->mmio.is_write = 1;
> > vcpu->mmio_needed = 1;
> > +   run->mmio.is_write = 1;
> > vcpu->mmio_is_write = 1;
> > +
> > +   r = kvm_io_bus_write(vcpu, KVM_MMIO_BUS,
> > +   

[Bug 1877706] Re: [Feature request] qemu does not support for Octeon MIPS64 on X86

2020-05-09 Thread Aleksandar Markovic
Probably not, but there may be a workaround. The closest CPU to Octeon
that is supported in QEMU is "MIPS64R2-generic".

Please try using the switch -cpu MIPS64R2-generic in your QEMU command line.

Also, I think you should use qemu-mipsn32 rather than qemu-mips or qemu-
mips64.

I don't have much hope that all this will work, but it is worth trying.
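The reason the other two binaries fail is visible in the ELF header: an
N32 object is ELFCLASS32 with the EF_MIPS_ABI2 flag set, so it matches
neither the o32 nor the n64 loader. A hypothetical sketch of that
dispatch (constants are from the generic and MIPS ELF ABIs; the helper
itself is made up for illustration and is not QEMU code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* ELF constants from the generic and MIPS-specific ELF ABIs. */
#define ELFCLASS32    1
#define ELFCLASS64    2
#define EF_MIPS_ABI2  0x00000020   /* marks an N32 binary */

/*
 * Hypothetical helper: pick the user-mode emulator from the ELF
 * class and flags of the target binary.
 */
static const char *pick_user_mode(uint8_t ei_class, uint32_t e_flags)
{
    if (ei_class == ELFCLASS64) {
        return "qemu-mips64";          /* n64 binaries */
    }
    if (ei_class == ELFCLASS32 && (e_flags & EF_MIPS_ABI2)) {
        return "qemu-mipsn32";         /* 32-bit class, 64-bit ISA */
    }
    return "qemu-mips";                /* plain o32 */
}
```

This also matches the symptoms in the report below: qemu-mips64 rejects
the file outright (wrong ELF class), while qemu-mips loads it but then
faults on MIPS64 instructions.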

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877706

Title:
   [Feature request] qemu does not support for Octeon MIPS64 on X86

Status in QEMU:
  New

Bug description:
  Description of problem:

  I use mips64-octeon-linux-gnu-gcc cross toolchain on X86,and generate
  binary file.

  > mips64-octeon-linux-gnu-gcc hello.c -static
  > file a.out
  > a.out: ELF 32-bit MSB executable, MIPS, N32 MIPS64 rel2 version 1 (SYSV), 
statically linked, for GNU/Linux 2.4.0, not stripped

  When I execute it in mips64-linux-user mode in QEMU, it is rejected as invalid.

  > ./qemu-5.0.0/mips64-linux-user/qemu-mips64 a.out
  > a.out: Invalid ELF image for this architecture

  When I choose mips-linux-user mode, it is treated as an illegal instruction.

  > ./qemu-5.0.0/mips-linux-user/qemu-mips a.out
  > qemu: uncaught target signal 4 (Illegal instruction) - core dumped
  > Illegal instruction (core dumped)

  I would like to know: is this a problem on my side, or does QEMU not
  support Octeon MIPS64 on x86?

  If QEMU supports Octeon MIPS64 on x86, how can I emulate it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1877706/+subscriptions



Re: [RFC v2 4/9] vhost-vdpa: introduce vhost-vdpa net client

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 10:40 AM Jason Wang  wrote:
>
>
> > On 2020/5/9 12:32 AM, Cindy Lu wrote:
> > From: Tiwei Bie 
>
>
> If you think you've done a huge refactor on the code, you can change the
> author but need to keep the sob of Tiwei.
>
>
> >
> > This patch set introduces a new net client type: vhost-vdpa.
> > vhost-vdpa net client will set up a vDPA device which is specified
> > by a "vhostdev" parameter.
> >
> > Co-authored-by: Lingshan Zhu 
> > Signed-off-by: Cindy Lu 
> > ---
> >   configure|  21 
> >   include/net/vhost-vdpa.h |  19 
> >   include/net/vhost_net.h  |   1 +
>
>
> Patch 5, which is the infrastructure of vhost-vdpa, should come first.
> Please re-order the patches in the next version.
>
Sure, Will fix this
>
> >   net/Makefile.objs|   2 +-
> >   net/clients.h|   2 +
> >   net/net.c|   3 +
> >   net/vhost-vdpa.c | 227 +++
> >   qapi/net.json|  22 +++-
> >   qemu-options.hx  |  19 
> >   9 files changed, 313 insertions(+), 3 deletions(-)
> >   create mode 100644 include/net/vhost-vdpa.h
> >   create mode 100644 net/vhost-vdpa.c
> >
> > diff --git a/configure b/configure
> > index 6099be1d84..bdd732e3bb 100755
> > --- a/configure
> > +++ b/configure
> > @@ -1505,6 +1505,10 @@ for opt do
> > ;;
> > --enable-vhost-user) vhost_user="yes"
> > ;;
> > +  --disable-vhost-vdpa) vhost_vdpa="no"
> > +  ;;
> > +  --enable-vhost-vdpa) vhost_vdpa="yes"
> > +  ;;
> > --disable-vhost-kernel) vhost_kernel="no"
> > ;;
> > --enable-vhost-kernel) vhost_kernel="yes"
> > @@ -1780,6 +1784,7 @@ disabled with --disable-FEATURE, default is enabled 
> > if available:
> > vhost-cryptovhost-user-crypto backend support
> > vhost-kernelvhost kernel backend support
> > vhost-user  vhost-user backend support
> > +  vhost-vdpa  vhost-vdpa backend support
>
>
> Maybe "vhost-vdpa kernel backend support" is better.
>
>
Will fix this
> > spice   spice
> > rbd rados block device (rbd)
> > libiscsiiscsi support
> > @@ -2241,6 +2246,10 @@ test "$vhost_user" = "" && vhost_user=yes
> >   if test "$vhost_user" = "yes" && test "$mingw32" = "yes"; then
> > error_exit "vhost-user isn't available on win32"
> >   fi
> > +test "$vhost_vdpa" = "" && vhost_vdpa=yes
> > +if test "$vhost_vdpa" = "yes" && test "$mingw32" = "yes"; then
> > +  error_exit "vhost-vdpa isn't available on win32"
> > +fi
>
>
> Let's add a check for Linux like vhost kernel below.
>
Will fix this
>
> >   test "$vhost_kernel" = "" && vhost_kernel=$linux
> >   if test "$vhost_kernel" = "yes" && test "$linux" != "yes"; then
> > error_exit "vhost-kernel is only available on Linux"
> > @@ -2269,6 +2278,11 @@ test "$vhost_user_fs" = "" && 
> > vhost_user_fs=$vhost_user
> >   if test "$vhost_user_fs" = "yes" && test "$vhost_user" = "no"; then
> > error_exit "--enable-vhost-user-fs requires --enable-vhost-user"
> >   fi
> > +#vhost-vdpa backends
> > +test "$vhost_net_vdpa" = "" && vhost_net_vdpa=$vhost_vdpa
> > +if test "$vhost_net_vdpa" = "yes" && test "$vhost_vdpa" = "no"; then
> > +  error_exit "--enable-vhost-net-vdpa requires --enable-vhost-vdpa"
> > +fi
> >
> >   # OR the vhost-kernel and vhost-user values for simplicity
> >   if test "$vhost_net" = ""; then
> > @@ -6543,6 +6557,7 @@ echo "vhost-scsi support $vhost_scsi"
> >   echo "vhost-vsock support $vhost_vsock"
> >   echo "vhost-user support $vhost_user"
> >   echo "vhost-user-fs support $vhost_user_fs"
> > +echo "vhost-vdpa support $vhost_vdpa"
> >   echo "Trace backends$trace_backends"
> >   if have_backend "simple"; then
> >   echo "Trace output file $trace_file-"
> > @@ -7031,6 +7046,9 @@ fi
> >   if test "$vhost_net_user" = "yes" ; then
> > echo "CONFIG_VHOST_NET_USER=y" >> $config_host_mak
> >   fi
> > +if test "$vhost_net_vdpa" = "yes" ; then
> > +  echo "CONFIG_VHOST_NET_VDPA=y" >> $config_host_mak
> > +fi
> >   if test "$vhost_crypto" = "yes" ; then
> > echo "CONFIG_VHOST_CRYPTO=y" >> $config_host_mak
> >   fi
> > @@ -7043,6 +7061,9 @@ fi
> >   if test "$vhost_user" = "yes" ; then
> > echo "CONFIG_VHOST_USER=y" >> $config_host_mak
> >   fi
> > +if test "$vhost_vdpa" = "yes" ; then
> > +  echo "CONFIG_VHOST_VDPA=y" >> $config_host_mak
> > +fi
> >   if test "$vhost_user_fs" = "yes" ; then
> > echo "CONFIG_VHOST_USER_FS=y" >> $config_host_mak
> >   fi
> > diff --git a/include/net/vhost-vdpa.h b/include/net/vhost-vdpa.h
> > new file mode 100644
> > index 00..6ce0d04f72
> > --- /dev/null
> > +++ b/include/net/vhost-vdpa.h
> > @@ -0,0 +1,19 @@
> > +/*
> > + * vhost-vdpa.h
> > + *
> > + * Copyright(c) 2017-2018 Intel Corporation.
> > + * Copyright(c) 2020 Red Hat, Inc.
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or 
> > later.
> > + * See the COPYING file in the top-level directory.
> > + *
> > + */
> > +
> > +#ifndef 

Re: [RFC v2 4/9] vhost-vdpa: introduce vhost-vdpa net client

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 12:42 AM Eric Blake  wrote:
>
> On 5/8/20 11:32 AM, Cindy Lu wrote:
> > From: Tiwei Bie 
> >
> > This patch set introduces a new net client type: vhost-vdpa.
> > vhost-vdpa net client will set up a vDPA device which is specified
> > by a "vhostdev" parameter.
> >
> > Co-authored-by: Lingshan Zhu 
> > Signed-off-by: Cindy Lu 
> > ---
>
> Just looking at UI:
>
>
> > +++ b/qapi/net.json
> > @@ -441,6 +441,23 @@
> >   '*queues':'int' } }
> >
> >   ##
> > +# @NetdevVhostVDPAOptions:
> > +#
> > +# Vhost-vdpa network backend
> > +#
> > +# @vhostdev: name of a vdpa dev path in sysfs
> > +#
> > +# @queues: number of queues to be created for multiqueue vhost-vdpa
> > +#  (default: 1) (Since 5.1)
>
> No need to mark a 'since' tag on a member introduced at the same time as
> the overall struct itself.
>
Will fix this
> > +#
> > +# Since: 5.1
> > +##
> > +{ 'struct': 'NetdevVhostVDPAOptions',
> > +  'data': {
> > +'*vhostdev': 'str',
>
> What does this default to if omitted?
>
> > +'*fd':   'str',
>
> Not documented above.
>
> > +'*queues':   'int' } }
> > +##
>
> Missing blank line separator.
>
Thanks Eric, will fix all of these
> >   # @NetClientDriver:
> >   #
> >   # Available netdev drivers.
> > @@ -451,7 +468,7 @@
> >   ##
> >   { 'enum': 'NetClientDriver',
> > 'data': [ 'none', 'nic', 'user', 'tap', 'l2tpv3', 'socket', 'vde',
> > -'bridge', 'hubport', 'netmap', 'vhost-user' ] }
> > +'bridge', 'hubport', 'netmap', 'vhost-user', 'vhost-vdpa' ] }
>
> Missing a line above that 'vhost-vdpa' is new to 5.1.
>
>
Will fix this
> > @@ -2749,6 +2756,18 @@ qemu -m 512 -object 
> > memory-backend-file,id=mem,size=512M,mem-path=/hugetlbfs,sha
> >-device virtio-net-pci,netdev=net0
> >   @end example
> >
> > +@item -netdev vhost-vdpa,vhostdev=/path/to/dev
> > +Establish a vhost-vdpa netdev, backed by a vhostdev. The chardev should
> > +be a unix domain socket backed one. The vhost-vdpa uses a specifically 
> > defined
> > +protocol to pass vhost ioctl replacement messages to an application on the 
> > other
> > +end of the socket.
> > +Example:
> > +@example
> > +qemu -m 512 -object 
> > memory-backend-file,id=mem,size=512M,mem-path=/hugetlbfs,share=on \
> > + -numa node,memdev=mem \
> > + -netdev type=vhost-vdpa,id=net0,vhostdev=/path/to/dev \
> > + -device virtio-net-pci,netdev=net0
> > +@end example
> >   @item -netdev hubport,id=@var{id},hubid=@var{hubid}[,netdev=@var{nd}]
> >
> >   Create a hub port on the emulated hub with ID @var{hubid}.
> >
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org
>




Re: [RFC v2 3/9] virtio_net: introduce vhost_set_state

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 10:25 AM Jason Wang  wrote:
>
>
> > On 2020/5/9 12:32 AM, Cindy Lu wrote:
> > Introduce a function to set the state to the vhost driver.
> > vDPA needs to sync the driver's state to the NIC.
>
>
> Let's split this patch into two.
>
> 1) introduce vhost_set_state
> 2) make virtio-net use of vhost_set_state
>
Sure, Will fix this
> >
> > Signed-off-by: Cindy Lu 
> > ---
> >   hw/net/vhost_net.c| 9 +
> >   hw/net/virtio-net.c   | 9 +
> >   include/hw/virtio/vhost-backend.h | 2 ++
> >   include/net/vhost_net.h   | 2 +-
> >   4 files changed, 21 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index d1d421e3d9..63b2a85d6e 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -465,3 +465,12 @@ int vhost_net_set_mtu(struct vhost_net *net, uint16_t 
> > mtu)
> >
> >   return vhost_ops->vhost_net_set_mtu(>dev, mtu);
> >   }
> > +int vhost_set_state(NetClientState *nc, uint8_t state)
> > +{
> > +struct vhost_net *net = get_vhost_net(nc);
> > +struct vhost_dev *hdev = >dev;
> > +if (hdev->vhost_ops->vhost_set_state) {
>
>
> Indentation looks wrong.
>
Will fix this
>
> > +return hdev->vhost_ops->vhost_set_state(hdev, state);
> > +}
> > +return 0;
> > +}
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index db3d7c38e6..1bddb4b4af 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -206,6 +206,9 @@ static void virtio_net_vhost_status(VirtIONet *n, 
> > uint8_t status)
> >   VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >   NetClientState *nc = qemu_get_queue(n->nic);
> >   int queues = n->multiqueue ? n->max_queues : 1;
> > +NetClientState *peer = qemu_get_peer(nc, 0);
> > +uint8_t status_set  = vdev->status ;
> > +uint8_t vhost_started_pre = n->vhost_started;
> >
> >   if (!get_vhost_net(nc->peer)) {
> >   return;
> > @@ -245,6 +248,7 @@ static void virtio_net_vhost_status(VirtIONet *n, 
> > uint8_t status)
> >   return;
> >   }
> >   }
> > +status_set = status_set | VIRTIO_CONFIG_S_DRIVER_OK;
> >
> >   n->vhost_started = 1;
> >   r = vhost_net_start(vdev, n->nic->ncs, queues);
> > @@ -252,11 +256,16 @@ static void virtio_net_vhost_status(VirtIONet *n, 
> > uint8_t status)
> >   error_report("unable to start vhost net: %d: "
> >"falling back on userspace virtio", -r);
> >   n->vhost_started = 0;
> > +status_set = status_set & ~VIRTIO_CONFIG_S_DRIVER_OK;
> >   }
> >   } else {
> >   vhost_net_stop(vdev, n->nic->ncs, queues);
> > +status_set = status_set & ~VIRTIO_CONFIG_S_DRIVER_OK;
> >   n->vhost_started = 0;
> >   }
> > +if (vhost_started_pre != n->vhost_started) {
> > +vhost_set_state(peer, status_set);
>
>
> Any reason why not just passing virtio device status to vhost-vdpa?
>
I will double check and fix this
>
> > +}
> >   }
> >
> >   static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
> > diff --git a/include/hw/virtio/vhost-backend.h 
> > b/include/hw/virtio/vhost-backend.h
> > index 6f6670783f..f823055167 100644
> > --- a/include/hw/virtio/vhost-backend.h
> > +++ b/include/hw/virtio/vhost-backend.h
> > @@ -112,6 +112,7 @@ typedef int (*vhost_get_inflight_fd_op)(struct 
> > vhost_dev *dev,
> >   typedef int (*vhost_set_inflight_fd_op)(struct vhost_dev *dev,
> >   struct vhost_inflight *inflight);
> >
> > +typedef int (*vhost_set_state_op)(struct vhost_dev *dev, uint8_t state);
>
>
> Need document what's the meaning of state here, is it e.g virtio device
> status? If yes, is it better to rename it to vhost_set_status()?
>
> Thanks
>
sure will fix this
>
> >   typedef struct VhostOps {
> >   VhostBackendType backend_type;
> >   vhost_backend_init vhost_backend_init;
> > @@ -152,6 +153,7 @@ typedef struct VhostOps {
> >   vhost_backend_mem_section_filter_op vhost_backend_mem_section_filter;
> >   vhost_get_inflight_fd_op vhost_get_inflight_fd;
> >   vhost_set_inflight_fd_op vhost_set_inflight_fd;
> > +vhost_set_state_op vhost_set_state;
> >   } VhostOps;
> >
> >   extern const VhostOps user_ops;
> > diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> > index 77e47398c4..6548a5a105 100644
> > --- a/include/net/vhost_net.h
> > +++ b/include/net/vhost_net.h
> > @@ -39,5 +39,5 @@ int vhost_set_vring_enable(NetClientState * nc, int 
> > enable);
> >   uint64_t vhost_net_get_acked_features(VHostNetState *net);
> >
> >   int vhost_net_set_mtu(struct vhost_net *net, uint16_t mtu);
> > -
> > +int vhost_set_state(NetClientState *nc, uint8_t state);
> >   #endif
>




Re: [RFC v2 6/9] virtio-bus: introduce queue_enabled method

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 11:02 AM Jason Wang  wrote:
>
>
> > On 2020/5/9 12:32 AM, Cindy Lu wrote:
> > From: Jason Wang 
> >
> > This patch introduces queue_enabled() method which allows the
> > transport to implement its own way to report whether or not a queue is
> > enabled.
> >
> > Signed-off-by: Jason Wang 
>
>
> This patch should come before any of the vhost-vdpa patches.
>
> Thanks
>
Sure, Will fix this
>
> > ---
> >   hw/virtio/virtio.c | 6 ++
> >   include/hw/virtio/virtio-bus.h | 4 
> >   2 files changed, 10 insertions(+)
> >
> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index 04716b5f6c..09732a8836 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -3169,6 +3169,12 @@ hwaddr virtio_queue_get_desc_addr(VirtIODevice 
> > *vdev, int n)
> >
> >   bool virtio_queue_enabled(VirtIODevice *vdev, int n)
> >   {
> > +BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
> > +VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > +
> > +if (k->queue_enabled)
> > +return k->queue_enabled(qbus->parent, n);
> > +
> >   return virtio_queue_get_desc_addr(vdev, n) != 0;
> >   }
> >
> > diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
> > index 38c9399cd4..0f6f215925 100644
> > --- a/include/hw/virtio/virtio-bus.h
> > +++ b/include/hw/virtio/virtio-bus.h
> > @@ -83,6 +83,10 @@ typedef struct VirtioBusClass {
> >*/
> >   int (*ioeventfd_assign)(DeviceState *d, EventNotifier *notifier,
> >   int n, bool assign);
> > +/*
> > + * Whether queue number n is enabled.
> > + */
> > +bool (*queue_enabled)(DeviceState *d, int n);
> >   /*
> >* Does the transport have variable vring alignment?
> >* (ie can it ever call virtio_queue_set_align()?)
>




Re: [RFC v2 8/9] vhost_net: set vq ready during start if necessary

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 11:04 AM Jason Wang  wrote:
>
>
> > On 2020/5/9 12:32 AM, Cindy Lu wrote:
> > From: Jason Wang 
> >
> > Signed-off-by: Jason Wang 
> > ---
> >   hw/net/vhost_net.c | 4 
> >   1 file changed, 4 insertions(+)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 1af39abaf3..eff9ec9177 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -383,6 +383,10 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >   goto err_start;
> >   }
> >   }
> > +
> > +if (virtio_queue_enabled(dev, i)) {
> > +vhost_set_vring_ready(peer);
> > +}
> >   }
> >
> >   return 0;
>
>
> Please place the patch before vhost-vdpa.
>
> Thanks
>
Sure will fix this




Re: [RFC v2 2/9] net: use the function qemu_get_peer

2020-05-09 Thread Cindy Lu
On Sat, May 9, 2020 at 10:20 AM Jason Wang  wrote:
>
>
> > On 2020/5/9 12:32 AM, Cindy Lu wrote:
> > user the qemu_get_peer to replace the old process
>
>
> The title should be "vhost_net: use the function qemu_get_peer".
>
> Thanks
>
Sure, I will fix this
>
> >
> > Signed-off-by: Cindy Lu 
> > ---
> >   hw/net/vhost_net.c | 14 +-
> >   1 file changed, 9 insertions(+), 5 deletions(-)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 6b82803fa7..d1d421e3d9 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -306,7 +306,9 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >   BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> >   VirtioBusState *vbus = VIRTIO_BUS(qbus);
> >   VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> > +struct vhost_net *net;
> >   int r, e, i;
> > +NetClientState *peer;
> >
> >   if (!k->set_guest_notifiers) {
> >   error_report("binding does not support guest notifiers");
> > @@ -314,9 +316,9 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >   }
> >
> >   for (i = 0; i < total_queues; i++) {
> > -struct vhost_net *net;
> >
> > -net = get_vhost_net(ncs[i].peer);
> > +peer = qemu_get_peer(ncs, i);
> > +net = get_vhost_net(peer);
> >   vhost_net_set_vq_index(net, i * 2);
> >
> >   /* Suppress the masking guest notifiers on vhost user
> > @@ -335,7 +337,8 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >   }
> >
> >   for (i = 0; i < total_queues; i++) {
> > -r = vhost_net_start_one(get_vhost_net(ncs[i].peer), dev);
> > +peer = qemu_get_peer(ncs, i);
> > +r = vhost_net_start_one(get_vhost_net(peer), dev);
> >
> >   if (r < 0) {
> >   goto err_start;
> > @@ -343,7 +346,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >
> >   if (ncs[i].peer->vring_enable) {
> >   /* restore vring enable state */
> > -r = vhost_set_vring_enable(ncs[i].peer, 
> > ncs[i].peer->vring_enable);
> > +r = vhost_set_vring_enable(peer, peer->vring_enable);
> >
> >   if (r < 0) {
> >   goto err_start;
> > @@ -355,7 +358,8 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >
> >   err_start:
> >   while (--i >= 0) {
> > -vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
> > +peer = qemu_get_peer(ncs , i);
> > +vhost_net_stop_one(get_vhost_net(peer), dev);
> >   }
> >   e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> >   if (e < 0) {
>




[Bug 1868221] Re: /usr/share/applications/qemu.desktop should have an "Exec=" key.

2020-05-09 Thread Lockywolf
I am sorry I haven't dealt with this bug for quite a while. KDE 5 is not
properly working on my distro, and I wanted to test it when it
stabilises.

If qemu dislikes long-standing bugs, this bug can be closed, and I'll
open a new one when I have time to test it on the new KDE.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1868221

Title:
  /usr/share/applications/qemu.desktop should have an "Exec=" key.

Status in QEMU:
  New

Bug description:
  According to the www.freedesktop.org .desktop-file specification, all
  "Application" desktop files should have an "Exec=" key. The one in
  qemu doesn't.

  This can be easily verified by running kbuildsycoca4 if KDE4 is
  present, but the issue is not DE-dependent.

  Which binary exactly should be assigned as the default one, I don't
  know.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1868221/+subscriptions



Re: [PATCH v2] e1000e: Added ICR clearing by corresponding IMS bit.

2020-05-09 Thread Jason Wang



On 2020/5/9 2:13 AM, Andrew Melnichenko wrote:

Yo, I've used the OpenSDM_8257x-18.pdf specification.
This document was recommended by the Intel guys (they also referenced that
note).
I've made a fast fix and it works. Before that, I had a fix for the Linux
e1000e driver.
Overall, the issue was in pending interrupts that can't be cleared by
reading ICR in Linux (the Windows driver clears them by writing to ICR).


You can download spec for example from:
http://iweb.dl.sourceforge.net/project/e1000/8257x%20Developer%20Manual/Revision%201.8/OpenSDM_8257x-18.pdf



Interesting, this spec doesn't cover the 82574L, which is what e1000e
claims to emulate:


    c->vendor_id = PCI_VENDOR_ID_INTEL;
    c->device_id = E1000_DEV_ID_82574L;

Looking at the 82574L spec (using the link mentioned in e1000e_core.c), it
says (7.4.3):


In MSI-X mode the bits in this register can be configured to auto-clear
when the MSI-X interrupt message is sent, in order to minimize driver
overhead, and when using MSI-X interrupt signaling.

In systems that do not support MSI-X, reading the ICR register clears its
bits, or writing 1b's clears the corresponding bits in this register.

So the auto-clear is under the control of EIAC (MSI-X) or unconditional
(non-MSI-X).
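To make the two behaviours concrete, here is a minimal sketch of my
reading of section 7.4.3 (not the current QEMU code; the helper names
are made up):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the 82574L ICR semantics quoted above (section 7.4.3).
 * These helpers are hypothetical, for illustration only.
 */

/* Non-MSI-X mode: reading ICR returns the causes and clears them all. */
static uint32_t icr_read_intx(uint32_t *icr)
{
    uint32_t ret = *icr;

    *icr = 0;
    return ret;
}

/*
 * MSI-X mode: the bits enabled in EIAC auto-clear when the MSI-X
 * message is sent; a plain read does not clear anything by itself.
 */
static void icr_on_msix_message(uint32_t *icr, uint32_t eiac)
{
    *icr &= ~eiac;
}
```

Whether the read path or the message-send path clears the bits is
exactly the difference between the two documents being discussed here.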


But what has been implemented in e1000e_mac_icr_read() is something
similar to the behavior of a non-82574L card.


So I think we should implement the 82574L behavior?

Thanks




On Fri, May 8, 2020 at 5:21 AM Jason Wang  wrote:



On 2020/5/7 5:26 AM, and...@daynix.com wrote:
> From: Andrew Melnychenko <and...@daynix.com>
>
> Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1707441
> Added ICR clearing if there is IMS bit - according to the note by
> section 13.3.27 of the 8257X developers manual.
>
> Signed-off-by: Andrew Melnychenko <and...@daynix.com>
> ---
>   hw/net/e1000e_core.c | 9 +
>   hw/net/trace-events  | 1 +
>   2 files changed, 10 insertions(+)
>
> diff --git a/hw/net/e1000e_core.c b/hw/net/e1000e_core.c
> index d5676871fa..302e99ff46 100644
> --- a/hw/net/e1000e_core.c
> +++ b/hw/net/e1000e_core.c
> @@ -2624,6 +2624,15 @@ e1000e_mac_icr_read(E1000ECore *core, int
index)
>           e1000e_clear_ims_bits(core, core->mac[IAM]);
>       }
>
> +    /*
> +     * PCIe* GbE Controllers Open Source Software Developer's
Manual
> +     * 13.3.27 Interrupt Cause Read Register
> +     */


Hi Andrew:

Which version of the manual did you use? I tried to use the one mentioned
in e1000e.c, which is

http://www.intel.com/content/dam/doc/datasheet/82574l-gbe-controller-datasheet.pdf.

But I couldn't find chapter 13.3.27.

Thanks


> +    if (core->mac[ICR] & core->mac[IMS]) {
> + trace_e1000e_irq_icr_clear_icr_bit_ims(core->mac[ICR],
core->mac[IMS]);
> +        core->mac[ICR] = 0;
> +    }
> +
>       trace_e1000e_irq_icr_read_exit(core->mac[ICR]);
>       e1000e_update_interrupt_state(core);
>       return ret;
> diff --git a/hw/net/trace-events b/hw/net/trace-events
> index e18f883cfd..46e40fcfa9 100644
> --- a/hw/net/trace-events
> +++ b/hw/net/trace-events
> @@ -237,6 +237,7 @@ e1000e_irq_icr_read_entry(uint32_t icr)
"Starting ICR read. Current ICR: 0x%x"
>   e1000e_irq_icr_read_exit(uint32_t icr) "Ending ICR read.
Current ICR: 0x%x"
>   e1000e_irq_icr_clear_zero_ims(void) "Clearing ICR on read due
to zero IMS"
>   e1000e_irq_icr_clear_iame(void) "Clearing ICR on read due to IAME"
> +e1000e_irq_icr_clear_icr_bit_ims(uint32_t icr, uint32_t ims)
"Clearing ICR on read due corresponding IMS bit: 0x%x & 0x%x"
>   e1000e_irq_iam_clear_eiame(uint32_t iam, uint32_t cause)
"Clearing IMS due to EIAME, IAM: 0x%X, cause: 0x%X"
>   e1000e_irq_icr_clear_eiac(uint32_t icr, uint32_t eiac)
"Clearing ICR bits due to EIAC, ICR: 0x%X, EIAC: 0x%X"
>   e1000e_irq_ims_clear_set_imc(uint32_t val) "Clearing IMS bits
due to IMC write 0x%x"