[PATCH v1] xen/docs: design doc for GICv4.0 vLPI support

2023-06-24 Thread Penny Zheng
This is a design doc for GICv4.0 vLPI support.

Signed-off-by: Penny Zheng 
---
 docs/designs/gicv4_vlpi.md | 333 +
 1 file changed, 333 insertions(+)
 create mode 100644 docs/designs/gicv4_vlpi.md

diff --git a/docs/designs/gicv4_vlpi.md b/docs/designs/gicv4_vlpi.md
new file mode 100644
index 00..9a1969d7cc
--- /dev/null
+++ b/docs/designs/gicv4_vlpi.md
@@ -0,0 +1,333 @@
+# GICv4.0 Virtual LPI Support
+
+We will have four stages to add GICv4.0/GICv4.1 support to Xen.
+
+   * Stage#1: Add GICv4.0 Virtual LPI support
+   * Stage#2: Add GICv4.0 Virtual SGI support
+   * Stage#3: Add GICv4.1 Virtual LPI support
+   * Stage#4: Add GICv4.1 Virtual SGI support
+
+This design doc is only for "Stage#1: Add GICv4.0 Virtual LPI support".
+
+# Introduction
+
+In GICv3, the hypervisor uses the system registers to present LPIs to a
+virtualized system. A virtual LPI (vLPI) is generated when the hypervisor
+writes to a List register. GICv4.0 adds support for the direct injection of
+vLPIs, with no hypervisor involvement at runtime.
+
+With the direct injection of vLPIs, the GICR_* registers use structures in
+memory to hold the vLPI configuration and pending state for each vPE, in the
+same way that they use structures in memory to hold the configuration and
+pending state of physical LPIs.
+
+The following summarises the hardware and serves as a set of assumptions
+for the GICv4.0 virtual LPI support software design. For full details see
+the "GIC Architecture Specification"[1].
+
+This design refers to the Linux KVM GICv4 patches[2], adapting them to the
+Xen GIC virtualization framework.
+
+# Hardware background
+
+## 4.0 ITS with direct injection of virtual LPI interrupts
+
+The GICv4.0 ITS can map an EventID and a DeviceID to a vINTID associated
+with a vPE.
+
+### vPE table
+
+The vPE table consists of vPE table entries that provide a mapping from the
+vPEID generated by the ITS to:
+
+  * The target Redistributor, in the format defined by GITS_TYPER.PTA.
+  * The base address of the virtual LPI Pending table associated with the
+target vPE.
+
+An area of memory defined by GITS_BASER2 holds the vPE table and indicates
+the size of each entry in the table.
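+
+Software does not write vPE table entries directly: the hypervisor issues a
+VMAPP command and the ITS populates the entry. Below is a minimal,
+illustrative sketch of building a GICv4.0 VMAPP command. The bit positions
+follow the Linux GICv4 driver's encoding helpers; the its_cmd_block
+structure and its_mask_encode() helper are assumptions for illustration,
+not actual Xen code.
+
+```c
+#include <stdint.h>
+
+/* An ITS command is four 64-bit doublewords (32 bytes). */
+struct its_cmd_block {
+    uint64_t raw_cmd[4];
+};
+
+/* Set bits [h:l] of *raw to val, clearing the old field contents first. */
+static void its_mask_encode(uint64_t *raw, uint64_t val, int h, int l)
+{
+    uint64_t mask = (h == 63) ? (~0ULL << l)
+                              : (((1ULL << (h + 1)) - 1) & ~((1ULL << l) - 1));
+
+    *raw &= ~mask;
+    *raw |= (val << l) & mask;
+}
+
+#define GITS_CMD_VMAPP 0x29
+
+/*
+ * Map vpe_id to a target Redistributor and a VPT (GICv4.0 format).
+ * rd_base is in the format selected by GITS_TYPER.PTA; vpt_pa is the
+ * physical address of the vPE's virtual LPI Pending table.
+ */
+static void its_build_vmapp(struct its_cmd_block *cmd, uint16_t vpe_id,
+                            uint64_t rd_base, uint64_t vpt_pa,
+                            uint8_t vpt_size)
+{
+    its_mask_encode(&cmd->raw_cmd[0], GITS_CMD_VMAPP, 7, 0);
+    its_mask_encode(&cmd->raw_cmd[1], vpe_id, 47, 32);
+    its_mask_encode(&cmd->raw_cmd[2], 1, 63, 63);             /* V = valid */
+    its_mask_encode(&cmd->raw_cmd[2], rd_base >> 16, 51, 16); /* target RD */
+    its_mask_encode(&cmd->raw_cmd[3], vpt_pa >> 16, 51, 16);  /* VPT base */
+    its_mask_encode(&cmd->raw_cmd[3], vpt_size, 4, 0);        /* VPT size */
+}
+```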
+
+### Doorbell interrupt
+
+Virtual interrupts can be directly injected for the *scheduled vPE*.
+If the target vPE is not scheduled, the virtual interrupt is recorded as
+pending in the appropriate VPT (virtual LPI Pending table).
+
+Besides this, we can configure a physical LPI that is sent to a PE when the
+vLPI becomes pending and the vPE is not scheduled on that PE. This physical
+LPI is a doorbell interrupt.
+
+### ITT table with vLPI and doorbell interrupt support
+
+We can use the ITS VMAPTI command to write a new ITTE (Interrupt Translation
+Table Entry) into the ITT (Interrupt Translation Table) for a direct
+EventID/vLPI pair (see the sketch below). The new entry is configured with:
+
+   * A control flag that indicates that the EventID is associated with a
+virtual LPI.
+   * A vPEID to index into the ITS vPE table.
+   * A virtual INTID (vINTID) that indicates which vLPI becomes pending.
+   * A physical INTID (pINTID) that can be used as a doorbell interrupt to the
+hypervisor if the vPE is not scheduled on a PE. The value 1023 is used where a
+doorbell interrupt is not required; otherwise an INTID in the physical LPI
+range must be provided.
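+
+As an illustration, the sketch below builds a VMAPTI command carrying the
+fields listed above. It reuses the hypothetical its_cmd_block structure and
+its_mask_encode() helper from the VMAPP sketch; the bit positions follow
+the Linux GICv4 driver and should be checked against the specification.
+
+```c
+#define GITS_CMD_VMAPTI 0x2a
+
+/* Architectural "no doorbell" value: INTID 1023. */
+#define NO_DOORBELL 1023
+
+/*
+ * Map (dev_id, event_id) to vlpi_id on vpe_id. If db_lpi is NO_DOORBELL,
+ * no doorbell interrupt is raised when the vPE is not scheduled.
+ */
+static void its_build_vmapti(struct its_cmd_block *cmd, uint32_t dev_id,
+                             uint32_t event_id, uint16_t vpe_id,
+                             uint32_t vlpi_id, uint32_t db_lpi)
+{
+    its_mask_encode(&cmd->raw_cmd[0], GITS_CMD_VMAPTI, 7, 0);
+    its_mask_encode(&cmd->raw_cmd[0], dev_id, 63, 32);  /* DeviceID */
+    its_mask_encode(&cmd->raw_cmd[1], event_id, 31, 0); /* EventID */
+    its_mask_encode(&cmd->raw_cmd[1], vpe_id, 47, 32);  /* vPEID */
+    its_mask_encode(&cmd->raw_cmd[2], vlpi_id, 31, 0);  /* vINTID */
+    its_mask_encode(&cmd->raw_cmd[2], db_lpi, 63, 32);  /* doorbell pINTID */
+}
+```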
+
+### New ITS commands summary
+
+The commands used to control the handling of virtual LPIs are as follows:
+
+* VINVALL
+* VMAPI
+* VMAPP (GICv4.0 format)
+* VMAPTI
+* VMOVI
+* VMOVP (GICv4.0 format)
+* VSYNC
+
+## 4.0 Redistributor with direct injection of virtual LPI interrupts
+
+### GICR_VPROPBASER
+
+This register sets the address of the virtual LPI Configuration table, which
+records the configuration of vLPIs.
+
+The configuration of vLPIs is global to all vPEs in the same VM, so we shall
+assume that all vPEs in a VM share the same copy of the virtual LPI
+Configuration table.
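+
+A minimal sketch of pointing a Redistributor at the per-VM virtual LPI
+Configuration table. The field layout is assumed to mirror GICR_PROPBASER
+(IDbits in [4:0], physical address in [51:12]); vpropbaser_write() is a
+hypothetical MMIO accessor, and the cacheability/shareability attributes
+are omitted:
+
+```c
+#include <stdint.h>
+
+extern void vpropbaser_write(uint64_t val); /* hypothetical MMIO accessor */
+
+/* Share one vLPI Configuration table among all vPEs of a domain. */
+static void vpe_set_prop_table(uint64_t prop_pa, unsigned int id_bits)
+{
+    uint64_t val = (prop_pa & ((1ULL << 52) - 1) & ~((1ULL << 12) - 1)) |
+                   ((uint64_t)(id_bits - 1) & 0x1f);
+
+    vpropbaser_write(val);
+}
+```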
+
+### GICR_VPENDBASER
+
+This register sets the address of the virtual LPI Pending table (VPT), which
+records the pending state of the vLPIs. Each vPE has its own private VPT.
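+
+Scheduling a vPE on a PE amounts to pointing that PE's GICR_VPENDBASER at
+the vPE's VPT and setting the Valid bit; descheduling clears Valid and
+waits for the Redistributor to finish writing back the pending state. A
+minimal sketch, assuming the GICv4.0 bit positions (Valid is bit 63, Dirty
+is bit 60, VPT address in [51:16]) and hypothetical MMIO accessors:
+
+```c
+#include <stdint.h>
+
+#define GICR_VPENDBASER_VALID (1ULL << 63)
+#define GICR_VPENDBASER_DIRTY (1ULL << 60)
+
+/* Hypothetical accessors for the local Redistributor's GICR_VPENDBASER. */
+extern uint64_t vpendbaser_read(void);
+extern void vpendbaser_write(uint64_t val);
+
+/* Make this vPE the resident one: its vLPIs are now injected directly. */
+static void vpe_schedule(uint64_t vpt_pa)
+{
+    uint64_t val = (vpt_pa & ((1ULL << 52) - 1) & ~((1ULL << 16) - 1)) |
+                   GICR_VPENDBASER_VALID;
+
+    vpendbaser_write(val);
+}
+
+/* Deschedule: clear Valid, then poll Dirty until the VPT is up to date. */
+static void vpe_deschedule(void)
+{
+    vpendbaser_write(vpendbaser_read() & ~GICR_VPENDBASER_VALID);
+    while (vpendbaser_read() & GICR_VPENDBASER_DIRTY)
+        ;
+}
+```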
+
+# Implementation on Xen
+
+## Probe GICv4.0
+
+GICv4.0 is an extension of GICv3 and reuses many of the GICv3 routines.
+To probe whether the hardware supports GICv4.0, check whether the
+Redistributors support direct injection of virtual LPIs (vLPIs) via
+GICR_TYPER.VLPIS, as sketched below.
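+
+A minimal probe sketch (GICR_TYPER.VLPIS is bit 1; gicr_typer_read() is a
+hypothetical accessor for one Redistributor's GICR_TYPER register):
+
+```c
+#include <stdbool.h>
+#include <stdint.h>
+
+#define GICR_TYPER_VLPIS (1ULL << 1)
+
+extern uint64_t gicr_typer_read(void); /* hypothetical MMIO accessor */
+
+/*
+ * Direct vLPI injection is usable only if every Redistributor advertises
+ * VLPIS, so the caller should AND this result across all Redistributors.
+ */
+static bool gicr_supports_vlpis(void)
+{
+    return gicr_typer_read() & GICR_TYPER_VLPIS;
+}
+```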
+
+## vPE initialization
+
+In Xen, we assign a vPE instance to each vCPU. When creating a VM, the low
+level GICv4 code is responsible for creating a vPE instance for each vCPU
+(a code sketch follows this list), which includes:
+
+  * allocating each vPE a unique VPEID. In Xen, we simply use the VCPUID
+as the VPEID.
+  * allocating a doorbell interrupt for each vPE, following the existing
+allocation of a free physical LPI.
+  * allocating the 
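+
+A rough sketch of the per-vCPU vPE setup listed above, assuming
+hypothetical helpers for LPI and memory allocation (illustrative only, not
+the actual Xen interface):
+
+```c
+#include <stdint.h>
+
+extern uint32_t alloc_physical_lpi(void); /* hypothetical, 0 on failure */
+extern void *alloc_vpt_page(void);        /* hypothetical, zeroed VPT page */
+
+struct its_vpe {
+    uint16_t vpe_id; /* VPEID: simply the VCPUID */
+    uint32_t db_lpi; /* doorbell: a freshly allocated physical LPI */
+    void *vpt;       /* this vPE's private virtual LPI Pending table */
+};
+
+static int vpe_init(struct its_vpe *vpe, unsigned int vcpu_id)
+{
+    vpe->vpe_id = vcpu_id;
+    vpe->db_lpi = alloc_physical_lpi();
+    vpe->vpt = alloc_vpt_page();
+    return (vpe->db_lpi && vpe->vpt) ? 0 : -1;
+}
+```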

[linux-linus test] 181580: regressions - FAIL

2023-06-24 Thread osstest service owner
flight 181580 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181580/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd  21 guest-start/debian.repeat  fail pass in 181573

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl   8 xen-boot fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 180278
 test-armhf-armhf-libvirt  8 xen-boot fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot fail  like 180278
 test-armhf-armhf-examine  8 reboot   fail  like 180278
 test-armhf-armhf-xl-rtds  8 xen-boot fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot fail like 180278
 test-armhf-armhf-xl-vhd   8 xen-boot fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 180278
 test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
 test-arm64-arm64-xl-vhd 14 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail never pass

version targeted for testing:
 linux                a92b7d26c743b9dc06d520f863d624e94978a1d9
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   69 days
Failing since180281  2023-04-17 06:24:36 Z   68 days  129 attempts
Testing same since   181573  2023-06-24 02:11:10 Z1 days2 attempts


2770 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  

[linux-5.4 test] 181577: regressions - FAIL

2023-06-24 Thread osstest service owner
flight 181577 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181577/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-build  fail in 181563 REGR. vs. 181363

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail in 181563 pass in 181577
 test-armhf-armhf-xl  18 guest-start/debian.repeat  fail pass in 181563

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)  blocked in 181563 n/a
 test-amd64-i386-examine-bios  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)blocked in 181563 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-xl-pvshim 1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-examine-bios  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-examine-uefi  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)  blocked in 181563 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-xl-shadow 1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)  blocked in 181563 n/a
 test-amd64-i386-examine   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)blocked in 181563 n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)  blocked in 181563 n/a
 test-amd64-i386-xl 1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 181563 n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)  blocked in 181563 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked 

[PATCH v3] xen: speed up grant-table reclaim

2023-06-24 Thread Demi Marie Obenour
When a grant entry is still in use by the remote domain, Linux must put
it on a deferred list.  Normally, this list is very short, because
the PV network and block protocols expect the backend to unmap the grant
first.  However, Qubes OS's GUI protocol is subject to the constraints
of the X Window System, and as such winds up with the frontend unmapping
the window first.  As a result, the list can grow very large, resulting
in a massive memory leak and eventual VM freeze.

To partially solve this problem, make the number of entries that the VM
will attempt to free at each iteration tunable.  The default is still
10, but it can be overridden at compile-time (via Kconfig), boot-time
(via a kernel command-line option), or runtime (via sysfs).

This is Cc: stable because (when combined with appropriate userspace
changes) it fixes a severe performance and stability problem for Qubes
OS users.

Cc: sta...@vger.kernel.org
Signed-off-by: Demi Marie Obenour 
---
 drivers/xen/grant-table.c | 40 ---
 2 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 
e1ec725c2819d4d5dede063eb00d86a6d52944c0..fa666aa6abc3e786dddc94f895641505ec0b23d8
 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -498,14 +498,20 @@ static LIST_HEAD(deferred_list);
 static void gnttab_handle_deferred(struct timer_list *);
 static DEFINE_TIMER(deferred_timer, gnttab_handle_deferred);
 
+static atomic64_t deferred_count;
+static atomic64_t leaked_count;
+static unsigned int free_per_iteration = 10;
+
 static void gnttab_handle_deferred(struct timer_list *unused)
 {
-   unsigned int nr = 10;
+   unsigned int nr = READ_ONCE(free_per_iteration);
+   const bool ignore_limit = nr == 0;
struct deferred_entry *first = NULL;
unsigned long flags;
+   size_t freed = 0;
 
spin_lock_irqsave(&gnttab_list_lock, flags);
-   while (nr--) {
+   while ((ignore_limit || nr--) && !list_empty(&deferred_list)) {
struct deferred_entry *entry
= list_first_entry(&deferred_list,
   struct deferred_entry, list);
@@ -515,10 +521,13 @@ static void gnttab_handle_deferred(struct timer_list *unused)
list_del(&entry->list);
spin_unlock_irqrestore(&gnttab_list_lock, flags);
if (_gnttab_end_foreign_access_ref(entry->ref)) {
+   uint64_t ret = atomic64_sub_return(1, &deferred_count);
put_free_entry(entry->ref);
-   pr_debug("freeing g.e. %#x (pfn %#lx)\n",
-entry->ref, page_to_pfn(entry->page));
+   pr_debug("freeing g.e. %#x (pfn %#lx), %llu 
remaining\n",
+entry->ref, page_to_pfn(entry->page),
+(unsigned long long)ret);
put_page(entry->page);
+   freed++;
kfree(entry);
entry = NULL;
} else {
@@ -530,21 +539,22 @@ static void gnttab_handle_deferred(struct timer_list *unused)
spin_lock_irqsave(&gnttab_list_lock, flags);
if (entry)
list_add_tail(&entry->list, &deferred_list);
-   else if (list_empty(&deferred_list))
-   break;
}
-   if (!list_empty(&deferred_list) && !timer_pending(&deferred_timer)) {
+   if (list_empty(&deferred_list))
+   WARN_ON(atomic64_read(&deferred_count));
+   else if (!timer_pending(&deferred_timer)) {
deferred_timer.expires = jiffies + HZ;
add_timer(&deferred_timer);
}
spin_unlock_irqrestore(&gnttab_list_lock, flags);
+   pr_debug("Freed %zu references", freed);
 }
 
 static void gnttab_add_deferred(grant_ref_t ref, struct page *page)
 {
struct deferred_entry *entry;
gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
-   const char *what = KERN_WARNING "leaking";
+   uint64_t leaked, deferred;
 
entry = kmalloc(sizeof(*entry), gfp);
if (!page) {
@@ -567,12 +577,20 @@ static void gnttab_add_deferred(grant_ref_t ref, struct page *page)
add_timer(&deferred_timer);
}
spin_unlock_irqrestore(&gnttab_list_lock, flags);
-   what = KERN_DEBUG "deferring";
+   deferred = atomic64_add_return(1, &deferred_count);
+   leaked = atomic64_read(&leaked_count);
+   pr_debug("deferring g.e. %#x (pfn %#lx) (total deferred %llu, total leaked %llu)\n",
+ref, page ? page_to_pfn(page) : -1, deferred, leaked);
+   } else {
+   deferred = atomic64_read(&deferred_count);
+   leaked = atomic64_add_return(1, &leaked_count);
+   pr_warn("leaking g.e. %#x (pfn %#lx) (total deferred %llu, total leaked %llu)\n",
+   ref, page ? page_to_pfn(page) : -1, deferred, leaked);
   

[xen-unstable test] 181575: tolerable trouble: fail/pass/starved

2023-06-24 Thread osstest service owner
flight 181575 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181575/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-migrupgrade  11 xen-install/dst_host   fail pass in 181565
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 181565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 181565 like 181558
 test-arm64-arm64-xl-xsm 15 migrate-support-check fail in 181565 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 181565 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 181565 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 181565 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 181565 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 181565 never pass
 test-arm64-arm64-xl-vhd 14 migrate-support-check fail in 181565 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 181565 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 181565
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 181565
 test-armhf-armhf-libvirt 16 saverestore-support-check fail like 181565
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 181565
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 181565
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 181565
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 181565
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 181565
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 181565
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail like 181565
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 181565
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 181565
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim 14 guest-start fail never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt 15 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl 15 migrate-support-check fail never pass
 test-armhf-armhf-xl 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm   3 hosts-allocate   starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate   starved  n/a
 

[PATCH v3 16/16] accel: Rename HVF 'struct hvf_vcpu_state' -> AccelCPUState

2023-06-24 Thread Philippe Mathieu-Daudé
We want all accelerators to share the same opaque pointer in
CPUState.

Rename the 'hvf_vcpu_state' structure as 'AccelCPUState'.

Use the generic 'accel' field of CPUState instead of 'hvf'.

Replace g_malloc0() by g_new0() for readability.

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
Not even built on x86!
---
 include/hw/core/cpu.h   |   4 -
 include/sysemu/hvf_int.h|   2 +-
 target/i386/hvf/vmx.h   |  22 ++--
 accel/hvf/hvf-accel-ops.c   |  18 ++--
 target/arm/hvf/hvf.c| 108 +--
 target/i386/hvf/hvf.c   | 104 +-
 target/i386/hvf/x86.c   |  28 ++---
 target/i386/hvf/x86_descr.c |  26 ++---
 target/i386/hvf/x86_emu.c   |  62 +--
 target/i386/hvf/x86_mmu.c   |   4 +-
 target/i386/hvf/x86_task.c  |  10 +-
 target/i386/hvf/x86hvf.c| 208 ++--
 12 files changed, 296 insertions(+), 300 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 8b40946afc..44c91240f2 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -240,8 +240,6 @@ typedef struct SavedIOTLB {
 struct KVMState;
 struct kvm_run;
 
-struct hvf_vcpu_state;
-
 /* work queue */
 
 /* The union type allows passing of 64 bit target pointers on 32 bit
@@ -441,8 +439,6 @@ struct CPUState {
 /* Used for user-only emulation of prctl(PR_SET_UNALIGN). */
 bool prctl_unalign_sigbus;
 
-struct hvf_vcpu_state *hvf;
-
 /* track IOMMUs whose translations we've cached in the TCG TLB */
 GArray *iommu_notifiers;
 };
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 6ab119e49f..718beddcdd 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -49,7 +49,7 @@ struct HVFState {
 };
 extern HVFState *hvf_state;
 
-struct hvf_vcpu_state {
+struct AccelCPUState {
 uint64_t fd;
 void *exit;
 bool vtimer_masked;
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
index fcd9a95e5b..0fffcfa46c 100644
--- a/target/i386/hvf/vmx.h
+++ b/target/i386/hvf/vmx.h
@@ -180,15 +180,15 @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
 uint64_t val;
 
 /* BUG, should take considering overlap.. */
-wreg(cpu->hvf->fd, HV_X86_RIP, rip);
+wreg(cpu->accel->fd, HV_X86_RIP, rip);
 env->eip = rip;
 
 /* after moving forward in rip, we need to clean INTERRUPTABILITY */
-   val = rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
+   val = rvmcs(cpu->accel->fd, VMCS_GUEST_INTERRUPTIBILITY);
if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
 env->hflags &= ~HF_INHIBIT_IRQ_MASK;
-wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY,
+wvmcs(cpu->accel->fd, VMCS_GUEST_INTERRUPTIBILITY,
val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING));
}
@@ -200,9 +200,9 @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
CPUX86State *env = &x86_cpu->env;
 
 env->hflags2 &= ~HF2_NMI_MASK;
-uint32_t gi = (uint32_t) rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
+uint32_t gi = (uint32_t) rvmcs(cpu->accel->fd, VMCS_GUEST_INTERRUPTIBILITY);
 gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
-wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
+wvmcs(cpu->accel->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
 }
 
 static inline void vmx_set_nmi_blocking(CPUState *cpu)
@@ -211,16 +211,16 @@ static inline void vmx_set_nmi_blocking(CPUState *cpu)
CPUX86State *env = &x86_cpu->env;
 
 env->hflags2 |= HF2_NMI_MASK;
-uint32_t gi = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
+uint32_t gi = (uint32_t)rvmcs(cpu->accel->fd, VMCS_GUEST_INTERRUPTIBILITY);
 gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
-wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
+wvmcs(cpu->accel->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
 }
 
 static inline void vmx_set_nmi_window_exiting(CPUState *cpu)
 {
 uint64_t val;
-val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
-wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
+val = rvmcs(cpu->accel->fd, VMCS_PRI_PROC_BASED_CTLS);
+wvmcs(cpu->accel->fd, VMCS_PRI_PROC_BASED_CTLS, val |
   VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
 
 }
@@ -229,8 +229,8 @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu)
 {
 
 uint64_t val;
-val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
-wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
+val = rvmcs(cpu->accel->fd, VMCS_PRI_PROC_BASED_CTLS);
+wvmcs(cpu->accel->fd, VMCS_PRI_PROC_BASED_CTLS, val &
   ~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
 }
 
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 9c3da03c94..444d6aaaec 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -372,19 +372,19 @@ type_init(hvf_type_init);
 
 static void hvf_vcpu_destroy(CPUState *cpu)
 {
-   

[PATCH v3 12/16] accel: Remove WHPX unreachable error path

2023-06-24 Thread Philippe Mathieu-Daudé
g_new0() can not fail. Remove the unreachable error path.

https://developer-old.gnome.org/glib/stable/glib-Memory-Allocation.html#glib-Memory-Allocation.description

Reported-by: Richard Henderson 
Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 target/i386/whpx/whpx-all.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 410b34d8ec..cad7bd0f88 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -2179,12 +2179,6 @@ int whpx_init_vcpu(CPUState *cpu)
 
 vcpu = g_new0(struct whpx_vcpu, 1);
 
-if (!vcpu) {
-error_report("WHPX: Failed to allocte VCPU context.");
-ret = -ENOMEM;
-goto error;
-}
-
 hr = whp_dispatch.WHvEmulatorCreateEmulator(
 &whpx_emu_callbacks,
 &vcpu->emulator);
-- 
2.38.1




[PATCH v3 09/16] accel: Remove NVMM unreachable error path

2023-06-24 Thread Philippe Mathieu-Daudé
g_malloc0() can not fail. Remove the unreachable error path.

https://developer-old.gnome.org/glib/stable/glib-Memory-Allocation.html#glib-Memory-Allocation.description

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 target/i386/nvmm/nvmm-all.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index b3c3adc59a..90e9e0a5b2 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -943,10 +943,6 @@ nvmm_init_vcpu(CPUState *cpu)
 }
 
 qcpu = g_malloc0(sizeof(*qcpu));
-if (qcpu == NULL) {
-error_report("NVMM: Failed to allocate VCPU context.");
-return -ENOMEM;
-}
 
ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu);
 if (ret == -1) {
-- 
2.38.1




[PATCH v3 14/16] accel: Inline WHPX get_whpx_vcpu()

2023-06-24 Thread Philippe Mathieu-Daudé
No need for this helper to access the CPUState::accel field.

Reviewed-by: Richard Henderson 
Signed-off-by: Philippe Mathieu-Daudé 
---
 target/i386/whpx/whpx-all.c | 29 ++---
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 4ddd2d076a..0903327ac5 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -256,15 +256,6 @@ static bool whpx_has_xsave(void)
 return whpx_xsave_cap.XsaveSupport;
 }
 
-/*
- * VP support
- */
-
-static AccelCPUState *get_whpx_vcpu(CPUState *cpu)
-{
-return (AccelCPUState *)cpu->accel;
-}
-
 static WHV_X64_SEGMENT_REGISTER whpx_seg_q2h(const SegmentCache *qs, int v86,
  int r86)
 {
@@ -390,7 +381,7 @@ static uint64_t whpx_cr8_to_apic_tpr(uint64_t cr8)
 static void whpx_set_registers(CPUState *cpu, int level)
 {
struct whpx_state *whpx = &whpx_global;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct whpx_register_set vcxt;
@@ -609,7 +600,7 @@ static void whpx_get_xcrs(CPUState *cpu)
 static void whpx_get_registers(CPUState *cpu)
 {
struct whpx_state *whpx = &whpx_global;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct whpx_register_set vcxt;
@@ -892,7 +883,7 @@ static const WHV_EMULATOR_CALLBACKS whpx_emu_callbacks = {
 static int whpx_handle_mmio(CPUState *cpu, WHV_MEMORY_ACCESS_CONTEXT *ctx)
 {
 HRESULT hr;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 WHV_EMULATOR_STATUS emu_status;
 
 hr = whp_dispatch.WHvEmulatorTryMmioEmulation(
@@ -917,7 +908,7 @@ static int whpx_handle_portio(CPUState *cpu,
   WHV_X64_IO_PORT_ACCESS_CONTEXT *ctx)
 {
 HRESULT hr;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 WHV_EMULATOR_STATUS emu_status;
 
 hr = whp_dispatch.WHvEmulatorTryIoEmulation(
@@ -1417,7 +1408,7 @@ static vaddr whpx_vcpu_get_pc(CPUState *cpu, bool exit_context_valid)
  * of QEMU, nor this port by calling WHvSetVirtualProcessorRegisters().
  * This is the most common case.
  */
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 return vcpu->exit_ctx.VpContext.Rip;
 } else {
 /*
@@ -1468,7 +1459,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 {
 HRESULT hr;
struct whpx_state *whpx = &whpx_global;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 int irq;
@@ -1590,7 +1581,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
 static void whpx_vcpu_post_run(CPUState *cpu)
 {
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -1617,7 +1608,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 
 if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
 !(env->hflags & HF_SMM_MASK)) {
@@ -1656,7 +1647,7 @@ static int whpx_vcpu_run(CPUState *cpu)
 {
 HRESULT hr;
struct whpx_state *whpx = &whpx_global;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 struct whpx_breakpoint *stepped_over_bp = NULL;
 WhpxStepMode exclusive_step_mode = WHPX_STEP_NONE;
 int ret;
@@ -2290,7 +2281,7 @@ int whpx_vcpu_exec(CPUState *cpu)
 void whpx_destroy_vcpu(CPUState *cpu)
 {
struct whpx_state *whpx = &whpx_global;
-AccelCPUState *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = cpu->accel;
 
 whp_dispatch.WHvDeleteVirtualProcessor(whpx->partition, cpu->cpu_index);
 whp_dispatch.WHvEmulatorDestroyEmulator(vcpu->emulator);
-- 
2.38.1




[PATCH v3 08/16] accel: Move HAX hThread to accelerator context

2023-06-24 Thread Philippe Mathieu-Daudé
hThread variable is only used by the HAX accelerator,
so move it to the accelerator specific context.

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 include/hw/core/cpu.h   | 1 -
 target/i386/hax/hax-i386.h  | 3 +++
 target/i386/hax/hax-accel-ops.c | 2 +-
 target/i386/hax/hax-all.c   | 2 +-
 target/i386/hax/hax-windows.c   | 2 +-
 5 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index a7fae8571e..8b40946afc 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -337,7 +337,6 @@ struct CPUState {
 
 struct QemuThread *thread;
 #ifdef _WIN32
-HANDLE hThread;
 QemuSemaphore sem;
 #endif
 int thread_id;
diff --git a/target/i386/hax/hax-i386.h b/target/i386/hax/hax-i386.h
index 4372ee596d..87153f40ab 100644
--- a/target/i386/hax/hax-i386.h
+++ b/target/i386/hax/hax-i386.h
@@ -27,6 +27,9 @@ typedef HANDLE hax_fd;
 extern struct hax_state hax_global;
 
 struct AccelCPUState {
+#ifdef _WIN32
+HANDLE hThread;
+#endif
 hax_fd fd;
 int vcpu_id;
 struct hax_tunnel *tunnel;
diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index a8512efcd5..5031096760 100644
--- a/target/i386/hax/hax-accel-ops.c
+++ b/target/i386/hax/hax-accel-ops.c
@@ -73,7 +73,7 @@ static void hax_start_vcpu_thread(CPUState *cpu)
cpu, QEMU_THREAD_JOINABLE);
 assert(cpu->accel);
 #ifdef _WIN32
-cpu->hThread = qemu_thread_get_handle(cpu->thread);
+cpu->accel->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
 }
 
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 9d9011cc38..18d78e5b6b 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -206,7 +206,7 @@ int hax_vcpu_destroy(CPUState *cpu)
 hax_close_fd(vcpu->fd);
 hax_global.vm->vcpus[vcpu->vcpu_id] = NULL;
 #ifdef _WIN32
-CloseHandle(cpu->hThread);
+CloseHandle(vcpu->hThread);
 #endif
 g_free(vcpu);
 cpu->accel = NULL;
diff --git a/target/i386/hax/hax-windows.c b/target/i386/hax/hax-windows.c
index bf4b0ad941..4bf6cc08d2 100644
--- a/target/i386/hax/hax-windows.c
+++ b/target/i386/hax/hax-windows.c
@@ -476,7 +476,7 @@ void hax_kick_vcpu_thread(CPUState *cpu)
  */
 cpu->exit_request = 1;
 if (!qemu_cpu_is_self(cpu)) {
-if (!QueueUserAPC(dummy_apc_func, cpu->hThread, 0)) {
+if (!QueueUserAPC(dummy_apc_func, cpu->accel->hThread, 0)) {
 fprintf(stderr, "%s: QueueUserAPC failed with error %lu\n",
 __func__, GetLastError());
 exit(1);
-- 
2.38.1




[PATCH v3 13/16] accel: Rename WHPX 'struct whpx_vcpu' -> AccelCPUState

2023-06-24 Thread Philippe Mathieu-Daudé
We want all accelerators to share the same opaque pointer in
CPUState. Rename WHPX 'whpx_vcpu' as 'AccelCPUState'; use
the typedef.

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 target/i386/whpx/whpx-all.c | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index cad7bd0f88..4ddd2d076a 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -229,7 +229,7 @@ typedef enum WhpxStepMode {
 WHPX_STEP_EXCLUSIVE,
 } WhpxStepMode;
 
-struct whpx_vcpu {
+struct AccelCPUState {
 WHV_EMULATOR_HANDLE emulator;
 bool window_registered;
 bool interruptable;
@@ -260,9 +260,9 @@ static bool whpx_has_xsave(void)
  * VP support
  */
 
-static struct whpx_vcpu *get_whpx_vcpu(CPUState *cpu)
+static AccelCPUState *get_whpx_vcpu(CPUState *cpu)
 {
-return (struct whpx_vcpu *)cpu->accel;
+return (AccelCPUState *)cpu->accel;
 }
 
 static WHV_X64_SEGMENT_REGISTER whpx_seg_q2h(const SegmentCache *qs, int v86,
@@ -390,7 +390,7 @@ static uint64_t whpx_cr8_to_apic_tpr(uint64_t cr8)
 static void whpx_set_registers(CPUState *cpu, int level)
 {
struct whpx_state *whpx = &whpx_global;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct whpx_register_set vcxt;
@@ -609,7 +609,7 @@ static void whpx_get_xcrs(CPUState *cpu)
 static void whpx_get_registers(CPUState *cpu)
 {
struct whpx_state *whpx = &whpx_global;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct whpx_register_set vcxt;
@@ -892,7 +892,7 @@ static const WHV_EMULATOR_CALLBACKS whpx_emu_callbacks = {
 static int whpx_handle_mmio(CPUState *cpu, WHV_MEMORY_ACCESS_CONTEXT *ctx)
 {
 HRESULT hr;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 WHV_EMULATOR_STATUS emu_status;
 
 hr = whp_dispatch.WHvEmulatorTryMmioEmulation(
@@ -917,7 +917,7 @@ static int whpx_handle_portio(CPUState *cpu,
   WHV_X64_IO_PORT_ACCESS_CONTEXT *ctx)
 {
 HRESULT hr;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 WHV_EMULATOR_STATUS emu_status;
 
 hr = whp_dispatch.WHvEmulatorTryIoEmulation(
@@ -1417,7 +1417,7 @@ static vaddr whpx_vcpu_get_pc(CPUState *cpu, bool exit_context_valid)
  * of QEMU, nor this port by calling WHvSetVirtualProcessorRegisters().
  * This is the most common case.
  */
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 return vcpu->exit_ctx.VpContext.Rip;
 } else {
 /*
@@ -1468,7 +1468,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 {
 HRESULT hr;
struct whpx_state *whpx = &whpx_global;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 int irq;
@@ -1590,7 +1590,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
 static void whpx_vcpu_post_run(CPUState *cpu)
 {
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -1617,7 +1617,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 
 if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
 !(env->hflags & HF_SMM_MASK)) {
@@ -1656,7 +1656,7 @@ static int whpx_vcpu_run(CPUState *cpu)
 {
 HRESULT hr;
struct whpx_state *whpx = &whpx_global;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 struct whpx_breakpoint *stepped_over_bp = NULL;
 WhpxStepMode exclusive_step_mode = WHPX_STEP_NONE;
 int ret;
@@ -2154,7 +2154,7 @@ int whpx_init_vcpu(CPUState *cpu)
 {
 HRESULT hr;
struct whpx_state *whpx = &whpx_global;
-struct whpx_vcpu *vcpu = NULL;
+AccelCPUState *vcpu = NULL;
 Error *local_error = NULL;
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
@@ -2177,7 +2177,7 @@ int whpx_init_vcpu(CPUState *cpu)
 }
 }
 
-vcpu = g_new0(struct whpx_vcpu, 1);
+vcpu = g_new0(AccelCPUState, 1);
 
 hr = whp_dispatch.WHvEmulatorCreateEmulator(
 &whpx_emu_callbacks,
@@ -2290,7 +2290,7 @@ int whpx_vcpu_exec(CPUState *cpu)
 void whpx_destroy_vcpu(CPUState *cpu)
 {
struct whpx_state *whpx = &whpx_global;
-struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+AccelCPUState *vcpu = get_whpx_vcpu(cpu);
 
 

[PATCH v3 11/16] accel: Inline NVMM get_qemu_vcpu()

2023-06-24 Thread Philippe Mathieu-Daudé
No need for this helper to access the CPUState::accel field.

Reviewed-by: Richard Henderson 
Signed-off-by: Philippe Mathieu-Daudé 
---
 target/i386/nvmm/nvmm-all.c | 28 +++-
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index e5ee4af084..72a3a9e3ae 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -49,12 +49,6 @@ struct qemu_machine {
 static bool nvmm_allowed;
 static struct qemu_machine qemu_mach;
 
-static AccelCPUState *
-get_qemu_vcpu(CPUState *cpu)
-{
-return cpu->accel;
-}
-
 static struct nvmm_machine *
 get_nvmm_mach(void)
 {
@@ -86,7 +80,7 @@ nvmm_set_registers(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 struct nvmm_x64_state *state = vcpu->state;
 uint64_t bitmap;
@@ -223,7 +217,7 @@ nvmm_get_registers(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -347,7 +341,7 @@ static bool
 nvmm_can_take_int(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 struct nvmm_machine *mach = get_nvmm_mach();
 
@@ -372,7 +366,7 @@ nvmm_can_take_int(CPUState *cpu)
 static bool
 nvmm_can_take_nmi(CPUState *cpu)
 {
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
 
 /*
  * Contrary to INTs, NMIs always schedule an exit when they are
@@ -395,7 +389,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -478,7 +472,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 static void
 nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
 {
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 uint64_t tpr;
@@ -565,7 +559,7 @@ static int
 nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
 struct nvmm_vcpu_exit *exit)
 {
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -610,7 +604,7 @@ static int
 nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
 struct nvmm_vcpu_exit *exit)
 {
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -686,7 +680,7 @@ nvmm_vcpu_loop(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_vcpu_exit *exit = vcpu->exit;
@@ -892,7 +886,7 @@ static void
 nvmm_ipi_signal(int sigcpu)
 {
 if (current_cpu) {
-AccelCPUState *qcpu = get_qemu_vcpu(current_cpu);
+AccelCPUState *qcpu = current_cpu->accel;
 #if NVMM_USER_VERSION >= 2
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 nvmm_vcpu_stop(vcpu);
@@ -1023,7 +1017,7 @@ void
 nvmm_destroy_vcpu(CPUState *cpu)
 {
 struct nvmm_machine *mach = get_nvmm_mach();
-AccelCPUState *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = cpu->accel;
 
nvmm_vcpu_destroy(mach, &qcpu->vcpu);
 g_free(cpu->accel);
-- 
2.38.1




[PATCH v3 10/16] accel: Rename NVMM 'struct qemu_vcpu' -> AccelCPUState

2023-06-24 Thread Philippe Mathieu-Daudé
We want all accelerators to share the same opaque pointer in
CPUState. Rename NVMM 'qemu_vcpu' as 'AccelCPUState'; directly
use the typedef, remove unnecessary casts.

Reviewed-by: Richard Henderson 
Signed-off-by: Philippe Mathieu-Daudé 
---
 target/i386/nvmm/nvmm-all.c | 32 
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index 90e9e0a5b2..e5ee4af084 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -26,7 +26,7 @@
 
 #include 
 
-struct qemu_vcpu {
+struct AccelCPUState {
 struct nvmm_vcpu vcpu;
 uint8_t tpr;
 bool stop;
@@ -49,10 +49,10 @@ struct qemu_machine {
 static bool nvmm_allowed;
 static struct qemu_machine qemu_mach;
 
-static struct qemu_vcpu *
+static AccelCPUState *
 get_qemu_vcpu(CPUState *cpu)
 {
-return (struct qemu_vcpu *)cpu->accel;
+return cpu->accel;
 }
 
 static struct nvmm_machine *
@@ -86,7 +86,7 @@ nvmm_set_registers(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 struct nvmm_x64_state *state = vcpu->state;
 uint64_t bitmap;
@@ -223,7 +223,7 @@ nvmm_get_registers(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -347,7 +347,7 @@ static bool
 nvmm_can_take_int(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 struct nvmm_machine *mach = get_nvmm_mach();
 
@@ -372,7 +372,7 @@ nvmm_can_take_int(CPUState *cpu)
 static bool
 nvmm_can_take_nmi(CPUState *cpu)
 {
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
 
 /*
  * Contrary to INTs, NMIs always schedule an exit when they are
@@ -395,7 +395,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -478,7 +478,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 static void
 nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
 {
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
 CPUX86State *env = cpu->env_ptr;
 X86CPU *x86_cpu = X86_CPU(cpu);
 uint64_t tpr;
@@ -565,7 +565,7 @@ static int
 nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
 struct nvmm_vcpu_exit *exit)
 {
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -610,7 +610,7 @@ static int
 nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
 struct nvmm_vcpu_exit *exit)
 {
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_x64_state *state = vcpu->state;
@@ -686,7 +686,7 @@ nvmm_vcpu_loop(CPUState *cpu)
 {
 CPUX86State *env = cpu->env_ptr;
 struct nvmm_machine *mach = get_nvmm_mach();
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(cpu);
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 X86CPU *x86_cpu = X86_CPU(cpu);
 struct nvmm_vcpu_exit *exit = vcpu->exit;
@@ -892,7 +892,7 @@ static void
 nvmm_ipi_signal(int sigcpu)
 {
 if (current_cpu) {
-struct qemu_vcpu *qcpu = get_qemu_vcpu(current_cpu);
+AccelCPUState *qcpu = get_qemu_vcpu(current_cpu);
 #if NVMM_USER_VERSION >= 2
struct nvmm_vcpu *vcpu = &qcpu->vcpu;
 nvmm_vcpu_stop(vcpu);
@@ -926,7 +926,7 @@ nvmm_init_vcpu(CPUState *cpu)
 struct nvmm_vcpu_conf_cpuid cpuid;
 struct nvmm_vcpu_conf_tpr tpr;
 Error *local_error = NULL;
-struct qemu_vcpu *qcpu;
+AccelCPUState *qcpu;
 int ret, err;
 
 nvmm_init_cpu_signals();
@@ -942,7 +942,7 @@ nvmm_init_vcpu(CPUState *cpu)
 }
 }
 
-qcpu = g_malloc0(sizeof(*qcpu));
+qcpu = g_new0(AccelCPUState, 1);
 
ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu);
 if (ret == -1) {
@@ -1023,7 +1023,7 @@ void
 nvmm_destroy_vcpu(CPUState *cpu)
 {
 struct nvmm_machine *mach = get_nvmm_mach();
-struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+AccelCPUState *qcpu = 

[PATCH v3 15/16] accel: Rename 'cpu_state' -> 'cs'

2023-06-24 Thread Philippe Mathieu-Daudé
Most of the codebase uses 'CPUState *cpu' or 'CPUState *cs'.
While 'cpu_state' is kind of explicit, it makes the code
harder to review. Simply rename as 'cs'.

Acked-by: Richard Henderson 
Signed-off-by: Philippe Mathieu-Daudé 
---
 target/i386/hvf/x86hvf.h |  18 +-
 target/i386/hvf/x86hvf.c | 372 +++
 2 files changed, 195 insertions(+), 195 deletions(-)

diff --git a/target/i386/hvf/x86hvf.h b/target/i386/hvf/x86hvf.h
index db6003d6bd..423a89b6ad 100644
--- a/target/i386/hvf/x86hvf.h
+++ b/target/i386/hvf/x86hvf.h
@@ -20,15 +20,15 @@
 #include "cpu.h"
 #include "x86_descr.h"
 
-int hvf_process_events(CPUState *);
-bool hvf_inject_interrupts(CPUState *);
-void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
+int hvf_process_events(CPUState *cs);
+bool hvf_inject_interrupts(CPUState *cs);
+void hvf_set_segment(CPUState *cs, struct vmx_segment *vmx_seg,
  SegmentCache *qseg, bool is_tr);
 void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg);
-void hvf_put_xsave(CPUState *cpu_state);
-void hvf_put_msrs(CPUState *cpu_state);
-void hvf_get_xsave(CPUState *cpu_state);
-void hvf_get_msrs(CPUState *cpu_state);
-void vmx_clear_int_window_exiting(CPUState *cpu);
-void vmx_update_tpr(CPUState *cpu);
+void hvf_put_xsave(CPUState *cs);
+void hvf_put_msrs(CPUState *cs);
+void hvf_get_xsave(CPUState *cs);
+void hvf_get_msrs(CPUState *cs);
+void vmx_clear_int_window_exiting(CPUState *cs);
+void vmx_update_tpr(CPUState *cs);
 #endif
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 69d4fb8cf5..92dfd26a01 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -32,14 +32,14 @@
 #include 
 #include 
 
-void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
+void hvf_set_segment(CPUState *cs, struct vmx_segment *vmx_seg,
  SegmentCache *qseg, bool is_tr)
 {
 vmx_seg->sel = qseg->selector;
 vmx_seg->base = qseg->base;
 vmx_seg->limit = qseg->limit;
 
-if (!qseg->selector && !x86_is_real(cpu) && !is_tr) {
+if (!qseg->selector && !x86_is_real(cs) && !is_tr) {
 /* the TR register is usable after processor reset despite
  * having a null selector */
 vmx_seg->ar = 1 << 16;
@@ -70,279 +70,279 @@ void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg)
   (((vmx_seg->ar >> 15) & 1) << DESC_G_SHIFT);
 }
 
-void hvf_put_xsave(CPUState *cpu_state)
+void hvf_put_xsave(CPUState *cs)
 {
-void *xsave = X86_CPU(cpu_state)->env.xsave_buf;
-uint32_t xsave_len = X86_CPU(cpu_state)->env.xsave_buf_len;
+void *xsave = X86_CPU(cs)->env.xsave_buf;
+uint32_t xsave_len = X86_CPU(cs)->env.xsave_buf_len;
 
-x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave, xsave_len);
+x86_cpu_xsave_all_areas(X86_CPU(cs), xsave, xsave_len);
 
-if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, xsave, xsave_len)) {
+if (hv_vcpu_write_fpstate(cs->hvf->fd, xsave, xsave_len)) {
 abort();
 }
 }
 
-static void hvf_put_segments(CPUState *cpu_state)
+static void hvf_put_segments(CPUState *cs)
 {
-CPUX86State *env = &X86_CPU(cpu_state)->env;
+CPUX86State *env = &X86_CPU(cs)->env;
 struct vmx_segment seg;
 
-wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
-wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
+wvmcs(cs->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
+wvmcs(cs->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
 
-wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
-wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
+wvmcs(cs->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
+wvmcs(cs->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
 
-/* wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */
-wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3, env->cr[3]);
-vmx_update_tpr(cpu_state);
-wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer);
+/* wvmcs(cs->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */
+wvmcs(cs->hvf->fd, VMCS_GUEST_CR3, env->cr[3]);
+vmx_update_tpr(cs);
+wvmcs(cs->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer);
 
-macvm_set_cr4(cpu_state->hvf->fd, env->cr[4]);
-macvm_set_cr0(cpu_state->hvf->fd, env->cr[0]);
+macvm_set_cr4(cs->hvf->fd, env->cr[4]);
+macvm_set_cr0(cs->hvf->fd, env->cr[0]);
 
-hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
-vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
+hvf_set_segment(cs, &seg, &env->segs[R_CS], false);
+vmx_write_segment_descriptor(cs, &seg, R_CS);
 
-hvf_set_segment(cpu_state, &seg, &env->segs[R_DS], false);
-vmx_write_segment_descriptor(cpu_state, &seg, R_DS);
+hvf_set_segment(cs, &seg, &env->segs[R_DS], false);
+vmx_write_segment_descriptor(cs, &seg, R_DS);
 
-hvf_set_segment(cpu_state, &seg, &env->segs[R_ES], false);
-vmx_write_segment_descriptor(cpu_state, &seg, R_ES);
+hvf_set_segment(cs, &seg, &env->segs[R_ES], false);

[PATCH v3 07/16] accel: Rename HAX 'struct hax_vcpu_state' -> AccelCPUState

2023-06-24 Thread Philippe Mathieu-Daudé
We want all accelerators to share the same opaque pointer in
CPUState. Start with the HAX context, renaming its forward
declarated structure 'hax_vcpu_state' as 'AccelCPUState'.
Document the CPUState field. Directly use the typedef.

Remove the amusing but now unnecessary casts in NVMM / WHPX.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/hw/core/cpu.h |  5 ++---
 include/qemu/typedefs.h   |  1 +
 target/i386/hax/hax-i386.h|  9 +
 target/i386/hax/hax-all.c | 16 
 target/i386/hax/hax-posix.c   |  4 ++--
 target/i386/hax/hax-windows.c |  4 ++--
 target/i386/nvmm/nvmm-all.c   |  2 +-
 target/i386/whpx/whpx-all.c   |  2 +-
 8 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 84b5a866e7..a7fae8571e 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -240,7 +240,6 @@ typedef struct SavedIOTLB {
 struct KVMState;
 struct kvm_run;
 
-struct hax_vcpu_state;
 struct hvf_vcpu_state;
 
 /* work queue */
@@ -308,6 +307,7 @@ struct qemu_work_item;
  * @next_cpu: Next CPU sharing TB cache.
  * @opaque: User data.
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
+ * @accel: Pointer to accelerator specific state.
  * @kvm_fd: vCPU file descriptor for KVM.
  * @work_mutex: Lock to prevent multiple access to @work_list.
  * @work_list: List of pending asynchronous work.
@@ -422,6 +422,7 @@ struct CPUState {
 uint32_t can_do_io;
 int32_t exception_index;
 
+AccelCPUState *accel;
 /* shared by kvm, hax and hvf */
 bool vcpu_dirty;
 
@@ -441,8 +442,6 @@ struct CPUState {
 /* Used for user-only emulation of prctl(PR_SET_UNALIGN). */
 bool prctl_unalign_sigbus;
 
-struct hax_vcpu_state *accel;
-
 struct hvf_vcpu_state *hvf;
 
 /* track IOMMUs whose translations we've cached in the TCG TLB */
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index 8c1840bfc1..834b0e47a0 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -21,6 +21,7 @@
  * Incomplete struct types
  * Please keep this list in case-insensitive alphabetical order.
  */
+typedef struct AccelCPUState AccelCPUState;
 typedef struct AccelState AccelState;
 typedef struct AdapterInfo AdapterInfo;
 typedef struct AddressSpace AddressSpace;
diff --git a/target/i386/hax/hax-i386.h b/target/i386/hax/hax-i386.h
index 409ebdb4af..4372ee596d 100644
--- a/target/i386/hax/hax-i386.h
+++ b/target/i386/hax/hax-i386.h
@@ -25,7 +25,8 @@ typedef HANDLE hax_fd;
 #endif
 
 extern struct hax_state hax_global;
-struct hax_vcpu_state {
+
+struct AccelCPUState {
 hax_fd fd;
 int vcpu_id;
 struct hax_tunnel *tunnel;
@@ -46,7 +47,7 @@ struct hax_vm {
 hax_fd fd;
 int id;
 int numvcpus;
-struct hax_vcpu_state **vcpus;
+AccelCPUState **vcpus;
 };
 
 /* Functions exported to host specific mode */
@@ -57,7 +58,7 @@ int valid_hax_tunnel_size(uint16_t size);
 int hax_mod_version(struct hax_state *hax, struct hax_module_version *version);
 int hax_inject_interrupt(CPUArchState *env, int vector);
 struct hax_vm *hax_vm_create(struct hax_state *hax, int max_cpus);
-int hax_vcpu_run(struct hax_vcpu_state *vcpu);
+int hax_vcpu_run(AccelCPUState *vcpu);
 int hax_vcpu_create(int id);
 void hax_kick_vcpu_thread(CPUState *cpu);
 
@@ -76,7 +77,7 @@ int hax_host_create_vm(struct hax_state *hax, int *vm_id);
 hax_fd hax_host_open_vm(struct hax_state *hax, int vm_id);
 int hax_host_create_vcpu(hax_fd vm_fd, int vcpuid);
 hax_fd hax_host_open_vcpu(int vmid, int vcpuid);
-int hax_host_setup_vcpu_channel(struct hax_vcpu_state *vcpu);
+int hax_host_setup_vcpu_channel(AccelCPUState *vcpu);
 hax_fd hax_mod_open(void);
 void hax_memory_init(void);
 
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 3865ff9419..9d9011cc38 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -62,7 +62,7 @@ int valid_hax_tunnel_size(uint16_t size)
 
 hax_fd hax_vcpu_get_fd(CPUArchState *env)
 {
-struct hax_vcpu_state *vcpu = env_cpu(env)->accel;
+AccelCPUState *vcpu = env_cpu(env)->accel;
 if (!vcpu) {
 return HAX_INVALID_FD;
 }
@@ -136,7 +136,7 @@ static int hax_version_support(struct hax_state *hax)
 
 int hax_vcpu_create(int id)
 {
-struct hax_vcpu_state *vcpu = NULL;
+AccelCPUState *vcpu = NULL;
 int ret;
 
 if (!hax_global.vm) {
@@ -149,7 +149,7 @@ int hax_vcpu_create(int id)
 return 0;
 }
 
-vcpu = g_new0(struct hax_vcpu_state, 1);
+vcpu = g_new0(AccelCPUState, 1);
 
 ret = hax_host_create_vcpu(hax_global.vm->fd, id);
 if (ret) {
@@ -188,7 +188,7 @@ int hax_vcpu_create(int id)
 
 int hax_vcpu_destroy(CPUState *cpu)
 {
-struct hax_vcpu_state *vcpu = cpu->accel;
+AccelCPUState *vcpu = cpu->accel;
 
 if (!hax_global.vm) {
 fprintf(stderr, "vcpu %x destroy failed, vm is null\n", vcpu->vcpu_id);
@@ -263,7 +263,7 @@ struct hax_vm *hax_vm_create(struct 

[PATCH v3 06/16] accel: Rename 'hax_vcpu' as 'accel' in CPUState

2023-06-24 Thread Philippe Mathieu-Daudé
All accelerators will share a single opaque context
in CPUState. Start by renaming 'hax_vcpu' as 'accel'.

Reviewed-by: Richard Henderson 
Signed-off-by: Philippe Mathieu-Daudé 
---
 include/hw/core/cpu.h   |  2 +-
 target/i386/hax/hax-accel-ops.c |  2 +-
 target/i386/hax/hax-all.c   | 18 +-
 target/i386/nvmm/nvmm-all.c |  6 +++---
 target/i386/whpx/whpx-all.c |  6 +++---
 5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 4871ad85f0..84b5a866e7 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -441,7 +441,7 @@ struct CPUState {
 /* Used for user-only emulation of prctl(PR_SET_UNALIGN). */
 bool prctl_unalign_sigbus;
 
-struct hax_vcpu_state *hax_vcpu;
+struct hax_vcpu_state *accel;
 
 struct hvf_vcpu_state *hvf;
 
diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index 0157a628a3..a8512efcd5 100644
--- a/target/i386/hax/hax-accel-ops.c
+++ b/target/i386/hax/hax-accel-ops.c
@@ -71,7 +71,7 @@ static void hax_start_vcpu_thread(CPUState *cpu)
  cpu->cpu_index);
 qemu_thread_create(cpu->thread, thread_name, hax_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
-assert(cpu->hax_vcpu);
+assert(cpu->accel);
 #ifdef _WIN32
 cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 38a4323a3c..3865ff9419 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -62,7 +62,7 @@ int valid_hax_tunnel_size(uint16_t size)
 
 hax_fd hax_vcpu_get_fd(CPUArchState *env)
 {
-struct hax_vcpu_state *vcpu = env_cpu(env)->hax_vcpu;
+struct hax_vcpu_state *vcpu = env_cpu(env)->accel;
 if (!vcpu) {
 return HAX_INVALID_FD;
 }
@@ -188,7 +188,7 @@ int hax_vcpu_create(int id)
 
 int hax_vcpu_destroy(CPUState *cpu)
 {
-struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+struct hax_vcpu_state *vcpu = cpu->accel;
 
 if (!hax_global.vm) {
 fprintf(stderr, "vcpu %x destroy failed, vm is null\n", vcpu->vcpu_id);
@@ -209,7 +209,7 @@ int hax_vcpu_destroy(CPUState *cpu)
 CloseHandle(cpu->hThread);
 #endif
 g_free(vcpu);
-cpu->hax_vcpu = NULL;
+cpu->accel = NULL;
 return 0;
 }
 
@@ -223,7 +223,7 @@ int hax_init_vcpu(CPUState *cpu)
 exit(-1);
 }
 
-cpu->hax_vcpu = hax_global.vm->vcpus[cpu->cpu_index];
+cpu->accel = hax_global.vm->vcpus[cpu->cpu_index];
 cpu->vcpu_dirty = true;
 qemu_register_reset(hax_reset_vcpu_state, cpu->env_ptr);
 
@@ -415,7 +415,7 @@ static int hax_handle_io(CPUArchState *env, uint32_t df, uint16_t port,
 static int hax_vcpu_interrupt(CPUArchState *env)
 {
 CPUState *cpu = env_cpu(env);
-struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+struct hax_vcpu_state *vcpu = cpu->accel;
 struct hax_tunnel *ht = vcpu->tunnel;
 
 /*
@@ -447,7 +447,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
 
 void hax_raise_event(CPUState *cpu)
 {
-struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+struct hax_vcpu_state *vcpu = cpu->accel;
 
 if (!vcpu) {
 return;
@@ -468,7 +468,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
 int ret = 0;
 CPUState *cpu = env_cpu(env);
 X86CPU *x86_cpu = X86_CPU(cpu);
-struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+struct hax_vcpu_state *vcpu = cpu->accel;
 struct hax_tunnel *ht = vcpu->tunnel;
 
 if (!hax_enabled()) {
@@ -1114,8 +1114,8 @@ void hax_reset_vcpu_state(void *opaque)
 {
 CPUState *cpu;
 for (cpu = first_cpu; cpu != NULL; cpu = CPU_NEXT(cpu)) {
-cpu->hax_vcpu->tunnel->user_event_pending = 0;
-cpu->hax_vcpu->tunnel->ready_for_interrupt_injection = 0;
+cpu->accel->tunnel->user_event_pending = 0;
+cpu->accel->tunnel->ready_for_interrupt_injection = 0;
 }
 }
 
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index b75738ee9c..cf4f0af24b 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -52,7 +52,7 @@ static struct qemu_machine qemu_mach;
 static struct qemu_vcpu *
 get_qemu_vcpu(CPUState *cpu)
 {
-return (struct qemu_vcpu *)cpu->hax_vcpu;
+return (struct qemu_vcpu *)cpu->accel;
 }
 
 static struct nvmm_machine *
@@ -995,7 +995,7 @@ nvmm_init_vcpu(CPUState *cpu)
 }
 
 cpu->vcpu_dirty = true;
-cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu;
+cpu->accel = (struct hax_vcpu_state *)qcpu;
 
 return 0;
 }
@@ -1030,7 +1030,7 @@ nvmm_destroy_vcpu(CPUState *cpu)
 struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
 
 nvmm_vcpu_destroy(mach, >vcpu);
-g_free(cpu->hax_vcpu);
+g_free(cpu->accel);
 }
 
 /* -- 
*/
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 52af81683c..d1ad6f156a 100644
--- a/target/i386/whpx/whpx-all.c
+++ 

[PATCH v3 05/16] accel: Destroy HAX vCPU threads once done

2023-06-24 Thread Philippe Mathieu-Daudé
When the vCPU thread has finished its processing, destroy
it and signal its destruction to the generic vCPU
management layer.

Add a sanity check for the vCPU accelerator context.

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 target/i386/hax/hax-accel-ops.c | 3 +++
 target/i386/hax/hax-all.c   | 1 +
 2 files changed, 4 insertions(+)

diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index 18114fe34d..0157a628a3 100644
--- a/target/i386/hax/hax-accel-ops.c
+++ b/target/i386/hax/hax-accel-ops.c
@@ -53,6 +53,8 @@ static void *hax_cpu_thread_fn(void *arg)
 
 qemu_wait_io_event(cpu);
 } while (!cpu->unplug || cpu_can_run(cpu));
+hax_vcpu_destroy(cpu);
+cpu_thread_signal_destroyed(cpu);
 rcu_unregister_thread();
 return NULL;
 }
@@ -69,6 +71,7 @@ static void hax_start_vcpu_thread(CPUState *cpu)
  cpu->cpu_index);
 qemu_thread_create(cpu->thread, thread_name, hax_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
+assert(cpu->hax_vcpu);
 #ifdef _WIN32
 cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index a2321a1eff..38a4323a3c 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -209,6 +209,7 @@ int hax_vcpu_destroy(CPUState *cpu)
 CloseHandle(cpu->hThread);
 #endif
 g_free(vcpu);
+cpu->hax_vcpu = NULL;
 return 0;
 }
 
-- 
2.38.1
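
A rough standalone model of the thread lifecycle above (pthreads stand in
for qemu-thread; every name below is illustrative, not QEMU API):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins only; none of this is QEMU code. */
struct vcpu {
    void *accel;            /* per-vCPU accelerator context */
    atomic_bool unplug;     /* set by the management layer */
    bool destroyed;         /* observed after pthread_join() */
};

static void vcpu_destroy(struct vcpu *cpu)
{
    free(cpu->accel);
    cpu->accel = NULL;      /* mirrors clearing the context on destroy */
}

static void *vcpu_thread_fn(void *arg)
{
    struct vcpu *cpu = arg;

    do {
        /* ... run the vCPU and service exits here ... */
    } while (!atomic_load(&cpu->unplug));

    /* The point of the patch: tear down and signal once the loop ends. */
    vcpu_destroy(cpu);
    cpu->destroyed = true;  /* models cpu_thread_signal_destroyed() */
    return NULL;
}

int main(void)
{
    struct vcpu cpu = { .accel = malloc(16) };
    pthread_t t;

    pthread_create(&t, NULL, vcpu_thread_fn, &cpu);
    atomic_store(&cpu.unplug, true);    /* request teardown */
    pthread_join(&t, NULL);             /* join synchronizes 'destroyed' */
    printf("destroyed: %d\n", cpu.destroyed);
    return 0;
}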




[PATCH v3 04/16] accel: Fix a leak on Windows HAX

2023-06-24 Thread Philippe Mathieu-Daudé
hThread is only used on the error path in hax_kick_vcpu_thread().

Fixes: b0cb0a66d6 ("Plumb the HAXM-based hardware acceleration support")
Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 target/i386/hax/hax-all.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 3e5992a63b..a2321a1eff 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -205,6 +205,9 @@ int hax_vcpu_destroy(CPUState *cpu)
  */
 hax_close_fd(vcpu->fd);
 hax_global.vm->vcpus[vcpu->vcpu_id] = NULL;
+#ifdef _WIN32
+CloseHandle(cpu->hThread);
+#endif
 g_free(vcpu);
 return 0;
 }
-- 
2.38.1




[PATCH v3 03/16] accel: Remove unused hThread variable on TCG/WHPX

2023-06-24 Thread Philippe Mathieu-Daudé
On Windows hosts, cpu->hThread is assigned but never accessed:
remove it.

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
---
 accel/tcg/tcg-accel-ops-mttcg.c   | 4 
 accel/tcg/tcg-accel-ops-rr.c  | 3 ---
 target/i386/whpx/whpx-accel-ops.c | 3 ---
 3 files changed, 10 deletions(-)

diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index b320ff0037..b276262007 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -152,8 +152,4 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
 
 qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
-
-#ifdef _WIN32
-cpu->hThread = qemu_thread_get_handle(cpu->thread);
-#endif
 }
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index 23e4d0f452..2d523289a8 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -329,9 +329,6 @@ void rr_start_vcpu_thread(CPUState *cpu)
 
 single_tcg_halt_cond = cpu->halt_cond;
 single_tcg_cpu_thread = cpu->thread;
-#ifdef _WIN32
-cpu->hThread = qemu_thread_get_handle(cpu->thread);
-#endif
 } else {
 /* we share the thread */
 cpu->thread = single_tcg_cpu_thread;
diff --git a/target/i386/whpx/whpx-accel-ops.c b/target/i386/whpx/whpx-accel-ops.c
index e8dc4b3a47..67cad86720 100644
--- a/target/i386/whpx/whpx-accel-ops.c
+++ b/target/i386/whpx/whpx-accel-ops.c
@@ -71,9 +71,6 @@ static void whpx_start_vcpu_thread(CPUState *cpu)
  cpu->cpu_index);
 qemu_thread_create(cpu->thread, thread_name, whpx_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
-#ifdef _WIN32
-cpu->hThread = qemu_thread_get_handle(cpu->thread);
-#endif
 }
 
 static void whpx_kick_vcpu_thread(CPUState *cpu)
-- 
2.38.1




[PATCH v3 02/16] accel: Document generic accelerator headers

2023-06-24 Thread Philippe Mathieu-Daudé
These headers are meant to be included by any file to check
the availability of accelerators, and thus are not
accelerator-specific.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Richard Henderson 
---
 include/sysemu/hax.h  | 2 ++
 include/sysemu/kvm.h  | 2 ++
 include/sysemu/nvmm.h | 2 ++
 include/sysemu/tcg.h  | 2 ++
 include/sysemu/whpx.h | 2 ++
 include/sysemu/xen.h  | 2 ++
 6 files changed, 12 insertions(+)

diff --git a/include/sysemu/hax.h b/include/sysemu/hax.h
index bf8f99a824..80fc716f80 100644
--- a/include/sysemu/hax.h
+++ b/include/sysemu/hax.h
@@ -19,6 +19,8 @@
  *
  */
 
+/* header to be included in non-HAX-specific code */
+
 #ifndef QEMU_HAX_H
 #define QEMU_HAX_H
 
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index 88f5ccfbce..7902acdfd9 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -11,6 +11,8 @@
  *
  */
 
+/* header to be included in non-KVM-specific code */
+
 #ifndef QEMU_KVM_H
 #define QEMU_KVM_H
 
diff --git a/include/sysemu/nvmm.h b/include/sysemu/nvmm.h
index 833670fccb..be7bc9a62d 100644
--- a/include/sysemu/nvmm.h
+++ b/include/sysemu/nvmm.h
@@ -7,6 +7,8 @@
  * See the COPYING file in the top-level directory.
  */
 
+/* header to be included in non-NVMM-specific code */
+
 #ifndef QEMU_NVMM_H
 #define QEMU_NVMM_H
 
diff --git a/include/sysemu/tcg.h b/include/sysemu/tcg.h
index 53352450ff..5e2ca9aab3 100644
--- a/include/sysemu/tcg.h
+++ b/include/sysemu/tcg.h
@@ -5,6 +5,8 @@
  * See the COPYING file in the top-level directory.
  */
 
+/* header to be included in non-TCG-specific code */
+
 #ifndef SYSEMU_TCG_H
 #define SYSEMU_TCG_H
 
diff --git a/include/sysemu/whpx.h b/include/sysemu/whpx.h
index 2889fa2278..781ca5b2b6 100644
--- a/include/sysemu/whpx.h
+++ b/include/sysemu/whpx.h
@@ -10,6 +10,8 @@
  *
  */
 
+/* header to be included in non-WHPX-specific code */
+
 #ifndef QEMU_WHPX_H
 #define QEMU_WHPX_H
 
diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
index 0ca25697e4..bc13ad5692 100644
--- a/include/sysemu/xen.h
+++ b/include/sysemu/xen.h
@@ -5,6 +5,8 @@
  * See the COPYING file in the top-level directory.
  */
 
+/* header to be included in non-Xen-specific code */
+
 #ifndef SYSEMU_XEN_H
 #define SYSEMU_XEN_H
 
-- 
2.38.1
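
As a usage illustration (a sketch, not code from this series): a file that
is not accelerator-specific includes these headers only for the
availability predicates they provide, e.g.:

/* Hypothetical QEMU code; kvm_enabled()/hax_enabled()/tcg_enabled() are
 * the predicates these headers exist to expose. */
#include "sysemu/hax.h"
#include "sysemu/kvm.h"
#include "sysemu/tcg.h"

static const char *accel_name(void)
{
    if (kvm_enabled()) {
        return "KVM";
    } else if (hax_enabled()) {
        return "HAX";
    } else if (tcg_enabled()) {
        return "TCG";
    }
    return "none";
}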




[PATCH v3 01/16] MAINTAINERS: Update Roman Bolshakov email address

2023-06-24 Thread Philippe Mathieu-Daudé
r.bolsha...@yadro.com is bouncing: Update Roman's email address
using one found somewhere on the Internet; this way he can Ack-by.

(Reorder Taylor's line to keep the section sorted alphabetically).

Signed-off-by: Philippe Mathieu-Daudé 
---
 MAINTAINERS | 4 ++--
 .mailmap| 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7f323cd2eb..1da135b0c8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -497,14 +497,14 @@ F: target/arm/hvf/
 
 X86 HVF CPUs
 M: Cameron Esfahani 
-M: Roman Bolshakov 
+M: Roman Bolshakov 
 W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: target/i386/hvf/
 
 HVF
 M: Cameron Esfahani 
-M: Roman Bolshakov 
+M: Roman Bolshakov 
 W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: accel/hvf/
diff --git a/.mailmap b/.mailmap
index b57da4827e..64ef9f4de6 100644
--- a/.mailmap
+++ b/.mailmap
@@ -76,9 +76,10 @@ Paul Burton  
 Philippe Mathieu-Daudé  
 Philippe Mathieu-Daudé  
 Philippe Mathieu-Daudé  
+Roman Bolshakov  
 Stefan Brankovic  
-Yongbok Kim  
 Taylor Simpson  
+Yongbok Kim  
 
 # Also list preferred name forms where people have changed their
 # git author config, or had utf8/latin1 encoding issues.
-- 
2.38.1




[PATCH v3 00/16] accel: Share CPUState accel context (HAX/NVMM/WHPX/HVF)

2023-06-24 Thread Philippe Mathieu-Daudé
This series is part of the single binary effort.

All accelerators will share their per-vCPU context in
an opaque 'accel' pointer within the CPUState.

First handle HAX/NVMM/WHPX/HVF. KVM and TCG will follow
as two different (bigger) follow-up series.

Except for HVF on Intel, everything has been (cross-)build tested.

I plan to send the PR myself.

Since v2:
- Addressed rth's review comments
- Added rth's R-b tag

Since v1:
- Addressed rth's review comments
- Added rth's R-b tag
- Converted HVF intel (untested)
- Rebased

Philippe Mathieu-Daudé (16):
  MAINTAINERS: Update Roman Bolshakov email address
  accel: Document generic accelerator headers
  accel: Remove unused hThread variable on TCG/WHPX
  accel: Fix a leak on Windows HAX
  accel: Destroy HAX vCPU threads once done
  accel: Rename 'hax_vcpu' as 'accel' in CPUState
  accel: Rename HAX 'struct hax_vcpu_state' -> AccelCPUState
  accel: Move HAX hThread to accelerator context
  accel: Remove NVMM unreachable error path
  accel: Rename NVMM 'struct qemu_vcpu' -> AccelCPUState
  accel: Inline NVMM get_qemu_vcpu()
  accel: Remove WHPX unreachable error path
  accel: Rename WHPX 'struct whpx_vcpu' -> AccelCPUState
  accel: Inline WHPX get_whpx_vcpu()
  accel: Rename 'cpu_state' -> 'cs'
  accel: Rename HVF 'struct hvf_vcpu_state' -> AccelCPUState

 MAINTAINERS   |   4 +-
 include/hw/core/cpu.h |  10 +-
 include/qemu/typedefs.h   |   1 +
 include/sysemu/hax.h  |   2 +
 include/sysemu/hvf_int.h  |   2 +-
 include/sysemu/kvm.h  |   2 +
 include/sysemu/nvmm.h |   2 +
 include/sysemu/tcg.h  |   2 +
 include/sysemu/whpx.h |   2 +
 include/sysemu/xen.h  |   2 +
 target/i386/hax/hax-i386.h|  12 +-
 target/i386/hvf/vmx.h |  22 +-
 target/i386/hvf/x86hvf.h  |  18 +-
 accel/hvf/hvf-accel-ops.c |  18 +-
 accel/tcg/tcg-accel-ops-mttcg.c   |   4 -
 accel/tcg/tcg-accel-ops-rr.c  |   3 -
 target/arm/hvf/hvf.c  | 108 -
 target/i386/hax/hax-accel-ops.c   |   5 +-
 target/i386/hax/hax-all.c |  26 ++-
 target/i386/hax/hax-posix.c   |   4 +-
 target/i386/hax/hax-windows.c |   6 +-
 target/i386/hvf/hvf.c | 104 -
 target/i386/hvf/x86.c |  28 +--
 target/i386/hvf/x86_descr.c   |  26 +--
 target/i386/hvf/x86_emu.c |  62 ++---
 target/i386/hvf/x86_mmu.c |   4 +-
 target/i386/hvf/x86_task.c|  10 +-
 target/i386/hvf/x86hvf.c  | 372 +++---
 target/i386/nvmm/nvmm-all.c   |  42 ++--
 target/i386/whpx/whpx-accel-ops.c |   3 -
 target/i386/whpx/whpx-all.c   |  45 ++--
 .mailmap  |   3 +-
 32 files changed, 469 insertions(+), 485 deletions(-)

-- 
2.38.1




Re: [PATCH v2 07/16] accel: Rename HAX 'struct hax_vcpu_state' -> AccelCPUState

2023-06-24 Thread Philippe Mathieu-Daudé

On 22/6/23 19:46, Richard Henderson wrote:

On 6/22/23 18:08, Philippe Mathieu-Daudé wrote:

+ struct AccelvCPUState *accel;

...

+typedef struct AccelCPUState {
 hax_fd fd;
 int vcpu_id;
 struct hax_tunnel *tunnel;
 unsigned char *iobuf;
-};
+} hax_vcpu_state;



Discussed face to face, but for the record:

Put the typedef in qemu/typedefs.h, so that we can use it immediately in 
core/cpu.h and not need to re-declare it in each accelerator.


Drop the hax_vcpu_state typedef and just use AccelCPUState (since you have
to change all of those lines anyway).  Which will eventually allow



+++ b/target/i386/whpx/whpx-all.c
@@ -2258,7 +2258,7 @@ int whpx_init_vcpu(CPUState *cpu)

 vcpu->interruptable = true;
 cpu->vcpu_dirty = true;
-    cpu->accel = (struct hax_vcpu_state *)vcpu;
+    cpu->accel = (struct AccelCPUState *)vcpu;


this cast to go away.


Indeed, thanks :)
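
For the record, the shape being asked for can be modelled standalone like
this (the struct members follow the HAX patch; everything else is
scaffolding for illustration only):

#include <stdlib.h>

/* qemu/typedefs.h: one forward typedef shared by all accelerators. */
typedef struct AccelCPUState AccelCPUState;

/* hw/core/cpu.h: CPUState only carries the opaque pointer. */
struct CPUState {
    AccelCPUState *accel;
};

/* Each accelerator defines the struct for itself, e.g. HAX: */
struct AccelCPUState {
    int fd;
    int vcpu_id;
};

int main(void)
{
    struct CPUState cpu;
    AccelCPUState *vcpu = calloc(1, sizeof(*vcpu));

    cpu.accel = vcpu;   /* no (struct hax_vcpu_state *) cast needed */
    free(cpu.accel);
    return 0;
}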




[linux-linus test] 181573: regressions - FAIL

2023-06-24 Thread osstest service owner
flight 181573 linux-linus real [real]
flight 181579 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181573/
http://logs.test-lab.xenproject.org/osstest/logs/181579/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl   8 xen-boot fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 180278
 test-armhf-armhf-libvirt  8 xen-boot fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot fail  like 180278
 test-armhf-armhf-examine  8 reboot   fail  like 180278
 test-armhf-armhf-xl-rtds  8 xen-boot fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-bootfail like 180278
 test-armhf-armhf-xl-vhd   8 xen-boot fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 180278
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-checkfail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass

version targeted for testing:
 linux a92b7d26c743b9dc06d520f863d624e94978a1d9
 baseline version:
 linux 6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   68 days
Failing since180281  2023-04-17 06:24:36 Z   68 days  128 attempts
Testing same since   181573  2023-06-24 02:11:10 Z0 days1 attempts


2770 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 

Re: [PATCH] Updates to Xen hypercall preemption

2023-06-24 Thread Andy Lutomirski
On Thu, Jun 22, 2023, at 10:20 AM, Juergen Gross wrote:
> On 22.06.23 18:39, Andy Lutomirski wrote:
>> On Thu, Jun 22, 2023, at 3:33 AM, Juergen Gross wrote:
>>> On 22.06.23 10:26, Peter Zijlstra wrote:
 On Thu, Jun 22, 2023 at 07:22:53AM +0200, Juergen Gross wrote:

> The hypercalls we are talking of are synchronous ones. They are running
> in the context of the vcpu doing the call (like a syscall from userland is
> running in the process context).

 (so time actually passes from the guest's pov?)
>>>
>>> Correct.
>>>

> The hypervisor will return to guest context from time to time by modifying
> the registers such that the guest will do the hypercall again with 
> different
> input values for the hypervisor, resulting in a proper continuation of the
> hypercall processing.

 Eeeuw.. that's pretty terrible. And changing this isn't in the cards,
 like at all?
>>>
>>> In the long run this should be possible, but not for already existing Xen
>>> versions.
>>>

 That is, why isn't this whole thing written like:

for (;;) {
ret = hypercall(foo);
if (ret == -EAGAIN) {
cond_resched();
continue;
}
break;
}
>>>
>>> The hypervisor doesn't return -EAGAIN for hysterical reasons.
>>>
>>> This would be one of the options to change the interface. OTOH there are 
>>> cases
>>> where already existing hypercalls need to be modified in the hypervisor to 
>>> do
>>> preemption in the middle due to e.g. security reasons (avoiding cpu hogging 
>>> in
>>> special cases).
>>>
>>> Additionally some of the hypercalls being subject to preemption are allowed 
>>> in
>>> unprivileged guests, too. Those are mostly hypercalls allowed for PV guests
>>> only, but some are usable by all guests.
>>>

> It is an awful interface and I agree that switching to full preemption in
> dom0 seems to be the route which we should try to take.

 Well, I would very strongly suggest the route to take is to scrap the
 whole thing and invest in doing something saner so we don't have to jump
 through hoops like this.

 This is quite possibly the worst possible interface for this Xen could
 have come up with -- awards material for sure.
>>>
>>> Yes.
>>>

> The downside would be that some workloads might see worse performance
> due to backend I/O handling might get preempted.

 Is that an actual concern? Mark this a legacy interface and anybody who
 wants to get away from it updates.
>>>
>>> It isn't that easy. See above.
>>>

> Just thinking - can full preemption be enabled per process?

 Nope, that's a system wide thing. Preemption is something that's driven
 by the requirements of the tasks that preempt, not something by the
 tasks that get preempted.
>>>
>>> Depends. If a task in a non-preempt system could switch itself to be
>>> preemptable, we could do so around hypercalls without compromising the
>>> general preemption setting. Disabling preemption in a preemptable system
>>> should continue to be possible for short code paths only, of course.
>>>
 Andy's idea of having that thing intercepted as an exception (EXTABLE
 like) and relocating the IP to a place that does cond_resched() before
 going back is an option.. gross, but possibly better, dunno.

 Quite the mess indeed :/
>>>
>>> Yeah.
>> 
>> Having one implementation of interrupt handlers that schedule when they 
>> interrupt kernel code (the normal full preempt path) is one thing.  Having 
>> two of them (full preempt and super-special-Xen) is IMO quite a bit worse.  
>> Especially since no one tests the latter very well.
>> 
>> Having a horrible Xen-specific extable-like thingy seems honestly rather 
>> less bad.  It could even have a little self-contained test that runs at 
>> boot, I bet.
>> 
>> But I'll bite on the performance impact issue.  What, exactly, is wrong with 
>> full preemption?  Full preemption has two sources of overhead, I think.  One 
>> is a bit of bookkeeping.  The other is the overhead inherent in actually 
>> rescheduling -- context switch cost, losing things from cache, etc.
>> 
>> The bookkeeping part should have quite low overhead.  The scheduling part 
>> sounds like it might just need some scheduler tuning if it's really a 
>> problem.
>> 
>> In any case, for backend IO, full preemption sounds like it should be a win, 
>> not a loss.  If I'm asking dom0 to do backend IO for me, I don't want it 
>> delayed because dom0 was busy doing something else boring.  IO is faster 
>> when the latency between requesting it and actually submitting it to 
>> hardware is lower.
>
> Maybe. I was assuming that full preemption would result in more context
> switches, especially in case many guests are hammering dom0 with I/Os.
> This means that more time is spent with switching 
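
The continuation scheme described above can be modelled roughly as follows
(a userspace sketch, not Xen code; in reality the hypervisor rewinds the
guest IP so the same hypercall instruction re-executes, rather than the
guest looping explicitly):

#include <stdio.h>

struct op { unsigned long done, total; };

/* Model of a preemptible hypercall: do a bounded amount of work per
 * entry, then leave the argument updated for the continuation. */
static int fake_hypercall(struct op *op)
{
    unsigned long budget = 4;         /* work units before "preempting" */

    while (op->done < op->total && budget--) {
        op->done++;                   /* one unit of hypervisor work */
    }
    return op->done < op->total;      /* nonzero: re-issue to continue */
}

int main(void)
{
    struct op op = { 0, 10 };

    /* Stands in for the rewound hypercall instruction re-executing. */
    while (fake_hypercall(&op)) {
        /* a preemption/scheduling point would live here */
    }
    printf("%lu/%lu units processed\n", op.done, op.total);
    return 0;
}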

Re: Asking for help to debug xen efi on Kunpeng machine

2023-06-24 Thread Jiatong Shen
Hello Julien,

   Thank you very much for your reply. Can you teach me how to find the
relationship between MBI-gen and devices?
I am not sure how to find out which devices are backed by the mbi-gen.

Best Regards,
Jiatong Shen

On Sat, Jun 24, 2023 at 4:24 PM Julien Grall  wrote:

> Hi,
>
> On 20/06/2023 08:09, Jiatong Shen wrote:
> > Hello Julien,
> >
> > Sorry for the delay.. I obtained the full xen log and attached it in
> the
> > mail. Please take a look when you are available. Thank you very much
>
> Thanks for sharing the logs. The following lines are interesting:
>
> [1.081905] Hisilicon MBIGEN-V2 HISI0152:00: Failed to create mbi-gen
> irqdomain
> [1.082107] Hisilicon MBIGEN-V2 HISI0152:01: Failed to create mbi-gen
> irqdomain
> [1.082204] Hisilicon MBIGEN-V2 HISI0152:02: Failed to create mbi-gen
> irqdomain
> [1.082294] Hisilicon MBIGEN-V2 HISI0152:03: Failed to create mbi-gen
> irqdomain
> [1.082381] Hisilicon MBIGEN-V2 HISI0152:04: Failed to create mbi-gen
> irqdomain
> [1.082466] Hisilicon MBIGEN-V2 HISI0152:05: Failed to create mbi-gen
> irqdomain
>
> Looking at a Hisilicon Device-Tree, this is an interrupt controller
> behind the GICv3 ITS. You will need to rebuild Xen with CONFIG_HAS_ITS=y.
>
> Also, can you confirm which devices are behind the MBI-Gen? If this is
> only PCI devices, then you are probably fine to give the controllers to
> dom0. But for PCI passthrough, you will most likely need to implement
> a driver for it in Xen.
>
> Cheers,
>
> --
> Julien Grall
>


-- 

Best Regards,

Jiatong Shen


Re: [PATCH 1/1] doc: clarify intended usage of ~/control/ xenstore path

2023-06-24 Thread Julien Grall

Hi Yann,

Adding Juergen.

On 31/05/2023 11:35, Yann Dirson wrote:

Signed-off-by: Yann Dirson 


Reviewed-by: Julien Grall 

Cheers,


---
  docs/misc/xenstore-paths.pandoc | 29 +
  1 file changed, 29 insertions(+)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index f07ef90f63..5501033893 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -432,6 +432,35 @@ by udev ("0") or will be run by the toolstack directly 
("1").
  
  ### Platform Feature and Control Paths
  
+#### ~/control = "" []

+
+Directory to hold feature and control paths.  This directory is not
+guest-writable; only the toolstack is allowed to create new child
+nodes under it.
+
+Children of this node can have one of several types:
+
+* platform features: using name pattern `platform-feature-*`, they may
+  be set by the toolstack to inform the guest, and are not writable by
+  the guest.
+
+* guest features: using name pattern `feature-*`, they may be created
+  by the toolstack with an empty value (`""`), should be made writable
+  by the guest, which can then advertise to the toolstack its
+  (non-)usage of the feature with values `"0"` and `"1"` respectively.
+  The lack of update by the guest can be interpreted by the toolstack
+  as the lack of supporting software (PV driver, guest agent, ...) in
+  the guest.
+
+* control nodes: using any name not matching the above pattern, they
+  are used by the toolstack or by the guest to signal a specific
+  condition to the other end, which is expected to watch it to react
+  to changes.
+
+Note: the presence of a control node in itself advertises the
+underlying toolstack feature; it is not necessary to add an extra
+platform-feature for such cases.
+
 #### ~/control/sysrq = (""|COMMAND) [w]
  
  This is the PV SysRq control node. A toolstack can write a single character


--
Julien Grall
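
As an illustration of the feature-* handshake described above, a guest-side
sketch using libxenstore (the feature name is made up, error handling is
trimmed, and relative paths are assumed to resolve against the domain's
home path):

#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xs = xs_open(0);
    const char *node = "control/feature-example";   /* hypothetical name */
    unsigned int len;
    char *val;

    if (!xs) {
        return 1;
    }
    /* The toolstack pre-created the node with value ""; the guest
     * advertises (non-)usage by writing "1" or "0". */
    val = xs_read(xs, XBT_NULL, node, &len);
    if (val) {
        xs_write(xs, XBT_NULL, node, "1", 1);
        free(val);
    }
    xs_close(xs);
    return 0;
}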



Re: [PATCH] libelf: make L1_MFN_VALID note known

2023-06-24 Thread Julien Grall

Hi Jan,

On 17/05/2023 15:19, Jan Beulich wrote:

We still don't use it (in the tool stack), and its values (plural) also
aren't fetched correctly, but it is odd to continue to see the
hypervisor log "ELF: note: unknown (0xd)" when loading a Linux Dom0.

Signed-off-by: Jan Beulich 


Acked-by: Julien Grall 

Cheers,

--
Julien Grall



Design session notes: Committers workflow: move to Gitlab

2023-06-24 Thread Marek Marczykowski-Górecki
Stefano: 2min summary: gitlab as CI infrastructure, not as code hosting, tickets
etc;
  we have several improvements for gitlab CI, including tests on hw
  there are a bunch of build jobs, and also some run tests, most on qemu, but
some on hw
  I'd like to give committers and other notable community members a way to
trigger a pipeline - it's as easy as git push to your repository
Julien: everyone can push, how is it prioritized?
Stefano: unfortunately we don't have prioritization, but increasing capacity is
easy
  everyone can have a personal repo on gitlab
  but also: it would be nice to gate pushes to staging by a gitlab pipeline
Marek: isn't the purpose of staging to be a pre-test master copy?
George: staging is a fast-forward branch, cannot be rewound
Stefano: goal is to not allow bad commits even in staging
  committers would push to somewhere on gitlab and only then would it go
to staging on xenbits
  later: use merge request workflow:
  1. push to personal branch, open MR (git push -o ...)
  2. if pipeline passes, it can be merged to staging fast-forward
  3. 

Julien: maybe let osstest pull from gitlab?
Stefano: staging on xenbits is useful for legacy reasons
Marek: I have a script to push and pull stuff around in reaction to webhooks
Andrew: there is also stuff on github - FreeBSD testing, coverity testing, 
codeql code analyzer; generally github actions are nice
  it would be good to collect that state into a common place (gitlab) too
Bertrand: can osstest be triggered from gitlab?
George: the goal is to slowly move out of osstest into gitlab
Jan: I'm concerned about a few things, for example conflicting merge requests
Bertrand: auto-rebase bot?
Julien: may introduce issues
Stefano: adding more capacity also reduces risk of such conflicts (smaller time 
window);
  two MR options:
  - merge commit
  - cherry-pick (rebase?)
Julien: when Jan is pushing, I'd like to know that when I'm pushing, to
potentially adjust
George: maybe another bot that watches for MRs and sees if they conflict, to
notify early (while pipeline is still running)?
Stefano: this can be another gitlab job
  and also, we can have a fast-fail job - if it fails, it stops the whole 
pipeline (earlier notifications, save resources)
Andrew: there are some non-deterministic errors, but also, there is a lot of 
noise (error messages that are harmless, basically bugs in the test)
Jan: to recap: first push to gitlab staging, then osstest, and only then to 
master; this increases delay
Andrew: security team must have a way to bypass public CI loop, but do testing 
in private first (private gitlab pipelines)
  but also, maintainers of runners implicitly will have access to that - this 
needs to be documented - like require them to be on pre-disclosure list
Jan: what about stable trees?
Stefano: most are okay with gitlab
Andrew: no, recent container change broke all stable trees :/
Stefano: we need George to cleanup permissions on gitlab - a lot of "Owner"s
Marek: what about removing osstests already covered by gitlab?
Andrew: that's stage 2


-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab




Design session notes: Reducing the number of pages always mapped in Xen: Next steps

2023-06-24 Thread Marek Marczykowski-Górecki
Hi, 

Here is what I managed to capture, unfortunately some parts slipped by
me.

Julien: remove directmap, avoid speculatively reading it
  series sent 6 months ago - enough to remove directmap, but not perfect
  resolving virtual addresses requires mapping/unmapping page tables -
significant perf hit
Andrew: a lot of attacks read direct from directmap
  end goal "address space isolation" - no sensitive info mapped into xen
  directmap is not only place, but the large part of the problem
  single heap semi-rely on directmap
Bertrand: you don't know heap size for a guest up front, maybe it can be
specified manually?
Jan: why would that help?
Bertrand: no need to extend heap at guest runtime
Andrew: reduce the number of translations (virt <-> phys) by using better
data structures
  even getting address space isolation just for HVM guests will be a huge
improvement
Jan: zap the directmap when switching to HVM, and reinstantiate when switching 
to PV?
Andrew: just not have it mapped; some things need to be mapped, like vcpu
  structure but not sensitive info, on fast path skip speculative mitigations,
  but when hitting slow path (page fault), apply speculative mitigations and
restore directmap;
  this makes fast path faster and slow path slower
Roger: what about auto eIBRS?
Andrew: it helps only in newer hardware, there is still older hardware
  even with retblead, the fast path with address space isolation would remain 
fast;
  it's also about future-proofing, many new bugs will not require HV changes
George: the slow path would still require adjustments/mitigations, likely
Julien: map specific pages individually, not whole directmap, keep common 
xenheap mapped
Andrew: address space isolation helps also with non-speculative attacks, and 
also per-guest heaps would further isolate sensitive data
Julien: the problem with the page-fault approach is finding all the places and
data that are safe and needed for fast paths
Andrew: implement faulting and then profile, then see whether common hits are 
safe to keep mapped, but if not try to rearrange algorithms/data structures
Bertrand: adjust how Xen is linked, isolate fast path areas from slow path 
areas to be able to switch them on/off fast
Andrew: struct vcpu and struct domain are a dumping ground for everything, some
parts will need moving too
  for example: register data for own vcpus - probably safe, but for different 
vcpu of the same guest probably not safe, vcpu for different guest definitely 
not
Bertrand: risk moving the problem somewhere else? the problem of defining what 
is safe
Andrew: you can identify when it's in the fast path
Jan: besides registers and guest own memory, is there anything else secret?
Andrew: we have more luck than Linux, because for example Xen has no in-Xen 
crypto libraries;
  but also, for example you can figure code paths by looking at stacks
  not much more secret data
Bertrand: if we try to unmap guest-specific data (Jan's idea), don't we solve
the problem in a more efficient way?
Andrew: it's risky
  per-vcpu mapping is easy for HVM, but not for PV, because top level page
  table is chosen by the PV kernel, and Linux does sometimes run multiple vcpus
  using the same page tables -> no per-vcpu mapping
George: close to time limit, lets go to conclusions
Julien: figure out next steps, what to do with the series from 6 months ago
  remove directmap
  make virt<->mfn mapping easy to use
Jan: this feels like going too far, if we only need to remove few secret data
Julien: directmap is about whole guest memory
Daniel Smith(?): what is the overlap with SEV
Andrew: doesn't really overlap with encrypted VMs
  both Intel and AMD encrypted VMs assume hypervisor may have a mapping of 
encrypted pages
  if directmap is present, you have still cache timing attacks, removing 
directmap helps with that
Bertrand: also benefits from safety POV - limits the scope for evaluation
Andrew: accidental out of bounds write will be a page fault - easier to notice
Jan: on demand mapping of xenheap, that means 4k mappings of everything; can we 
do better, to preserve superpages?
Julien: few other structures to consider
Andrew: EPT page tables are not sensitive, MSR permissions also not, because
guest vcpu can recover them anyway
Jan: that data actually can be sensitive, but you can't do anything about it
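
For reference, Xen already has an idiom for the on-demand mappings
discussed above; a minimal sketch of reading a frame through a transient
mapping instead of the directmap (Xen-internal APIs, simplified, no error
handling):

#include <xen/domain_page.h>
#include <xen/mm.h>

/* Read one word from a frame via a short-lived mapping rather than
 * dereferencing it through the always-present directmap. */
static unsigned long read_word_via_mapping(mfn_t mfn, unsigned int idx)
{
    unsigned long *va = map_domain_page(mfn);   /* transient 4k mapping */
    unsigned long val = va[idx];

    unmap_domain_page(va);                      /* tear the mapping down */
    return val;
}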

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab




[linux-5.4 test] 181571: regressions - trouble: broken/fail/pass

2023-06-24 Thread osstest service owner
flight 181571 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181571/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt broken
 build-amd64   6 xen-build  fail in 181563 REGR. vs. 181363

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt  5 host-install(5)  broken pass in 181553
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail in 181553 
pass in 181571
 test-armhf-armhf-xl  18 guest-start/debian.repeat  fail pass in 181563

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-examine-bios  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 
181563 n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1) blocked in 181563 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)  blocked in 181563 n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)blocked in 181563 n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 181563 
n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 181563 
n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 
181563 n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
in 181563 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked in 181563 n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked in 181563 n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)  blocked in 181563 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-examine   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-examine-bios  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-examine-uefi  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)blocked in 181563 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-pair  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1) blocked in 181563 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl1 build-check(1)   blocked in 181563 n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked in 181563 n/a
 

Re: Asking for help to debug xen efi on Kunpeng machine

2023-06-24 Thread Julien Grall

Hi,

On 20/06/2023 08:09, Jiatong Shen wrote:

Hello Julien,

Sorry for the delay.. I obtained the full xen log and attached it in the
mail. Please take a look when you are available. Thank you very much


Thanks for sharing the logs. The following lines are interesting:

[1.081905] Hisilicon MBIGEN-V2 HISI0152:00: Failed to create mbi-gen 
irqdomain
[1.082107] Hisilicon MBIGEN-V2 HISI0152:01: Failed to create mbi-gen 
irqdomain
[1.082204] Hisilicon MBIGEN-V2 HISI0152:02: Failed to create mbi-gen 
irqdomain
[1.082294] Hisilicon MBIGEN-V2 HISI0152:03: Failed to create mbi-gen 
irqdomain
[1.082381] Hisilicon MBIGEN-V2 HISI0152:04: Failed to create mbi-gen 
irqdomain
[1.082466] Hisilicon MBIGEN-V2 HISI0152:05: Failed to create mbi-gen 
irqdomain


Looking at a Hisilicon Device-Tree, this is an interrupt controller 
behind the GICv3 ITS. You will need to rebuild Xen with CONFIG_HAS_ITS=y.


Also, can you confirm which devices are behind the MBI-Gen? If this is 
only PCI devices, then you are probably fine to give the controllers to 
dom0. But for PCI passthrough, you will most likely need to implement
a driver for it in Xen.


Cheers,

--
Julien Grall



Re: [PATCH 3/7] xen/arm64: head: Add missing isb in setup_fixmap()

2023-06-24 Thread Julien Grall

Hi,

On 21/06/2023 11:13, Michal Orzel wrote:



On 21/06/2023 12:02, Julien Grall wrote:



Hi,

On 21/06/2023 10:33, Michal Orzel wrote:



On 19/06/2023 19:01, Julien Grall wrote:



From: Julien Grall 

On older versions of the Arm Arm (ARM DDI 0487E.a, B2-125) there was
the following paragraph:

"DMB and DSB instructions affect reads and writes to the memory system
generated by Load/Store instructions and data or unified cache
maintenance instructions being executed by the PE. Instruction fetches
or accesses caused by a hardware translation table access are not
explicit accesses."

Newer revision (e.g. ARM DDI 0487J.a) doesn't have the second sentence
(it might be somewhere else in the Arm Arm). But the interpretation is
not much different.

In setup_fixmap(), we write the fixmap area, which may be used soon after,
for instance, to write to the UART. IOW, there could be hardware
translation table access. So we need to ensure the 'dsb' has completed
before continuing. Therefore add an 'isb'.

Fixes: 2b11c3646105 ("xen/arm64: head: Remove 1:1 mapping as soon as it is not 
used")
Signed-off-by: Julien Grall 

Reviewed-by: Michal Orzel 

I'm happy with the whole series but I do not see a point in flooding each patch 
with my tag
since you already got two (from Henry and Luca).


Thanks. To clarify, shall I add it in each patch or only this one?

Whatever you prefer. If you care about my tag and want to have more than two, 
feel free to add it to
all the patches.


Ok. I will not then because I need to add the ack manually.

Cheers,

--
Julien Grall



Re: [XEN PATCH v4] xen/include: avoid using a compiler extension for BUILD_BUG_ON_ZERO.

2023-06-24 Thread Jan Beulich
On 24.06.2023 09:11, Julien Grall wrote:
> On 23/06/2023 18:16, Jan Beulich wrote:
>> I'm not happy to, with the continued use of the
>> two U suffixes. It may seem minor, but to me it feels like setting a
>> bad precedent.
> 
> I wasn't able to find the reasoning behind your objections in the 
> archive. I would like to understand your concern before providing any 
> ack. Would you be able to give a pointer?

I appreciate the Misra-invoked desire to add U suffixes where
otherwise (visual) ambiguities may exist. But on numbers like
0 or 1, and when use of e.g. resulting #define-s doesn't require
the constants to be of unsigned type, I view such suffixes purely
as clutter. In the specific case I might go as far as questioning
why, when U is added, L isn't added as well, to "support" the
size_t result aspect also from the "width of type" perspective.

Jan
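
For readers following along, the construct under discussion can be written
without the GCC expression-statement extension roughly like this (one
possible formulation, not necessarily the exact one in the patch):

/* Yields a zero of type size_t, failing the build if cond is true
 * (assumes C11 _Static_assert inside a struct definition). */
#define BUILD_BUG_ON_ZERO(cond) \
    (sizeof(struct { char c_; _Static_assert(!(cond), "!(" #cond ")"); }) & 0U)

int main(void)
{
    /* Compiles because the condition is false; the macro folds to 0. */
    return (int)BUILD_BUG_ON_ZERO(sizeof(int) > sizeof(long));
}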



Re: [XEN PATCH v4] xen/include: avoid using a compiler extension for BUILD_BUG_ON_ZERO.

2023-06-24 Thread Julien Grall

Hi,

First, one remark about the title. We don't usually add a full stop in the
title. I am happy to fix it on commit.


On 23/06/2023 18:16, Jan Beulich wrote:

I'm not happy to, with the continued use of the
two U suffixes. It may seem minor, but to me it feels like setting a
bad precedent.


I wasn't able to find the reasoning behind your objections in the 
archive. I would like to understand your concern before providing any 
ack. Would you be able to give a pointer?


Cheers,

--
Julien Grall



Re: [PATCH v1] xen/arm: arm32: Add support to identify the Cortex-R52 processor

2023-06-24 Thread Julien Grall

Hi,

On 23/06/2023 22:26, Julien Grall wrote:

--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -322,7 +322,7 @@ cpu_init:
  PRINT("- Setting up control registers -\r\n")

  mov   r5, lr   /* r5 := return address */
-
+#ifndef CONFIG_ARM_NO_PROC_INIT
  /* Get processor specific proc info into r1 */
  bl    __lookup_processor_type
  teq   r1, #0
@@ -337,7 +337,7 @@ cpu_init:
  ldr   r1, [r1, #PROCINFO_cpu_init]  /* r1 := vaddr(init func) */

  adr   lr, cpu_init_done /* Save return address */
  add   pc, r1, r10   /* Call paddr(init func) */
-
+#endif


I think it would be best if you just #ifdef the fail below. So if the
config is selected, then you will still be able to have a Xen that can boot
Cortex-A15 or a core that doesn't need _init.


Note that for now, we should only select this new config for Armv8-R
because there is some work needed to confirm it would be safe for us to
boot 32-bit Arm Xen on any CPU. I vaguely remember that we were making
some assumptions on the cache type in the past. But maybe we have other
checks in place to verify such assumptions.


If this can be confirmed (I am not asking you to do it, but you can) then we
could even get rid of the #ifdef.


I had a look through the code. We have a check in the 32-bit version of 
setup_mm() for the instruction cache type. So I think it would be OK to 
relax the check in head.S.


Bertrand, Stefano, what do you think?

Cheers,

--
Julien Grall



[xen-unstable test] 181565: tolerable FAIL - PUSHED

2023-06-24 Thread osstest service owner
flight 181565 xen-unstable real [real]
flight 181574 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181565/
http://logs.test-lab.xenproject.org/osstest/logs/181574/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 
181574-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 181545
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 181545
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 181545
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 181545
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 181545
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 181545
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 181545
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 181545
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 181545
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 181545
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 181545
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 181545
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-checkfail never pass

version targeted for testing:
 xen  5c84f1f636981dab5341e84aaba8d4dd00bbc2cb
baseline version:
 xen  7a25a1501ca941c3e01b0c4e624ace05417f1587

Last test of basis   181545  2023-06-22 01:52:10 Z