[PATCH] x86/xen: only unlock when USE_SPLIT_PTE_PTLOCKS is true

2020-09-28 Thread Jason Yan
When USE_SPLIT_PTE_PTLOCKS is false, xen_pte_lock() actually does nothing
but return NULL. So xen_pte_unlock() should not actually unlock.
Otherwise a NULL pointer dereference will be triggered.

Fixes: 74260714c56d ("xen: lock pte pages while pinning/unpinning")
Signed-off-by: Jason Yan 
---
 arch/x86/xen/mmu_pv.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index eda78144c000..c70cbdf5c0fa 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -656,8 +656,10 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
 
 static void xen_pte_unlock(void *v)
 {
+#if USE_SPLIT_PTE_PTLOCKS
 	spinlock_t *ptl = v;
 	spin_unlock(ptl);
+#endif
 }
 
 static void xen_do_pin(unsigned level, unsigned long pfn)
-- 
2.25.4
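
(For reference, the lock side this pairs with looks roughly as follows -
a simplified sketch of mmu_pv.c's xen_pte_lock(), from memory rather
than verbatim:)

static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
{
	spinlock_t *ptl = NULL;

#if USE_SPLIT_PTE_PTLOCKS
	/* Only with split PTE locks is there a per-page lock to take. */
	ptl = ptlock_ptr(page);
	spin_lock_nest_lock(ptl, &mm->page_table_lock);
#endif

	return ptl;	/* NULL when split PTE locks are compiled out */
}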




Re: [PATCH] xen/x86: Fix memory leak in vcpu_create() error path

2020-09-28 Thread Jan Beulich
On 28.09.2020 17:47, Andrew Cooper wrote:
> Various paths in vcpu_create() end up calling paging_update_paging_modes(),
> which eventually allocate a monitor pagetable if one doesn't exist.
> 
> However, an error in vcpu_create() results in the vcpu being cleaned up
> locally, and not put onto the domain's vcpu list.  Therefore, the monitor
> table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
> assertions later that we've successfully freed the entire hap/shadow memory
> pool.
> 
> The per-vcpu loops in the domain teardown logic are conceptually wrong, but
> exist due to insufficient structure in the existing logic.
> 
> Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
> in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
> fixes the memory leak.
> 
> The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to be
> as tolerant as possible, with the minimum number of safety checks possible.
> In particular, drop the mfn_valid() check - if junk is in these fields, then
> Xen is going to explode anyway.
> 
> Reported-by: Michał Leszczyński 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Jan Beulich 
yet I've got a couple of simple questions:

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -563,30 +563,37 @@ void hap_final_teardown(struct domain *d)
>  paging_unlock(d);
>  }
>  
> +void hap_vcpu_teardown(struct vcpu *v)
> +{
> +struct domain *d = v->domain;
> +mfn_t mfn;
> +
> +paging_lock(d);
> +
> +if ( !paging_mode_hap(d) || !v->arch.paging.mode )
> +goto out;

Any particular reason you don't use paging_get_hostmode() (as the
original code did) here? Any particular reason for the seemingly
redundant (and hence somewhat in conflict with the description's
"with the minimum number of safety checks possible")
paging_mode_hap()?

> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -2775,6 +2775,32 @@ int shadow_enable(struct domain *d, u32 mode)
>  return rv;
>  }
>  
> +void shadow_vcpu_teardown(struct vcpu *v)
> +{
> +struct domain *d = v->domain;
> +
> +paging_lock(d);
> +
> +if ( !paging_mode_shadow(d) || !v->arch.paging.mode )

Same question regarding paging_get_hostmode() here, albeit I see
the original code open-coded it in this case.

Jan



Re: [RESEND] [PATCH] tools/python: Pass linker to Python build process

2020-09-28 Thread Jan Beulich
On 29.09.2020 04:27, Pry Mar wrote:
>> Unexpectedly the environment variable which needs to be passed is
>> $LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
>> of the host `ld`.
>>
>> Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
>> it can load at runtime, not executables.
>>
>> This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
>> to $LDFLAGS which breaks many linkers.
>>
>> Signed-off-by: Elliott Mitchell 
> 
> Tested-by: Mark Pryor 

Just fyi: with the domain part of the address zapped (which looks to have
happened on your side; the list archive shows the same), I'm afraid such a
tag is unusable. While one might infer the address from the mail's From:
field, there's no guarantee that's what's intended.

Jan



[xen-4.11-testing test] 155013: regressions - FAIL

2020-09-28 Thread osstest service owner
flight 155013 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155013/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-prev           6 xen-build    fail REGR. vs. 151714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-xsm  12 guest-start  fail REGR. vs. 151714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-libvirt-xsm 12 guest-start  fail REGR. vs. 151714
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-libvirt-xsm  12 guest-start  fail REGR. vs. 151714
 test-amd64-i386-xl-xsm   12 guest-start  fail REGR. vs. 151714

Tests which did not succeed, but are not blocking:
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-suppor

[xen-4.13-testing test] 155015: regressions - FAIL

2020-09-28 Thread osstest service owner
flight 155015 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155015/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64               6 xen-build    fail REGR. vs. 154358
 build-amd64-xsm           6 xen-build    fail REGR. vs. 154358

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)   blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-livepatch 1 build-check(1)   blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-

re: [RESEND] [PATCH] tools/python: Pass linker to Python build process

2020-09-28 Thread Pry Mar
>Unexpectedly the environment variable which needs to be passed is
>$LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
>of the host `ld`.
>
>Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
>it can load at runtime, not executables.
>
>This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
>to $LDFLAGS which breaks many linkers.
>
>Signed-off-by: Elliott Mitchell 

Tested-by: Mark Pryor 
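
(In effect the build now invokes distutils along these lines - a
make-style sketch; $(PYTHON) is illustrative and the real rule in
tools/python/Makefile may differ:)

# Hand distutils the host C compiler as the shared-object linker, so it
# cannot pick up the build `ld`, and pass shared-library link flags
# rather than executable ones.
build:
	LDSHARED="$(CC) $(SHLIB_LDFLAGS)" $(PYTHON) setup.py build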



[xen-unstable-smoke test] 155048: tolerable all pass - PUSHED

2020-09-28 Thread osstest service owner
flight 155048 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155048/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen  28804c0ce9fde36feec04ad7f57b2683875da8a0
baseline version:
 xen  4bdbf746ac9152e70f264f87db4472707da805ce

Last test of basis   155035  2020-09-28 17:02:03 Z    0 days
Testing same since   155048  2020-09-28 22:00:30 Z    0 days    1 attempts


People who touched revisions under test:
  Julien Grall 
  Stefano Stabellini 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4bdbf746ac..28804c0ce9  28804c0ce9fde36feec04ad7f57b2683875da8a0 -> smoke



Re: [PATCH -next] xen: Fix a previous prototype warning in xen.c

2020-09-28 Thread Bjorn Helgaas
On Thu, Sep 24, 2020 at 10:36:16PM +0800, Li Heng wrote:
> Fix the warning:
> arch/x86/pci/xen.c:423:13: warning:
> no previous prototype for ‘xen_msi_init’ [-Wmissing-prototypes]
> 
> Reported-by: Hulk Robot 
> Signed-off-by: Li Heng 

Applied to pci/misc for v5.10, thanks!

> ---
>  arch/x86/pci/xen.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
> index 89395a5..f663a5f 100644
> --- a/arch/x86/pci/xen.c
> +++ b/arch/x86/pci/xen.c
> @@ -420,7 +420,7 @@ int __init pci_xen_init(void)
>  }
> 
>  #ifdef CONFIG_PCI_MSI
> -void __init xen_msi_init(void)
> +static void __init xen_msi_init(void)
>  {
>   if (!disable_apic) {
>   /*
> --
> 2.7.4
> 



[xen-4.10-testing test] 155012: regressions - trouble: blocked/broken/fail/pass

2020-09-28 Thread osstest service owner
flight 155012 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155012/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx broken
 build-i386-xsm            6 xen-build    fail REGR. vs. 151728
 build-i386                6 xen-build    fail REGR. vs. 151728
 build-amd64               6 xen-build    fail REGR. vs. 151728
 build-amd64-prev          6 xen-build    fail REGR. vs. 151728
 build-i386-prev           6 xen-build    fail REGR. vs. 151728
 build-amd64-xsm           6 xen-build    fail REGR. vs. 151728

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-1        1 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 build-i386-libvirt            1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-livepatch 1 build-check(1)   blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-qemuu-rhel6

[seabios test] 155004: regressions - FAIL

2020-09-28 Thread osstest service owner
flight 155004 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155004/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm           6 xen-build    fail REGR. vs. 152554

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop    fail like 152554
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop    fail like 152554
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop     fail like 152554
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop     fail like 152554
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios  41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5
baseline version:
 seabios  155821a1990b6de78dde5f98fa5ab90e802021e0

Last test of basis   152554  2020-08-10 15:41:45 Z   49 days
Testing same since   154814  2020-09-25 16:10:32 Z    3 days    2 attempts


People who touched revisions under test:
  Daniel P. Berrangé 
  Matt DeVillier 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  blocked 
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-qemuu-freebsd11-amd64   pass
 test-amd64-amd64-qemuu-freebsd12-amd64   pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrictpass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict pass
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5
Author: Matt DeVillier 
Date:   Fri Sep 11 12:54:21 2020 -0500

usb.c: Fix devices using non-primary interface descriptor

A fair number of USB devices (keyboards in particular) use an
interface descriptor other than the first available, making them
non-functional currently. To correct this, iterate through all
available interface descriptors until one with the correct
class/subclass is found, then proceed to set the configuration and
set up the driver.

Tested on an Ultimate Hacking Keyboard (UHK 60).

Signed-off-by: Matt DeVillier 
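
(The iteration has roughly this shape - a sketch only; the descriptor
struct, type and class constants are recalled from SeaBIOS's usb
headers and may not match the commit verbatim:)

/* Walk the descriptors following the configuration descriptor and pick
 * the first interface this driver actually understands, instead of
 * blindly using the first one. */
struct usb_interface_descriptor *iface = (void *)(config + 1);
void *end = (void *)config + config->wTotalLength;

for (; (void *)iface + sizeof(*iface) <= end
     ; iface = (void *)iface + iface->bLength) {
    if (iface->bLength < 2)
        break;                      /* malformed descriptor - stop */
    if (iface->bDescriptorType == USB_DT_INTERFACE
        && iface->bInterfaceClass == USB_INTERFACE_CLASS_HID)
        break;                      /* usable interface - configure it */
}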

commit 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5
Author: Daniel P. Berrangé 
Date:   Tue Sep 8 16:16:53 2020 +0100

smbios: avoid integer overflow when adding SMBIOS type 0 table

SeaBIOS implements the SMB

[ovmf test] 155005: regressions - FAIL

2020-09-28 Thread osstest service owner
flight 155005 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155005/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm           6 xen-build    fail REGR. vs. 154633

version targeted for testing:
 ovmf 1d058c3e86b079a2e207bb022fd7a97814c9a04f
baseline version:
 ovmf dd5c7e3c5282b084daa5bbf0ec229cec699b2c17

Last test of basis   154633  2020-09-23 05:49:28 Z    5 days
Failing since        154753  2020-09-25 02:39:51 Z    3 days    3 attempts
Testing same since   154899  2020-09-26 12:23:59 Z    2 days    2 attempts


People who touched revisions under test:
  Bob Feng 
  gaoliming 
  Liming Gao 
  Mingyue Liang 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 1d058c3e86b079a2e207bb022fd7a97814c9a04f
Author: gaoliming 
Date:   Wed Sep 16 17:58:14 2020 +0800

IntelFsp2Pkg GenCfgOpt.py: Initialize IncLines as empty list

Initialize IncLines as an empty list for the case when InputHeaderFile is
not specified.

Cc: Chasel Chiu 
Cc: Nate DeSimone 
Cc: Star Zeng 
Signed-off-by: Liming Gao 
Reviewed-by: Chasel Chiu 
Reviewed-by: Star Zeng 

commit d8be01079b3c7b554ac8126e97e73fba8894e519
Author: Bob Feng 
Date:   Tue Sep 22 19:27:54 2020 +0800

BaseTools: Set section alignment as zero if its type is Auto

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2881

Currently, the build tool tries to read the section alignment
from the efi file if the section alignment type is Auto.
If no efi file has been generated, the section alignment will
be set to zero. This behavior causes the Makefile to be different
between the full build and the incremental build.

Since GenFfs can automatically get the section alignment from the
efi file during the GenFfs procedure, the build tool can simply set
the section alignment to zero. This change makes the autogen Makefile
consistent between the full build and the incremental build.

Signed-off-by: Bob Feng 
Cc: Liming Gao 
Cc: Yuwei Chen 

Reviewed-by: Liming Gao 
Reviewed-by: Yuwei Chen

commit 3a7a6761143a4840faea0bd84daada3ac0f1bd22
Author: Bob Feng 
Date:   Wed Sep 23 20:36:58 2020 +0800

BaseTools: Remove CanSkip calling for incremental build

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2978

If a module adds a new PCD, the PCD token number will be
reassigned. The new PCD token number should be propagated
to all modules' autogen files. CanSkip can only detect a
single module's change, not changes in other modules. CanSkip
blocks the PCD token number update in an incremental build,
so this patch removes that call.

Signed-off-by: Bob Feng 
Cc: Liming Gao 
Cc: Yuwei Chen 

Reviewed-by: Yuwei Chen

commit 9641a7f975ff5a18f83a8c899626342e15409c48
Author: Mingyue Liang 
Date:   Wed Sep 23 18:57:32 2020 +0800

BaseTools: Normalize case of pathname when evaluating Macros.

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2880

Currently, when doing an incremental build, the directory
macros are expanded to absolute paths in the output Makefile,
which is inconsistent with the output of a clean build.

When doing macro replacement, the macros cannot be replaced due
to inconsistent path case, which makes the incremental build's
Makefile differ from the clean build's. Therefore, normalize the
case of the pathname to achieve the correct macro replacement.

Signed-off-by: Mingyue Liang 
Cc: Bob Feng 
Cc: Liming G

[xen-unstable-smoke test] 155035: tolerable all pass - PUSHED

2020-09-28 Thread osstest service owner
flight 155035 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155035/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen  4bdbf746ac9152e70f264f87db4472707da805ce
baseline version:
 xen  5bcac985498ed83d89666959175ca9c9ed561ae1

Last test of basis   154728  2020-09-24 21:01:24 Z    4 days
Testing same since   155022  2020-09-28 14:00:30 Z    0 days    2 attempts


People who touched revisions under test:
  Jan Beulich 
  Julien Grall 
  Marek Marczykowski-Górecki 
  Roger Pau Monné 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5bcac98549..4bdbf746ac  4bdbf746ac9152e70f264f87db4472707da805ce -> smoke



Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()

2020-09-28 Thread Pankaj Gupta
> Page isolation doesn't actually touch the pages, it simply isolates
> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>
> We already place pages to the tail of the freelists when undoing
> isolation via __putback_isolated_page(), let's do it in any case
> (e.g., if order <= pageblock_order) and document the behavior.
>
> Add a "to_tail" parameter to move_freepages_block() but introduce a
> new move_to_free_list_tail() - similar to add_to_free_list_tail().
>
> This change results in all pages getting onlined via online_pages() to
> be placed to the tail of the freelist.
>
> Reviewed-by: Oscar Salvador 
> Cc: Andrew Morton 
> Cc: Alexander Duyck 
> Cc: Mel Gorman 
> Cc: Michal Hocko 
> Cc: Dave Hansen 
> Cc: Vlastimil Babka 
> Cc: Wei Yang 
> Cc: Oscar Salvador 
> Cc: Mike Rapoport 
> Cc: Scott Cheloha 
> Cc: Michael Ellerman 
> Signed-off-by: David Hildenbrand 
> ---
>  include/linux/page-isolation.h |  4 ++--
>  mm/page_alloc.c| 35 +++---
>  mm/page_isolation.c| 12 +---
>  3 files changed, 35 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index 572458016331..3eca9b3c5305 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
>  struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  int migratetype, int flags);
>  void set_pageblock_migratetype(struct page *page, int migratetype);
> -int move_freepages_block(struct zone *zone, struct page *page,
> -   int migratetype, int *num_movable);
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +bool to_tail, int *num_movable);
>
>  /*
>   * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9e3ed4a6f69a..d5a5f528b8ca 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
> list_move(&page->lru, &area->free_list[migratetype]);
>  }
>
> +/* Used for pages which are on another list */
> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
> + unsigned int order, int migratetype)
> +{
> +   struct free_area *area = &zone->free_area[order];
> +
> +   list_move_tail(&page->lru, &area->free_list[migratetype]);
> +}
> +
>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>unsigned int order)
>  {
> @@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>   * Note that start_page and end_pages are not aligned on a pageblock
>   * boundary. If alignment is required, use move_freepages_block()
>   */
> -static int move_freepages(struct zone *zone,
> - struct page *start_page, struct page *end_page,
> - int migratetype, int *num_movable)
> +static int move_freepages(struct zone *zone, struct page *start_page,
> + struct page *end_page, int migratetype,
> + bool to_tail, int *num_movable)
>  {
> struct page *page;
> unsigned int order;
> @@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
> VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>
> order = page_order(page);
> -   move_to_free_list(page, zone, order, migratetype);
> +   if (to_tail)
> +   move_to_free_list_tail(page, zone, order, migratetype);
> +   else
> +   move_to_free_list(page, zone, order, migratetype);
> page += 1 << order;
> pages_moved += 1 << order;
> }
> @@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
> return pages_moved;
>  }
>
> -int move_freepages_block(struct zone *zone, struct page *page,
> -   int migratetype, int *num_movable)
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +bool to_tail, int *num_movable)
>  {
> unsigned long start_pfn, end_pfn;
> struct page *start_page, *end_page;
> @@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
> return 0;
>
> return move_freepages(zone, start_page, end_page, migratetype,
> -   num_movable);
> + to_tail, num_movable);
>  }
>
>  static void change_pageblock_range(struct page *pageblock_page,
> @@ -2526,8 +2538,8 @@ static void steal_suitable

Re: [PATCH 12/12] evtchn: convert domain event lock to an r/w one

2020-09-28 Thread Roger Pau Monné
On Mon, Sep 28, 2020 at 01:02:43PM +0200, Jan Beulich wrote:
> Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
> across pCPU-s) and the ones in EOI handling in PCI pass-through code,
> serializing perhaps an entire domain isn't helpful when no state (which
> isn't e.g. further protected by the per-channel lock) changes.
> 
> Unfortunately this implies dropping of lock profiling for this lock,
> until r/w locks may get enabled for such functionality.
> 
> While ->notify_vcpu_id is now meant to be consistently updated with the
> per-channel lock held for writing, an extension applies to ECS_PIRQ: The
> field is also guaranteed to not change with the per-domain event lock
> held. Therefore the unlink_pirq_port() call from evtchn_bind_vcpu() as
> well as the link_pirq_port() one from evtchn_bind_pirq() could in
> principle be moved out of the per-channel locked regions, but this
> further code churn didn't seem worth it.
> 
> Signed-off-by: Jan Beulich 
> ---
> RFC:
> * In evtchn_bind_vcpu() the question is whether limiting the use of
>   write_lock() to just the ECS_PIRQ case is really worth it.

IMO I would just use write_lock() at the top of the function in
place of the current spin_lock(). The more fine-grained change should
be done as a follow-up patch if it's worth it. TBH, event channels
shouldn't change vCPU so frequently that a more fine-grained
approach matters much.

> * In flask_get_peer_sid() the question is whether we wouldn't better
>   switch to using the per-channel lock.
>  
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -465,7 +465,7 @@ int msixtbl_pt_register(struct domain *d
>  int r = -EINVAL;
>  
>  ASSERT(pcidevs_locked());
> -ASSERT(spin_is_locked(&d->event_lock));
> +ASSERT(rw_is_write_locked(&d->event_lock));

FWIW, we could switch rw_is_write_locked to use
_is_write_locked_by_me (or introduce rw_is_write_locked_by_me, albeit
I think all users of rw_is_write_locked care about the lock being
taken by them).
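
I.e. something along these lines (a sketch only, built on the existing
_is_write_locked_by_me() helper, so the exact shape depends on the
rwlock internals):

/* True only when the local CPU holds the lock for writing, rather
 * than merely "some writer holds it". */
#define rw_is_write_locked_by_me(l) \
    _is_write_locked_by_me(atomic_read(&(l)->cnts))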

> @@ -1098,7 +1108,7 @@ int evtchn_reset(struct domain *d, bool
>  if ( d != current->domain && !d->controller_pause_count )
>  return -EINVAL;
>  
> -spin_lock(&d->event_lock);
> +read_lock(&d->event_lock);
>  
>  /*
>   * If we are resuming, then start where we stopped. Otherwise, check
> @@ -1109,7 +1119,7 @@ int evtchn_reset(struct domain *d, bool
>  if ( i > d->next_evtchn )
>  d->next_evtchn = i;

Using the read lock to write to d->next_evtchn here...

>  
> -spin_unlock(&d->event_lock);
> +read_unlock(&d->event_lock);
>  
>  if ( !i )
>  return -EBUSY;
> @@ -1121,14 +1131,14 @@ int evtchn_reset(struct domain *d, bool
>  /* NB: Choice of frequency is arbitrary. */
>  if ( !(i & 0x3f) && hypercall_preempt_check() )
>  {
> -spin_lock(&d->event_lock);
> +write_lock(&d->event_lock);
>  d->next_evtchn = i;

... but the write lock here instead seems inconsistent.

> -spin_unlock(&d->event_lock);
> +write_unlock(&d->event_lock);
>  return -ERESTART;
>  }
>  }
>  
> -spin_lock(&d->event_lock);
> +write_lock(&d->event_lock);
>  
>  d->next_evtchn = 0;
>  
> @@ -1557,7 +1568,7 @@ static void domain_dump_evtchn_info(stru
> "Polling vCPUs: {%*pbl}\n"
> "port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);
>  
> -spin_lock(&d->event_lock);
> +read_lock(&d->event_lock);

Since this is a debug key, I would suggest using read_trylock in order
to prevent blocking if a CPU is stuck while holding the event_lock in
write mode.
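
I.e. roughly (a sketch only):

    if ( !read_trylock(&d->event_lock) )
    {
        printk("  event_lock busy, skipping dump\n");
        return;
    }

    /* ... dump the per-port state ... */

    read_unlock(&d->event_lock);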


> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -105,7 +105,7 @@ static void pt_pirq_softirq_reset(struct
>  {
>  struct domain *d = pirq_dpci->dom;
>  
> -ASSERT(spin_is_locked(&d->event_lock));
> +ASSERT(rw_is_write_locked(&d->event_lock));
>  
>  switch ( cmpxchg(&pirq_dpci->state, 1 << STATE_SCHED, 0) )
>  {
> @@ -162,7 +162,7 @@ static void pt_irq_time_out(void *data)
>  const struct hvm_irq_dpci *dpci;
>  const struct dev_intx_gsi_link *digl;
>  
> -spin_lock(&irq_map->dom->event_lock);
> +read_lock(&irq_map->dom->event_lock);

Is it fine to use the lock in read mode here? It's likely to change
the flags by adding HVM_IRQ_DPCI_EOI_LATCH, and hence should use the
lock in write mode?

As I think that's the lock that's supposed to protect changes to the
flags field?

>  static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
> @@ -893,7 +893,7 @@ static void hvm_dirq_assist(struct domai
>  return;
>  }
>  
> -spin_lock(&d->event_lock);
> +read_lock(&d->event_lock);

It's also not clear to me that a read lock can be used here, since you
increase a couple of counters of hvm_pirq_dpci which doesn't seem to
be protected by any other lock?

>  if ( test_and_clear_bool

Re: [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()

2020-09-28 Thread Pankaj Gupta
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.
>
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation (including memory onlining).
>
> This behavior is desirable for pages that haven't really been touched
> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
>
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> free list, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
>
> Document that this should only be used for optimizations, and no code
> should rely on this for correctness (if the order of freepage lists
> ever changes).
>
> We won't care about page shuffling: memory onlining already properly
> shuffles after onlining. Free page reporting doesn't care about
> physically contiguous ranges, and there are already cases where page
> isolation will simply move (physically close) free pages to (currently)
> the head of the freelists via move_freepages_block() instead of
> shuffling. If this becomes ever relevant, we should shuffle the whole
> zone when undoing isolation of larger ranges, and after
> free_contig_range().
>
> Reviewed-by: Alexander Duyck 
> Reviewed-by: Oscar Salvador 
> Cc: Andrew Morton 
> Cc: Alexander Duyck 
> Cc: Mel Gorman 
> Cc: Michal Hocko 
> Cc: Dave Hansen 
> Cc: Vlastimil Babka 
> Cc: Wei Yang 
> Cc: Oscar Salvador 
> Cc: Mike Rapoport 
> Cc: Scott Cheloha 
> Cc: Michael Ellerman 
> Signed-off-by: David Hildenbrand 
> ---
>  mm/page_alloc.c | 18 --
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index daab90e960fe..9e3ed4a6f69a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
>   */
>  #define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
>
> +/*
> + * Place the (possibly merged) page to the tail of the freelist. Will ignore
> + * page shuffling (relevant code - e.g., memory onlining - is expected to
> + * shuffle the whole zone).
> + *
> + * Note: No code should rely on this flag for correctness - it's purely
> + *   to allow for optimizations when handing back either fresh pages
> + *   (memory onlining) or untouched pages (page isolation, free page
> + *   reporting).
> + */
> +#define FOP_TO_TAIL((__force fop_t)BIT(1))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION   (8)
> @@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>  done_merging:
> set_page_order(page, order);
>
> -   if (is_shuffle_order(order))
> +   if (fop_flags & FOP_TO_TAIL)
> +   to_tail = true;
> +   else if (is_shuffle_order(order))
> to_tail = shuffle_pick_tail();
> else
> to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
> @@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>
> /* Return isolated page to tail of freelist. */
> __free_one_page(page, page_to_pfn(page), zone, order, mt,
> -   FOP_SKIP_REPORT_NOTIFY);
> +   FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
>  }

Reviewed-by: Pankaj Gupta 



Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()

2020-09-28 Thread Pankaj Gupta
> __free_pages_core() is used when exposing fresh memory to the buddy
> during system boot and when onlining memory in generic_online_page().
>
> generic_online_page() is used in two cases:
>
> 1. Direct memory onlining in online_pages().
> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>balloon and virtio-mem), when parts of a section are kept
>fake-offline to be fake-onlined later on.
>
> In 1, we already place pages to the tail of the freelist. Pages will be
> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
> via undo_isolate_page_range().
>
> In 2, we currently don't implement a proper rule. In case of virtio-mem,
> where we currently always online MAX_ORDER - 1 pages, the pages will be
> placed to the HEAD of the freelist - undesirable. While the hyper-v
> balloon calls generic_online_page() with single pages, usually it will
> call it on successive single pages in a larger block.
>
> The pages are fresh, so place them to the tail of the freelists and avoid
> the PCP. In __free_pages_core(), remove the now superfluous call to
> set_page_refcounted() and add a comment regarding page initialization and
> the refcount.
>
> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
> is usually of limited use in virtualized environments), we might want to
> shuffle after a sequence of generic_online_page() calls in the
> relevant callers.
>
> Reviewed-by: Vlastimil Babka 
> Reviewed-by: Oscar Salvador 
> Cc: Andrew Morton 
> Cc: Alexander Duyck 
> Cc: Mel Gorman 
> Cc: Michal Hocko 
> Cc: Dave Hansen 
> Cc: Vlastimil Babka 
> Cc: Wei Yang 
> Cc: Oscar Salvador 
> Cc: Mike Rapoport 
> Cc: "K. Y. Srinivasan" 
> Cc: Haiyang Zhang 
> Cc: Stephen Hemminger 
> Cc: Wei Liu 
> Signed-off-by: David Hildenbrand 
> ---
>  mm/page_alloc.c | 37 -
>  1 file changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d5a5f528b8ca..8a2134fe9947 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
>  unsigned int pageblock_order __read_mostly;
>  #endif
>
> -static void __free_pages_ok(struct page *page, unsigned int order);
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +   fop_t fop_flags);
>
>  /*
>   * results with 256, 32 in the lowmem_reserve sysctl:
> @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
>  void free_compound_page(struct page *page)
>  {
> mem_cgroup_uncharge(page);
> -   __free_pages_ok(page, compound_order(page));
> +   __free_pages_ok(page, compound_order(page), FOP_NONE);
>  }
>
>  void prep_compound_page(struct page *page, unsigned int order)
> @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> spin_unlock(&zone->lock);
>  }
>
> -static void free_one_page(struct zone *zone,
> -   struct page *page, unsigned long pfn,
> -   unsigned int order,
> -   int migratetype)
> +static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
> + unsigned int order, int migratetype, fop_t fop_flags)
>  {
> spin_lock(&zone->lock);
> if (unlikely(has_isolate_pageblock(zone) ||
> is_migrate_isolate(migratetype))) {
> migratetype = get_pfnblock_migratetype(page, pfn);
> }
> -   __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> +   __free_one_page(page, pfn, zone, order, migratetype, fop_flags);
> spin_unlock(&zone->lock);
>  }
>
> @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
> }
>  }
>
> -static void __free_pages_ok(struct page *page, unsigned int order)
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +   fop_t fop_flags)
>  {
> unsigned long flags;
> int migratetype;
> @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
> migratetype = get_pfnblock_migratetype(page, pfn);
> local_irq_save(flags);
> __count_vm_events(PGFREE, 1 << order);
> -   free_one_page(page_zone(page), page, pfn, order, migratetype);
> +   free_one_page(page_zone(page), page, pfn, order, migratetype,
> + fop_flags);
> local_irq_restore(flags);
>  }
>
> @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
> struct page *p = page;
> unsigned int loop;
>
> +   /*
> +* When initializing the memmap, init_single_page() sets the refcount
> +* of all pages to 1 ("allocated"/"not free"). We have to set the
> +* refcount of all involved pages to 0.
> +*/
> prefetchw(p);

Re: [PATCH 57/63] xen: Rename XENBACKEND_DEVICE to XENBACKEND

2020-09-28 Thread Anthony PERARD
On Wed, Sep 02, 2020 at 06:43:05PM -0400, Eduardo Habkost wrote:
> Make the type checking macro name consistent with the TYPE_*
> constant.
> 
> Signed-off-by: Eduardo Habkost 

Acked-by: Anthony PERARD 

Thanks,

-- 
Anthony PERARD



Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag

2020-09-28 Thread Pankaj Gupta
> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok(),
> however, I wasn't able to come up with a better name for the type - should
> be good enough for internal purposes.
>
> Reviewed-by: Alexander Duyck 
> Reviewed-by: Vlastimil Babka 
> Reviewed-by: Oscar Salvador 
> Cc: Andrew Morton 
> Cc: Alexander Duyck 
> Cc: Mel Gorman 
> Cc: Michal Hocko 
> Cc: Dave Hansen 
> Cc: Vlastimil Babka 
> Cc: Wei Yang 
> Cc: Oscar Salvador 
> Cc: Mike Rapoport 
> Signed-off-by: David Hildenbrand 
> ---
>  mm/page_alloc.c | 28 
>  1 file changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df90e3654f97..daab90e960fe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -77,6 +77,18 @@
>  #include "shuffle.h"
>  #include "page_reporting.h"
>
> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;
> +
> +/* No special request */
> +#define FOP_NONE   ((__force fop_t)0)
> +
> +/*
> + * Skip free page reporting notification for the (possibly merged) page. (will
> + * *not* mark the page reported, only skip the notification).
> + */
> +#define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION   (8)
> @@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>   * -- nyc
>   */
>
> -static inline void __free_one_page(struct page *page,
> -   unsigned long pfn,
> -   struct zone *zone, unsigned int order,
> -   int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +  struct zone *zone, unsigned int order,
> +  int migratetype, fop_t fop_flags)
>  {
> struct capture_control *capc = task_capc(zone);
> unsigned long buddy_pfn;
> @@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
> add_to_free_list(page, zone, order, migratetype);
>
> /* Notify page reporting subsystem of freed page */
> -   if (report)
> +   if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
> page_reporting_notify_free(order);
>  }
>
> @@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> if (unlikely(isolated_pageblocks))
> mt = get_pageblock_migratetype(page);
>
> -   __free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
> +   __free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
> trace_mm_page_pcpu_drain(page, 0, mt);
> }
> spin_unlock(&zone->lock);
> @@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
> is_migrate_isolate(migratetype))) {
> migratetype = get_pfnblock_migratetype(page, pfn);
> }
> -   __free_one_page(page, pfn, zone, order, migratetype, true);
> +   __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> spin_unlock(&zone->lock);
>  }
>
> @@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
> lockdep_assert_held(&zone->lock);
>
> /* Return isolated page to tail of freelist. */
> -   __free_one_page(page, page_to_pfn(page), zone, order, mt, false);
> +   __free_one_page(page, page_to_pfn(page), zone, order, mt,
> +   FOP_SKIP_REPORT_NOTIFY);
>  }

Reviewed-by: Pankaj Gupta 



[PATCH] xen/x86: Fix memory leak in vcpu_create() error path

2020-09-28 Thread Andrew Cooper
Various paths in vcpu_create() end up calling paging_update_paging_modes(),
which eventually allocate a monitor pagetable if one doesn't exist.

However, an error in vcpu_create() results in the vcpu being cleaned up
locally, and not put onto the domain's vcpu list.  Therefore, the monitor
table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
assertions later that we've successfully freed the entire hap/shadow memory
pool.

The per-vcpu loops in the domain teardown logic are conceptually wrong, but
exist due to insufficient structure in the existing logic.

Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
fixes the memory leak.

The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to be
as tolerant as possible, with the minimum number of safety checks possible.
In particular, drop the mfn_valid() check - if junk is in these fields, then
Xen is going to explode anyway.

Reported-by: Michał Leszczyński 
Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Roger Pau Monné 
CC: Wei Liu 
CC: George Dunlap 
CC: Tim Deegan 
CC: Michał Leszczyński 

This is a minimal patch which ought to be safe to backport.  The whole cleanup
infrastructure is a mess.
---
 xen/arch/x86/domain.c   |  1 +
 xen/arch/x86/mm/hap/hap.c   | 39 ++-
 xen/arch/x86/mm/paging.c|  8 +++
 xen/arch/x86/mm/shadow/common.c | 52 -
 xen/include/asm-x86/hap.h   |  1 +
 xen/include/asm-x86/paging.h|  3 ++-
 xen/include/asm-x86/shadow.h|  3 ++-
 7 files changed, 67 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index e8e91cf080..b8f5b1f5b4 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -603,6 +603,7 @@ int arch_vcpu_create(struct vcpu *v)
 return rc;
 
  fail:
+paging_vcpu_teardown(v);
 vcpu_destroy_fpu(v);
 xfree(v->arch.msrs);
 v->arch.msrs = NULL;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 4eedd1a995..737821a166 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -563,30 +563,37 @@ void hap_final_teardown(struct domain *d)
 paging_unlock(d);
 }
 
+void hap_vcpu_teardown(struct vcpu *v)
+{
+struct domain *d = v->domain;
+mfn_t mfn;
+
+paging_lock(d);
+
+if ( !paging_mode_hap(d) || !v->arch.paging.mode )
+goto out;
+
+mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
+if ( mfn_x(mfn) )
+hap_destroy_monitor_table(v, mfn);
+v->arch.hvm.monitor_table = pagetable_null();
+
+ out:
+paging_unlock(d);
+}
+
 void hap_teardown(struct domain *d, bool *preempted)
 {
 struct vcpu *v;
-mfn_t mfn;
 
 ASSERT(d->is_dying);
 ASSERT(d != current->domain);
 
-paging_lock(d); /* Keep various asserts happy */
+/* TODO - Remove when the teardown path is better structured. */
+for_each_vcpu ( d, v )
+hap_vcpu_teardown(v);
 
-if ( paging_mode_enabled(d) )
-{
-/* release the monitor table held by each vcpu */
-for_each_vcpu ( d, v )
-{
-if ( paging_get_hostmode(v) && paging_mode_external(d) )
-{
-mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
-if ( mfn_valid(mfn) && (mfn_x(mfn) != 0) )
-hap_destroy_monitor_table(v, mfn);
-v->arch.hvm.monitor_table = pagetable_null();
-}
-}
-}
+paging_lock(d); /* Keep various asserts happy */
 
 if ( d->arch.paging.hap.total_pages != 0 )
 {
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 695372783d..d5e967fcd5 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -794,6 +794,14 @@ long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 }
 #endif /* CONFIG_PV_SHIM_EXCLUSIVE */
 
+void paging_vcpu_teardown(struct vcpu *v)
+{
+if ( hap_enabled(v->domain) )
+hap_vcpu_teardown(v);
+else
+shadow_vcpu_teardown(v);
+}
+
 /* Call when destroying a domain */
 int paging_teardown(struct domain *d)
 {
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 7c7204fd34..ea51068530 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2775,6 +2775,32 @@ int shadow_enable(struct domain *d, u32 mode)
 return rv;
 }
 
+void shadow_vcpu_teardown(struct vcpu *v)
+{
+struct domain *d = v->domain;
+
+paging_lock(d);
+
+if ( !paging_mode_shadow(d) || !v->arch.paging.mode )
+goto out;
+
+v->arch.paging.mode->shadow.detach_old_tables(v);
+#ifdef CONFIG_HVM
+if ( shadow_mode_external(d) )
+{
+mfn_t mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
+
+if ( mfn_x(mfn) )
+v->arch.paging.mode->shadow.destroy_monitor_table

Re: [PATCH 6/5] x86/ELF: drop unnecessary volatile from asm()-s in elf_core_save_regs()

2020-09-28 Thread Andrew Cooper
On 28/09/2020 16:04, Jan Beulich wrote:
> There are no hidden side effects here.
>
> Signed-off-by: Jan Beulich 
> ---
> v2: New.
>
> --- a/xen/include/asm-x86/x86_64/elf.h
> +++ b/xen/include/asm-x86/x86_64/elf.h
> @@ -37,26 +37,26 @@ typedef struct {
>  static inline void elf_core_save_regs(ELF_Gregset *core_regs, 
>crash_xen_core_t *xen_core_regs)
>  {
> -asm volatile("movq %%r15,%0" : "=m"(core_regs->r15));
> -asm volatile("movq %%r14,%0" : "=m"(core_regs->r14));
> -asm volatile("movq %%r13,%0" : "=m"(core_regs->r13));
> -asm volatile("movq %%r12,%0" : "=m"(core_regs->r12));
> -asm volatile("movq %%rbp,%0" : "=m"(core_regs->rbp));
> -asm volatile("movq %%rbx,%0" : "=m"(core_regs->rbx));
> -asm volatile("movq %%r11,%0" : "=m"(core_regs->r11));
> -asm volatile("movq %%r10,%0" : "=m"(core_regs->r10));
> -asm volatile("movq %%r9,%0" : "=m"(core_regs->r9));
> -asm volatile("movq %%r8,%0" : "=m"(core_regs->r8));
> -asm volatile("movq %%rax,%0" : "=m"(core_regs->rax));
> -asm volatile("movq %%rcx,%0" : "=m"(core_regs->rcx));
> -asm volatile("movq %%rdx,%0" : "=m"(core_regs->rdx));
> -asm volatile("movq %%rsi,%0" : "=m"(core_regs->rsi));
> -asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
> +asm ( "movq %%r15,%0" : "=m" (core_regs->r15) );
> +asm ( "movq %%r14,%0" : "=m" (core_regs->r14) );
> +asm ( "movq %%r13,%0" : "=m" (core_regs->r13) );
> +asm ( "movq %%r12,%0" : "=m" (core_regs->r12) );
> +asm ( "movq %%rbp,%0" : "=m" (core_regs->rbp) );
> +asm ( "movq %%rbx,%0" : "=m" (core_regs->rbx) );
> +asm ( "movq %%r11,%0" : "=m" (core_regs->r11) );
> +asm ( "movq %%r10,%0" : "=m" (core_regs->r10) );
> +asm ( "movq %%r9,%0" : "=m" (core_regs->r9) );
> +asm ( "movq %%r8,%0" : "=m" (core_regs->r8) );

Any chance we can align these seeing as they're changing?

What about spaces before %0 ?

Either way, Reviewed-by: Andrew Cooper 

> +asm ( "movq %%rax,%0" : "=m" (core_regs->rax) );
> +asm ( "movq %%rcx,%0" : "=m" (core_regs->rcx) );
> +asm ( "movq %%rdx,%0" : "=m" (core_regs->rdx) );
> +asm ( "movq %%rsi,%0" : "=m" (core_regs->rsi) );
> +asm ( "movq %%rdi,%0" : "=m" (core_regs->rdi) );
>  /* orig_rax not filled in for now */
>  asm ( "call 0f; 0: popq %0" : "=m" (core_regs->rip) );
>  core_regs->cs = read_sreg(cs);
> -asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
> -asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
> +asm ( "pushfq; popq %0" : "=m" (core_regs->rflags) );
> +asm ( "movq %%rsp,%0" : "=m" (core_regs->rsp) );
>  core_regs->ss = read_sreg(ss);
>  rdmsrl(MSR_FS_BASE, core_regs->thread_fs);
>  rdmsrl(MSR_GS_BASE, core_regs->thread_gs);
>




Re: [PATCH 1/5] x86: introduce read_sregs() to allow storing to memory directly

2020-09-28 Thread Andrew Cooper
On 28/09/2020 15:49, Jan Beulich wrote:
> On 28.09.2020 14:47, Andrew Cooper wrote:
>> On 28/09/2020 13:05, Jan Beulich wrote:
>>> --- a/xen/include/asm-x86/regs.h
>>> +++ b/xen/include/asm-x86/regs.h
>>> @@ -15,4 +15,18 @@
>>>  (diff == 0);   
>>>\
>>>  })
>>>  
>>> +#define read_sreg(name) ({\
>>> +unsigned int __sel;   \
>>> +asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
>>> +__sel;\
>>> +})
>>> +
>>> +static inline void read_sregs(struct cpu_user_regs *regs)
>>> +{
>>> +asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
>>> +asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
>>> +asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
>>> +asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
>> It occurs to me that reads don't need to be volatile.  There are no side
>> effects.
> I'll do the same for what patches 3 and 5 alter anyway, assuming
> this won't invalidate your R-b there.

3 is fine.  5 is a little more problematic, because there are
serialising side effects, but I suppose we really don't care here.

~Andrew
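
For readers following the volatile discussion, a minimal standalone
sketch of the point (names are illustrative):

    /* Without volatile the compiler may delete or reorder this asm when
     * its output is unused - exactly what we want for a pure register
     * read with no side effects. With volatile, the read would be kept
     * even if nobody ever consumed the value. */
    static inline unsigned long peek_r15(void)
    {
        unsigned long val;

        asm ( "movq %%r15, %0" : "=r" (val) );
        return val;
    }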



Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode

2020-09-28 Thread boris . ostrovsky


On 9/25/20 6:28 PM, Anchal Agarwal wrote:
> On Fri, Sep 25, 2020 at 04:02:58PM -0400, boris.ostrov...@oracle.com wrote:
>> On 9/25/20 3:04 PM, Anchal Agarwal wrote:
>>> On Tue, Sep 22, 2020 at 11:17:36PM +, Anchal Agarwal wrote:
 On Tue, Sep 22, 2020 at 12:18:05PM -0400, boris.ostrov...@oracle.com wrote:
> On 9/21/20 5:54 PM, Anchal Agarwal wrote:

> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
> bandwidth to dive deep into the issue and fix it.
>>
>> So what's the plan there? You first mentioned this issue early this year and 
>> judged by your response it is not clear whether you will ever spend time 
>> looking at it.
>>
> I do want to fix it and did do some debugging earlier this year, just haven't
> gotten back to it. Also, I wanted to understand whether the issue is a blocker
> to this series?


Integrating code with known bugs is less than ideal.


3% failure for this feature seems to be a manageable number from the 
reproducibility perspective --- you should be able to script this and each 
iteration should take way under a minute, no?
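
A reproduction harness along those lines could be sketched as below
(hypothetical and untested; it assumes the host side restores the saved
image automatically, so the 3% failure would show up as the loop hanging
rather than as an error code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (int i = 0; i < 3000; i++) {
            int fd = open("/sys/power/state", O_WRONLY);

            /* The write only returns after a successful resume. */
            if (fd < 0 || write(fd, "disk", 4) != 4) {
                perror("hibernate");
                return 1;
            }
            close(fd);
            printf("iteration %d resumed fine\n", i);
        }
        return 0;
    }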


> I had some theories when debugging around this, like whether the random base
> address picked by KASLR for the resuming kernel mismatches the suspended
> kernel's - but jogging my memory, I didn't find that to be the case.
> Another hunch was that the physical address of the vcpu info registered at
> boot differs from what the suspended kernel has, and that this can cause
> CPUs to get stuck when coming online.


I'd think if this were the case you'd have 100% failure rate. And we are also 
re-registering vcpu info on xen restore and I am not aware of any failures due 
to KASLR.


> The issue was only reproducible 3% of the time out of 3000 runs, hence it's
> hard to just reproduce this.
>
> Moreover, I also wanted to get an insight on whether hibernation works
> correctly with KASLR generally, and it's only Xen causing the issue?


With KASLR being on by default I'd be surprised if it didn't.


-boris




Re: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?

2020-09-28 Thread Anthony PERARD
On Fri, Jul 17, 2020 at 08:48:01AM +0100, Paul Durrant wrote:
> > -Original Message-
> > From: Brian Marcotte 
> > Sent: 16 July 2020 21:24
> > To: Paul Durrant 
> > Cc: p...@xen.org; 'Jules' ; xen-devel@lists.xenproject.org;
> > oleksandr_gryt...@epam.com; w...@xen.org
> > Subject: Re: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?
> > 
> > > Your issue stems from the auto-creation code in xen-block:
> > >
> > > The "aio:rbd:rbd/machine.disk0" string is generated by libxl and does
> > > look a little odd and will fool the parser there, but the error you see
> > > after modifying the string appears to be because QEMU's QMP block
> > > device instantiation code is objecting to a missing parameter. Older
> > > QEMUs circumvented that code which is almost certainly why you don't
> > > see the issue with versions 2 or 3.
> > 
> > Xen 4.13 and 4.14 includes QEMU 4 and 5. They don't work with Ceph/RBD.
> > 
> > Are you saying that xl/libxl is doing the right thing and the problem
> > needs to be fixed in QEMU?
> 
> Unfortunately, from what you describe, it sounds like there is a problem in 
> both. To get something going, you could bring a domain
> up paused and then try manually adding your rbd device using the QMP shell.
> 
> It would be useful if a toolstack maintainer could take a look at this issue 
> in the near future.
> 

Hi,

I did start working on a solution some time ago and produced a patch for
QEMU (attached) which would allow QEMU to parse the aio:rbd:... string
from xenstore.

But I ran into another issue when I tried with nbd (Network Block
Device), QEMU would connect twice to the NBD server and the server I had
didn't like it. Maybe Ceph would allow two connections to the same disk?

The two connections issue is less likely to happen on older QEMU because
it would delay the second connection until the guest connects to the PV
backend, so after the emulated disk has been unplugged (and thus the
first connection disconnected).

Anyway, it would be better to upgrade libxl to be able to create a QEMU
PV backend via QMP or qemu's command line rather than via xenstore, but
I don't think I have time to work on it just yet. But I feel like we are
going to have the same issue that QEMU will try to connect twice to the
Ceph server where this wasn't likely to happen before.

Jules, Brian, could you maybe try the attached QEMU patch and see if that
works?

Cheers,

-- 
Anthony PERARD
From 1b8d77f1f8709a6ef1960111ea022cfb6d74 Mon Sep 17 00:00:00 2001
From: Anthony PERARD 
Date: Fri, 17 Jan 2020 12:05:09 +
Subject: [PATCH] xen-block: Fix parsing of legacy options

Even though the xen-disk PV backend can be instantiated via QMP, we
still need to handle the case where the backend is created via
xenstore. This means that we need to be able to parse legacy disk
options such as "aio:nbd://host:1234/disk".

Signed-off-by: Anthony PERARD 
---
 block.c|  6 ++
 hw/block/xen-block.c   | 25 +
 include/sysemu/block-backend.h |  3 +++
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/block.c b/block.c
index ecd09dbbfd89..13b8690e5006 100644
--- a/block.c
+++ b/block.c
@@ -1705,6 +1705,12 @@ static int bdrv_fill_options(QDict **options, const char *filename,
 
 return 0;
 }
+int bdrv_fill_options_legacy(QDict **options, const char *filename,
+ int *flags, Error **errp)
+{
+return bdrv_fill_options(options, filename, flags, errp);
+}
+
 
 static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
  uint64_t perm, uint64_t shared,
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 879fc310a4c5..1cc97a001e1f 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -28,6 +28,7 @@
 #include "sysemu/iothread.h"
 #include "dataplane/xen-block.h"
 #include "trace.h"
+#include "include/block/qdict.h"
 
 static char *xen_block_get_name(XenDevice *xendev, Error **errp)
 {
@@ -687,7 +688,12 @@ static char *xen_block_blockdev_add(const char *id, QDict *qdict,
 
 trace_xen_block_blockdev_add(node_name);
 
-v = qobject_input_visitor_new(QOBJECT(qdict));
+qdict_flatten(qdict);
+v = qobject_input_visitor_new_flat_confused(qdict, &local_err);
+if (local_err) {
+error_propagate(errp, local_err);
+goto fail;
+}
 visit_type_BlockdevOptions(v, NULL, &options, &local_err);
 visit_free(v);
 
@@ -782,8 +788,14 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
 file_layer = qdict_new();
 driver_layer = qdict_new();
 
-qdict_put_str(file_layer, "driver", "file");
-qdict_put_str(file_layer, "filename", filename);
+int flags = BDRV_O_PROTOCOL | BDRV_O_RDWR;
+if (mode && *mode != 'w') {
+flags &= ~BDRV_O_RDWR;
+}
+bdrv_fill_options_legacy(&file_layer, filename, &flags, &local_err);
+if (local_err)
+goto done;
+
 g_free(filename);
 
 if 

[PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()

2020-09-28 Thread David Hildenbrand
__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
the case for all existing users.

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).

This behavior is desirable for pages that haven't really been touched
lately, so exactly the two users that don't actually read/write page
content, but rather move untouched pages.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range()
in online_pages(). Right now, we always place them to the head of the
free list, resulting in undesirable behavior: Assume we add
individual memory chunks via add_memory() and online them right away to
the NORMAL zone. We create a dependency chain of unmovable allocations
e.g., via the memmap. The memmap of the next chunk will be placed onto
previous chunks - if the last block cannot get offlined+removed, all
dependent ones cannot get offlined+removed. While this can already be
observed with individual DIMMs, it's more of an issue for virtio-mem
(and I suspect also ppc DLPAR).

Document that this should only be used for optimizations, and no code
should rely on this for correctness (in case the order of the freepage
lists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining. free page reporting doesn't care about
physically contiguous ranges, and there are already cases where page
isolation will simply move (physically close) free pages to (currently)
the head of the freelists via move_freepages_block() instead of
shuffling. If this becomes ever relevant, we should shuffle the whole
zone when undoing isolation of larger ranges, and after
free_contig_range().
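
To make the head-vs-tail distinction concrete, a toy model (plain C,
deliberately not kernel code) of why tail placement delays reuse:

    #include <stdio.h>

    int main(void)
    {
        int list[4] = { 0 };
        int n = 0;

        /* Head inserts, as add_to_free_list() would do. */
        for (int pfn = 1; pfn <= 2; pfn++) {
            for (int i = n; i > 0; i--)
                list[i] = list[i - 1];
            list[0] = pfn;
            n++;
        }

        list[n++] = 99;             /* tail insert: the "cold" page */

        for (int i = 0; i < n; i++) /* allocation always pops the head */
            printf("alloc -> %d\n", list[i]);
        /* Prints 2, 1, 99 - the tail-inserted page is handed out last. */
        return 0;
    }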

Reviewed-by: Alexander Duyck 
Reviewed-by: Oscar Salvador 
Cc: Andrew Morton 
Cc: Alexander Duyck 
Cc: Mel Gorman 
Cc: Michal Hocko 
Cc: Dave Hansen 
Cc: Vlastimil Babka 
Cc: Wei Yang 
Cc: Oscar Salvador 
Cc: Mike Rapoport 
Cc: Scott Cheloha 
Cc: Michael Ellerman 
Signed-off-by: David Hildenbrand 
---
 mm/page_alloc.c | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index daab90e960fe..9e3ed4a6f69a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
  */
 #define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ *   to allow for optimizations when handing back either fresh pages
+ *   (memory onlining) or untouched pages (page isolation, free page
+ *   reporting).
+ */
+#define FOP_TO_TAIL((__force fop_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION   (8)
@@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
 done_merging:
set_page_order(page, order);
 
-   if (is_shuffle_order(order))
+   if (fop_flags & FOP_TO_TAIL)
+   to_tail = true;
+   else if (is_shuffle_order(order))
to_tail = shuffle_pick_tail();
else
to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
/* Return isolated page to tail of freelist. */
__free_one_page(page, page_to_pfn(page), zone, order, mt,
-   FOP_SKIP_REPORT_NOTIFY);
+   FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
 }
 
 /*
-- 
2.26.2




[PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()

2020-09-28 Thread David Hildenbrand
Page isolation doesn't actually touch the pages, it simply isolates
pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.

We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(), let's do it in any case
(e.g., if order <= pageblock_order) and document the behavior.

Add a "to_tail" parameter to move_freepages_block() but introduce a
new move_to_free_list_tail() - similar to add_to_free_list_tail().

This change results in all pages getting onlined via online_pages() to
be placed to the tail of the freelist.
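
A hypothetical call site, matching the new prototype below:

    /* Undoing isolation: move the whole pageblock back to its old
     * migratetype, now explicitly to the tail of the target freelist. */
    nr_moved = move_freepages_block(zone, page, migratetype,
                                    true /* to_tail */, NULL);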

Reviewed-by: Oscar Salvador 
Cc: Andrew Morton 
Cc: Alexander Duyck 
Cc: Mel Gorman 
Cc: Michal Hocko 
Cc: Dave Hansen 
Cc: Vlastimil Babka 
Cc: Wei Yang 
Cc: Oscar Salvador 
Cc: Mike Rapoport 
Cc: Scott Cheloha 
Cc: Michael Ellerman 
Signed-off-by: David Hildenbrand 
---
 include/linux/page-isolation.h |  4 ++--
 mm/page_alloc.c| 35 +++---
 mm/page_isolation.c| 12 +---
 3 files changed, 35 insertions(+), 16 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..3eca9b3c5305 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
 struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 int migratetype, int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-   int migratetype, int *num_movable);
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
+bool to_tail, int *num_movable);
 
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9e3ed4a6f69a..d5a5f528b8ca 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
list_move(&page->lru, &area->free_list[migratetype]);
 }
 
+/* Used for pages which are on another list */
+static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
+ unsigned int order, int migratetype)
+{
+   struct free_area *area = &zone->free_area[order];
+
+   list_move_tail(&page->lru, &area->free_list[migratetype]);
+}
+
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
   unsigned int order)
 {
@@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-static int move_freepages(struct zone *zone,
- struct page *start_page, struct page *end_page,
- int migratetype, int *num_movable)
+static int move_freepages(struct zone *zone, struct page *start_page,
+ struct page *end_page, int migratetype,
+ bool to_tail, int *num_movable)
 {
struct page *page;
unsigned int order;
@@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
VM_BUG_ON_PAGE(page_zone(page) != zone, page);
 
order = page_order(page);
-   move_to_free_list(page, zone, order, migratetype);
+   if (to_tail)
+   move_to_free_list_tail(page, zone, order, migratetype);
+   else
+   move_to_free_list(page, zone, order, migratetype);
page += 1 << order;
pages_moved += 1 << order;
}
@@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-   int migratetype, int *num_movable)
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
+bool to_tail, int *num_movable)
 {
unsigned long start_pfn, end_pfn;
struct page *start_page, *end_page;
@@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
return 0;
 
return move_freepages(zone, start_page, end_page, migratetype,
-   num_movable);
+ to_tail, num_movable);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
@@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
if (!whole_block)
goto single_page;
 
-   free_pages = move_freepages_block(zone, page, start_type,
-   &mov

[PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling

2020-09-28 Thread David Hildenbrand
As we no longer shuffle via generic_online_page() and when undoing
isolation, we can simplify the comment.

We now effectively shuffle only once (properly) when onlining new
memory.

Cc: Andrew Morton 
Cc: Alexander Duyck 
Cc: Mel Gorman 
Cc: Michal Hocko 
Cc: Dave Hansen 
Cc: Vlastimil Babka 
Cc: Wei Yang 
Cc: Oscar Salvador 
Cc: Mike Rapoport 
Signed-off-by: David Hildenbrand 
---
 mm/memory_hotplug.c | 11 ---
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9db80ee29caa..c589bd8801bb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -859,13 +859,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
 
/*
-* When exposing larger, physically contiguous memory areas to the
-* buddy, shuffling in the buddy (when freeing onlined pages, putting
-* them either to the head or the tail of the freelist) is only helpful
-* for maintaining the shuffle, but not for creating the initial
-* shuffle. Shuffle the whole zone to make sure the just onlined pages
-* are properly distributed across the whole freelist. Make sure to
-* shuffle once pageblocks are no longer isolated.
+* Freshly onlined pages aren't shuffled (e.g., all pages are placed to
+* the tail of the freelist when undoing isolation). Shuffle the whole
+* zone to make sure the just onlined pages are properly distributed
+* across the whole freelist - to create an initial shuffle.
 */
shuffle_zone(zone);
 
-- 
2.26.2




[PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag

2020-09-28 Thread David Hildenbrand
Let's prepare for additional flags and avoid long parameter lists of bools.
Follow-up patches will also make use of the flags in __free_pages_ok(),
however, I wasn't able to come up with a better name for the type - should
be good enough for internal purposes.
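
For readers unfamiliar with the idiom, a short sketch of what the sparse
__bitwise annotation buys (the helper below is made up; the typedef and
flag definitions are from the patch):

    typedef int __bitwise fop_t;

    #define FOP_NONE               ((__force fop_t)0)
    #define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))

    static inline bool fop_should_notify(fop_t fop_flags)
    {
        return !(fop_flags & FOP_SKIP_REPORT_NOTIFY);
    }

    /* fop_should_notify(true);     - sparse ("make C=1") warns here */
    /* fop_should_notify(FOP_NONE); - type-checks cleanly */

At runtime fop_t is just an int; the annotation only matters to sparse.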

Reviewed-by: Alexander Duyck 
Reviewed-by: Vlastimil Babka 
Reviewed-by: Oscar Salvador 
Cc: Andrew Morton 
Cc: Alexander Duyck 
Cc: Mel Gorman 
Cc: Michal Hocko 
Cc: Dave Hansen 
Cc: Vlastimil Babka 
Cc: Wei Yang 
Cc: Oscar Salvador 
Cc: Mike Rapoport 
Signed-off-by: David Hildenbrand 
---
 mm/page_alloc.c | 28 
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df90e3654f97..daab90e960fe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -77,6 +77,18 @@
 #include "shuffle.h"
 #include "page_reporting.h"
 
+/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fop_t;
+
+/* No special request */
+#define FOP_NONE   ((__force fop_t)0)
+
+/*
+ * Skip free page reporting notification for the (possibly merged) page. (will
+ * *not* mark the page reported, only skip the notification).
+ */
+#define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION   (8)
@@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
  * -- nyc
  */
 
-static inline void __free_one_page(struct page *page,
-   unsigned long pfn,
-   struct zone *zone, unsigned int order,
-   int migratetype, bool report)
+static inline void __free_one_page(struct page *page, unsigned long pfn,
+  struct zone *zone, unsigned int order,
+  int migratetype, fop_t fop_flags)
 {
struct capture_control *capc = task_capc(zone);
unsigned long buddy_pfn;
@@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
add_to_free_list(page, zone, order, migratetype);
 
/* Notify page reporting subsystem of freed page */
-   if (report)
+   if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
page_reporting_notify_free(order);
 }
 
@@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
if (unlikely(isolated_pageblocks))
mt = get_pageblock_migratetype(page);
 
-   __free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+   __free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
trace_mm_page_pcpu_drain(page, 0, mt);
}
spin_unlock(&zone->lock);
@@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
is_migrate_isolate(migratetype))) {
migratetype = get_pfnblock_migratetype(page, pfn);
}
-   __free_one_page(page, pfn, zone, order, migratetype, true);
+   __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
spin_unlock(&zone->lock);
 }
 
@@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
lockdep_assert_held(&zone->lock);
 
/* Return isolated page to tail of freelist. */
-   __free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+   __free_one_page(page, page_to_pfn(page), zone, order, mt,
+   FOP_SKIP_REPORT_NOTIFY);
 }
 
 /*
-- 
2.26.2




[PATCH v1 0/5] mm: place pages to the freelist tail when onling and undoing isolation

2020-09-28 Thread David Hildenbrand
When adding separate memory blocks via add_memory*() and onlining them
immediately, the metadata (especially the memmap) of the next block will be
placed onto one of the just added+onlined blocks. This creates a chain
of unmovable allocations: if the last memory block cannot get
offlined+removed, neither can any of the blocks it depends on. We directly
have unmovable allocations all over the place.

This can be observed quite easily using virtio-mem, however, it can also
be observed when using DIMMs. The freshly onlined pages will usually be
placed to the head of the freelists, meaning they will be allocated next,
turning the just-added memory usually immediately un-removable. The
fresh pages are cold, prefering to allocate others (that might be hot)
also feels to be the natural thing to do.

It also applies to the Hyper-V balloon, Xen balloon, and ppc64 dlpar: when
adding separate, successive memory blocks, each memory block will have
unmovable allocations on them - for example gigantic pages will fail to
allocate.

While the ZONE_NORMAL doesn't provide any guarantees that memory can get
offlined+removed again (any kind of fragmentation with unmovable
allocations is possible), there are many scenarios (hotplugging a lot of
memory, running workload, hotunplug some memory/as much as possible) where
we can offline+remove quite a lot with this patchset.

a) To visualize the problem, a very simple example:

Start a VM with 4GB and 8GB of virtio-mem memory:

 [root@localhost ~]# lsmem
 RANGE SIZE  STATE REMOVABLE  BLOCK
 0x-0xbfff   3G online   yes   0-23
 0x0001-0x00033fff   9G online   yes 32-103

 Memory block size:   128M
 Total online memory:  12G
 Total offline memory:  0B

Then try to unplug as much as possible using virtio-mem. Observe which
memory blocks are still around. Without this patch set:

 [root@localhost ~]# lsmem
 RANGE  SIZE  STATE REMOVABLE   BLOCK
 0x-0xbfff3G online   yes0-23
 0x0001-0x00013fff1G online   yes   32-39
 0x00014800-0x00014fff  128M online   yes  41
 0x00015800-0x00015fff  128M online   yes  43
 0x00016800-0x00016fff  128M online   yes  45
 0x00017800-0x00017fff  128M online   yes  47
 0x00018800-0x000197ff  256M online   yes   49-50
 0x0001a000-0x0001a7ff  128M online   yes  52
 0x0001b000-0x0001b7ff  128M online   yes  54
 0x0001c000-0x0001c7ff  128M online   yes  56
 0x0001d000-0x0001d7ff  128M online   yes  58
 0x0001e000-0x0001e7ff  128M online   yes  60
 0x0001f000-0x0001f7ff  128M online   yes  62
 0x0002-0x000207ff  128M online   yes  64
 0x00021000-0x000217ff  128M online   yes  66
 0x00022000-0x000227ff  128M online   yes  68
 0x00023000-0x000237ff  128M online   yes  70
 0x00024000-0x000247ff  128M online   yes  72
 0x00025000-0x000257ff  128M online   yes  74
 0x00026000-0x000267ff  128M online   yes  76
 0x00027000-0x000277ff  128M online   yes  78
 0x00028000-0x000287ff  128M online   yes  80
 0x00029000-0x000297ff  128M online   yes  82
 0x0002a000-0x0002a7ff  128M online   yes  84
 0x0002b000-0x0002b7ff  128M online   yes  86
 0x0002c000-0x0002c7ff  128M online   yes  88
 0x0002d000-0x0002d7ff  128M online   yes  90
 0x0002e000-0x0002e7ff  128M online   yes  92
 0x0002f000-0x0002f7ff  128M online   yes  94
 0x0003-0x000307ff  128M online   yes  96
 0x00031000-0x000317ff  128M online   yes  98
 0x00032000-0x000327ff  128M online   yes 100
 0x00033000-0x00033fff  256M online   yes 102-103

 Memory block size:   128M
 Total online memory: 8.1G
 Total offline memory:  0B

With this patch set:

 [root@localhost ~]# lsmem
 RANGE SIZE  STATE REMOVABLE BLOCK
 0x-0xbfff   3G online   yes  0-23
 0x0001-0x00013fff   1G online   yes 32-39

 Memory block size:   128M
 Total online memory:   4G
 Total offline memory:  0B

All memory can get unplugged, all memory block can get removed. Of course,
no workload ran and the system was basically idle, but it highlights the
issue - the fairly deterministic chain of unmovable allocations. When a
huge page for the 

[PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()

2020-09-28 Thread David Hildenbrand
__free_pages_core() is used when exposing fresh memory to the buddy
during system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:

1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
   balloon and virtio-mem), when parts of a section are kept
   fake-offline to be fake-onlined later on.

In 1, we already place pages to the tail of the freelist. Pages will be
freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
via undo_isolate_page_range().

In 2, we currently don't implement a proper rule. In case of virtio-mem,
where we currently always online MAX_ORDER - 1 pages, the pages will be
placed to the HEAD of the freelist - undesirable. While the Hyper-V
balloon calls generic_online_page() with single pages, usually it will
call it on successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelists and avoid
the PCP. In __free_pages_core(), remove the now superfluous call to
set_page_refcounted() and add a comment regarding page initialization and
the refcount.

Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
is usually of limited use in virtualized environments), we might want to
shuffle after a sequence of generic_online_page() calls in the
relevant callers.

Reviewed-by: Vlastimil Babka 
Reviewed-by: Oscar Salvador 
Cc: Andrew Morton 
Cc: Alexander Duyck 
Cc: Mel Gorman 
Cc: Michal Hocko 
Cc: Dave Hansen 
Cc: Vlastimil Babka 
Cc: Wei Yang 
Cc: Oscar Salvador 
Cc: Mike Rapoport 
Cc: "K. Y. Srinivasan" 
Cc: Haiyang Zhang 
Cc: Stephen Hemminger 
Cc: Wei Liu 
Signed-off-by: David Hildenbrand 
---
 mm/page_alloc.c | 37 -
 1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d5a5f528b8ca..8a2134fe9947 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
 unsigned int pageblock_order __read_mostly;
 #endif
 
-static void __free_pages_ok(struct page *page, unsigned int order);
+static void __free_pages_ok(struct page *page, unsigned int order,
+   fop_t fop_flags);
 
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
@@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
 void free_compound_page(struct page *page)
 {
mem_cgroup_uncharge(page);
-   __free_pages_ok(page, compound_order(page));
+   __free_pages_ok(page, compound_order(page), FOP_NONE);
 }
 
 void prep_compound_page(struct page *page, unsigned int order)
@@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
spin_unlock(&zone->lock);
 }
 
-static void free_one_page(struct zone *zone,
-   struct page *page, unsigned long pfn,
-   unsigned int order,
-   int migratetype)
+static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
+ unsigned int order, int migratetype, fop_t fop_flags)
 {
spin_lock(&zone->lock);
if (unlikely(has_isolate_pageblock(zone) ||
is_migrate_isolate(migratetype))) {
migratetype = get_pfnblock_migratetype(page, pfn);
}
-   __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
+   __free_one_page(page, pfn, zone, order, migratetype, fop_flags);
spin_unlock(&zone->lock);
 }
 
@@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
}
 }
 
-static void __free_pages_ok(struct page *page, unsigned int order)
+static void __free_pages_ok(struct page *page, unsigned int order,
+   fop_t fop_flags)
 {
unsigned long flags;
int migratetype;
@@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
migratetype = get_pfnblock_migratetype(page, pfn);
local_irq_save(flags);
__count_vm_events(PGFREE, 1 << order);
-   free_one_page(page_zone(page), page, pfn, order, migratetype);
+   free_one_page(page_zone(page), page, pfn, order, migratetype,
+ fop_flags);
local_irq_restore(flags);
 }
 
@@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
struct page *p = page;
unsigned int loop;
 
+   /*
+* When initializing the memmap, init_single_page() sets the refcount
+* of all pages to 1 ("allocated"/"not free"). We have to set the
+* refcount of all involved pages to 0.
+*/
prefetchw(p);
for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
prefetchw(p + 1);
@@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
set_page_count(p, 0);
 
atomic_long_add(nr_

Re: [PATCH 3/3] x86/pv: Inject #UD for missing SYSCALL callbacks

2020-09-28 Thread Andrew Cooper
On 24/09/2020 15:56, Jan Beulich wrote:
> On 23.09.2020 12:18, Andrew Cooper wrote:
>> Despite appearing to be a deliberate design choice of early PV64, the
>> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
>> testability problem for Xen.  Furthermore, the behaviour is undocumented,
>> bizarre, and inconsistent with related behaviour in Xen, and very liable to
>> introduce a security vulnerability into a PV guest if the author hasn't
>> studied Xen's assembly code in detail.
>>
>> There are two different bugs here.
>>
>> 1) The current logic confuses the registered entrypoints, and may deliver a
>>SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>>entrypoint is registered.
>>
>>This has been the case ever since 2007 (c/s cd75d47348b) but up until
>>2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>>a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
> I'm not sure what you derive the last half sentence from. To a 32-bit
> PV guest, nothing can make things look like being 64-bit.

Right, but what part of this discussion is relevant to 32bit PV guests,
when we're discussing junk data being passed to the 64bit SYSCALL entry?

> And as you
> did say in your 2018 change, FLAT_KERNEL_SS == FLAT_USER_SS32.

And? Mode is determined by CS, not SS.  A kernel suffering this failure
will find a CS claiming to be FLAT_RING1_DS/RPL3, and not
FLAT_COMPAT_USER_CS.

Even if we presume for a moment that multiplexing was a sensible plan,
there were 13 years where you couldn't rationally distinguish the two
conditions.

Considering the very obvious chaos which occurs when you try to
HYPERCALL_iret with the bogus frame, either no one ever encountered it,
or everyone used the Linux way which was to blindly overwrite Xen's
selectors with the knowledge (and by this, I mean expectation) that the
two entrypoints distinguished the originating mode.

Linux doesn't go wrong because it registers both entrypoints, but
anything else using similar logic (and only one registered entrypoint)
would end up returning to 32bit userspace in 64bit mode.

> As to the "confusion" of entry points - before the compat mode entry
> path was introduced, a 64-bit guest could only register a single
> entry point.

The fact that MSR_LSTAR and MSR_CSTAR are separate in the AMD64 spec is
a very good hint that that is how software should/would expect things to
behave.

The timing and content of c/s 02410e06fea7, which introduced the first
use of SYSCALL, looks suspiciously like it was designed to the Intel
manual, seeing as it failed to configure MSR_CSTAR entirely.

The CSTAR "fix" came later in c/s 6c94cfd1491 "Various bug fixes", which
introduced the confusion of the two entrypoints, and still hadn't been
tested on AMD as it would return to 32bit userspace in 64bit mode.

c/s 091e799a840c was the commit which introduced the syscall entrypoint.

> Hence guests at the time had to multiplex 32- and 64-bit
> user mode entry from this one code path. In order to avoid regressing
> any such guest, the falling back to using the 64-bit entry point was
> chosen. Effectively what you propose is to regress such guests now,
> rather than back then.

I completely believe that you deliberately avoided changing the existing
behaviour at the time.

I just don't find it credible that the multiplexing was a deliberate and
informed design choice originally, when it looks very much like an
accident, and was so broken for more than a decade following.

I'm not trying to ascribe blame.  I can see exactly how this happened,
especially given how broken 32bit SYSCALL was on AMD (how many OSes
realised they needed to have #DB in a task gate to be safe, considering
that basically the same bug took everyone by surprise a couple of years
ago).  32bit code never used SYSCALL, so multiplexing never got used in
practice, which is why the bugs remained hidden for 13 years, and which
is why changing the behaviour now doesn't break anything.
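
For reference, a sketch of what registering both entrypoints looks like
from the guest side, based on the public callback interface
(xen/include/public/callback.h); the entry point symbols are
placeholders:

    static void register_syscall_entrypoints(void)
    {
        struct callback_register lstar = {
            .type    = CALLBACKTYPE_syscall,
            .address = (unsigned long)entry_syscall_64,   /* placeholder */
            .flags   = CALLBACKF_mask_events,
        };
        struct callback_register cstar = {
            .type    = CALLBACKTYPE_syscall32,
            .address = (unsigned long)entry_syscall_32,   /* placeholder */
            .flags   = CALLBACKF_mask_events,
        };

        /* With both registered, Xen - not the guest kernel -
         * distinguishes the originating mode, so no multiplexing is
         * needed, which is what keeps Linux out of trouble here. */
        HYPERVISOR_callback_op(CALLBACKOP_register, &lstar);
        HYPERVISOR_callback_op(CALLBACKOP_register, &cstar);
    }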

>>Xen would malfunction under these circumstances, if it were a PV guest.
>>Linux would as well, but PVOps has always registered both entrypoints and
>>discarded the Xen-provided selectors.  NetBSD really does malfunction as a
>>consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).
>>
>> 2) In the case that neither SYSCALL callbacks are registered, the guest will
>>be crashed when userspace executes a SYSCALL instruction, which is a
>>userspace => kernel DoS.
>>
>>This has been the case ever since the introduction of 64bit PV support, 
>> but
>>behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which yield
>>#GP/#UD in userspace before the callback is registered, and are therefore
>>safe by default.
> I agree this part is an improvement.
>
>> This change does constitute a change in the PV ABI, for corner cases of a PV
>> guest kernel registering neither callback, or not register

Re: [PATCH 5/5] x86/ELF: eliminate pointless local variable from elf_core_save_regs()

2020-09-28 Thread Andrew Cooper
On 28/09/2020 13:07, Jan Beulich wrote:
> We can just as well specify the CRn structure fields directly in the
> asm()s, just like done for all other ones.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 



Re: [PATCH 4/5] x86/ELF: also record FS/GS bases in elf_core_save_regs()

2020-09-28 Thread Andrew Cooper
On 28/09/2020 13:06, Jan Beulich wrote:
> Signed-off-by: Jan Beulich 

Any idea why this wasn't done before?  At a minimum, I'd be tempted to
put a sentence in the commit message saying "no idea why this wasn't
done before".

Reviewed-by: Andrew Cooper 

>
> --- a/xen/include/asm-x86/x86_64/elf.h
> +++ b/xen/include/asm-x86/x86_64/elf.h
> @@ -1,6 +1,7 @@
>  #ifndef __X86_64_ELF_H__
>  #define __X86_64_ELF_H__
>  
> +#include 
>  #include 
>  
>  typedef struct {
> @@ -59,8 +60,8 @@ static inline void elf_core_save_regs(EL
>  asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
>  asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
>  asm volatile("movl %%ss, %%eax;" :"=a"(core_regs->ss));
> -/* thread_fs not filled in for now */
> -/* thread_gs not filled in for now */
> +rdmsrl(MSR_FS_BASE, core_regs->thread_fs);
> +rdmsrl(MSR_GS_BASE, core_regs->thread_gs);
>  core_regs->ds = read_sreg(ds);
>  core_regs->es = read_sreg(es);
>  core_regs->fs = read_sreg(fs);
>




Re: [PATCH 3/5] x86/ELF: don't store function pointer in elf_core_save_regs()

2020-09-28 Thread Andrew Cooper
On 28/09/2020 13:06, Jan Beulich wrote:
> This keeps at least gcc 10 from generating a separate function instance
> in common/kexec.o alongside the inlining of the function in its sole
> caller. I also think putting the address of the actual code storing the
> registers is a better indication to consumers than that of an otherwise
> unreferenced function.

Hmm - that's unfortunate.

elf_core_save_regs is certainly a useful name to spot in a backtrace.

> Signed-off-by: Jan Beulich 
>
> --- a/xen/include/asm-x86/x86_64/elf.h
> +++ b/xen/include/asm-x86/x86_64/elf.h
> @@ -54,7 +54,7 @@ static inline void elf_core_save_regs(EL
>  asm volatile("movq %%rsi,%0" : "=m"(core_regs->rsi));
>  asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
>  /* orig_rax not filled in for now */
> -core_regs->rip = (unsigned long)elf_core_save_regs;
> +asm volatile("call 0f; 0: popq %0" : "=m" (core_regs->rip));

lea 0(%rip) will be faster to execute, and this is 64bit code specifically.

Either way, Reviewed-by: Andrew Cooper 
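
For reference, the suggested alternative would be roughly (a sketch;
like the call/pop pair, RIP-relative LEA yields the address of the
following instruction, though the destination must now be a register):

    asm ( "lea 0(%%rip), %0" : "=r" (core_regs->rip) );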

>  core_regs->cs = read_sreg(cs);
>  asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
>  asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
>




Re: [PATCH 2/5] x86/ELF: don't open-code read_sreg()

2020-09-28 Thread Andrew Cooper
On 28/09/2020 13:05, Jan Beulich wrote:
> Signed-off-by: Jan Beulich 
>
> --- a/xen/include/asm-x86/x86_64/elf.h
> +++ b/xen/include/asm-x86/x86_64/elf.h
> @@ -1,6 +1,8 @@
>  #ifndef __X86_64_ELF_H__
>  #define __X86_64_ELF_H__
>  
> +#include 
> +
>  typedef struct {
>  unsigned long r15;
>  unsigned long r14;
> @@ -53,16 +55,16 @@ static inline void elf_core_save_regs(EL
>  asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
>  /* orig_rax not filled in for now */
>  core_regs->rip = (unsigned long)elf_core_save_regs;
> -asm volatile("movl %%cs, %%eax;" :"=a"(core_regs->cs));
> +core_regs->cs = read_sreg(cs);
>  asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
>  asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
>  asm volatile("movl %%ss, %%eax;" :"=a"(core_regs->ss));

Another one here.

With that fixed, Reviewed-by: Andrew Cooper 

>  /* thread_fs not filled in for now */
>  /* thread_gs not filled in for now */
> -asm volatile("movl %%ds, %%eax;" :"=a"(core_regs->ds));
> -asm volatile("movl %%es, %%eax;" :"=a"(core_regs->es));
> -asm volatile("movl %%fs, %%eax;" :"=a"(core_regs->fs));
> -asm volatile("movl %%gs, %%eax;" :"=a"(core_regs->gs));
> +core_regs->ds = read_sreg(ds);
> +core_regs->es = read_sreg(es);
> +core_regs->fs = read_sreg(fs);
> +core_regs->gs = read_sreg(gs);
>  
>  asm volatile("mov %%cr0, %0" : "=r" (tmp) : );
>  xen_core_regs->cr0 = tmp;




[xen-unstable-smoke test] 155022: regressions - FAIL

2020-09-28 Thread osstest service owner
flight 155022 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155022/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 154728

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl  13 migrate-support-check            fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check        fail   never pass

version targeted for testing:
 xen  4bdbf746ac9152e70f264f87db4472707da805ce
baseline version:
 xen  5bcac985498ed83d89666959175ca9c9ed561ae1

Last test of basis   154728  2020-09-24 21:01:24 Z3 days
Testing same since   155022  2020-09-28 14:00:30 Z0 days1 attempts


People who touched revisions under test:
  Jan Beulich 
  Julien Grall 
  Marek Marczykowski-Górecki 
  Roger Pau Monné 

jobs:
 build-arm64-xsm  pass
 build-amd64  fail
 build-armhf  pass
 build-amd64-libvirt  blocked 
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    blocked 
 test-amd64-amd64-libvirt blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 4bdbf746ac9152e70f264f87db4472707da805ce
Author: Marek Marczykowski-Górecki 
Date:   Mon Sep 28 10:43:10 2020 +0200

x86/S3: fix shadow stack resume path

Fix the resume path to load the shadow stack pointer from saved_ssp (not
saved_rsp), to match what suspend path does.

Fixes: 633ecc4a7cb2 ("x86/S3: Save and restore Shadow Stack configuration")
Backport: 4.14
Signed-off-by: Marek Marczykowski-Górecki 
Reviewed-by: Jan Beulich 

commit 28fb8cf323dd93f59a9c851c93ba9b79de8b1c4e
Author: Roger Pau Monné 
Date:   Mon Sep 28 10:42:29 2020 +0200

x86/iommu: remove code to fetch MSI message from remap table

Remove the code to compose a MSI message based on the information from
the MSI registers and the data in the interrupt remapping table.
Since the removal of read_msi_msg and its user there's no longer a
need for such code, as the last written (untranslated) MSI message is
cached internally by Xen.

Suggested-by: Jan Beulich 
Signed-off-by: Roger Pau Monné 
Reviewed-by: Andrew Cooper 

commit f9ffd20f946c0315937f85d2f124a9bc4be49473
Author: Roger Pau Monné 
Date:   Mon Sep 28 10:41:48 2020 +0200

x86/hpet: remove hpet_msi_read

It's dead code, even more now that read_msi_msg has been removed.

Suggested-by: Jan Beulich 
Signed-off-by: Roger Pau Monné 
Reviewed-by: Andrew Cooper 

commit fe41405f5ee650d3fe39105cf59193b1494cdcdc
Author: Jan Beulich 
Date:   Mon Sep 28 10:40:53 2020 +0200

common/Kconfig: sort HAS_*

Later additions look to have been put at the end, with MEM_ACCESS*
somewhere in the middle. Re-sort this part of the file, in the hope that
future additions will be made noticing the intentions here.

Signed-off-by: Jan Beulich 
Acked-by: Julien Grall 

commit 643e2f3cbb3b607f3365b230f439845e9bf113b0
Author: Jan Beulich 
Date:   Mon Sep 28 10:39:47 2020 +0200

EFI: some easy constification

Inspired by some of Trammell's suggestions, this harvests some low
hanging fruit, without needing to be concerned about the definitions of
the EFI interfaces themselves.

Sign

Re: [PATCH 1/5] x86: introduce read_sregs() to allow storing to memory directly

2020-09-28 Thread Andrew Cooper
On 28/09/2020 13:05, Jan Beulich wrote:
> --- a/xen/include/asm-x86/regs.h
> +++ b/xen/include/asm-x86/regs.h
> @@ -15,4 +15,18 @@
>  (diff == 0); 
>  \
>  })
>  
> +#define read_sreg(name) ({\
> +unsigned int __sel;   \
> +asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
> +__sel;\
> +})
> +
> +static inline void read_sregs(struct cpu_user_regs *regs)
> +{
> +asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
> +asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
> +asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
> +asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );

It occurs to me that reads don't need to be volatile.  There are no side
effects.

With that fixed, Reviewed-by: Andrew Cooper 



Re: [PATCH 6/5] x86/ELF: drop unnecessary volatile from asm()-s in elf_core_save_regs()

2020-09-28 Thread Jan Beulich
On 28.09.2020 17:15, Andrew Cooper wrote:
> On 28/09/2020 16:04, Jan Beulich wrote:
>> There are no hidden side effects here.
>>
>> Signed-off-by: Jan Beulich 
>> ---
>> v2: New.
>>
>> --- a/xen/include/asm-x86/x86_64/elf.h
>> +++ b/xen/include/asm-x86/x86_64/elf.h
>> @@ -37,26 +37,26 @@ typedef struct {
>>  static inline void elf_core_save_regs(ELF_Gregset *core_regs, 
>>crash_xen_core_t *xen_core_regs)
>>  {
>> -asm volatile("movq %%r15,%0" : "=m"(core_regs->r15));
>> -asm volatile("movq %%r14,%0" : "=m"(core_regs->r14));
>> -asm volatile("movq %%r13,%0" : "=m"(core_regs->r13));
>> -asm volatile("movq %%r12,%0" : "=m"(core_regs->r12));
>> -asm volatile("movq %%rbp,%0" : "=m"(core_regs->rbp));
>> -asm volatile("movq %%rbx,%0" : "=m"(core_regs->rbx));
>> -asm volatile("movq %%r11,%0" : "=m"(core_regs->r11));
>> -asm volatile("movq %%r10,%0" : "=m"(core_regs->r10));
>> -asm volatile("movq %%r9,%0" : "=m"(core_regs->r9));
>> -asm volatile("movq %%r8,%0" : "=m"(core_regs->r8));
>> -asm volatile("movq %%rax,%0" : "=m"(core_regs->rax));
>> -asm volatile("movq %%rcx,%0" : "=m"(core_regs->rcx));
>> -asm volatile("movq %%rdx,%0" : "=m"(core_regs->rdx));
>> -asm volatile("movq %%rsi,%0" : "=m"(core_regs->rsi));
>> -asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
>> +asm ( "movq %%r15,%0" : "=m" (core_regs->r15) );
>> +asm ( "movq %%r14,%0" : "=m" (core_regs->r14) );
>> +asm ( "movq %%r13,%0" : "=m" (core_regs->r13) );
>> +asm ( "movq %%r12,%0" : "=m" (core_regs->r12) );
>> +asm ( "movq %%rbp,%0" : "=m" (core_regs->rbp) );
>> +asm ( "movq %%rbx,%0" : "=m" (core_regs->rbx) );
>> +asm ( "movq %%r11,%0" : "=m" (core_regs->r11) );
>> +asm ( "movq %%r10,%0" : "=m" (core_regs->r10) );
>> +asm ( "movq %%r9,%0" : "=m" (core_regs->r9) );
>> +asm ( "movq %%r8,%0" : "=m" (core_regs->r8) );
> 
> Any chance we can align these seeing as they're changing?

I wasn't really sure about this - alignment to cover for the
difference between r8 and r9 vs r10-r15 never comes out nicely,
as the padding should really be in the number part of the
names. I'd prefer to leave it as is, while ...

> What about spaces before %0 ?

... I certainly will add these (as I should have noticed their
lack myself).

> Either way, Reviewed-by: Andrew Cooper 

Thanks.

Jan



Re: [PATCH 3/3] x86/pv: Inject #UD for missing SYSCALL callbacks

2020-09-28 Thread Jan Beulich
On 28.09.2020 15:05, Andrew Cooper wrote:
> On 24/09/2020 15:56, Jan Beulich wrote:
>> On 23.09.2020 12:18, Andrew Cooper wrote:
>>> Despite appearing to be a deliberate design choice of early PV64, the
>>> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
>>> testability problem for Xen.  Furthermore, the behaviour is undocumented,
>>> bizarre, and inconsistent with related behaviour in Xen, and very liable to
>>> introduce a security vulnerability into a PV guest if the author hasn't
>>> studied Xen's assembly code in detail.
>>>
>>> There are two different bugs here.
>>>
>>> 1) The current logic confuses the registered entrypoints, and may deliver a
>>>SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>>>entrypoint is registered.
>>>
>>>This has been the case ever since 2007 (c/s cd75d47348b) but up until
>>>2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>>>a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
>> I'm not sure what you derive the last half sentence from. To a 32-bit
>> PV guest, nothing can make things look like being 64-bit.
> 
> Right, but what part of this discussion is relevant to 32bit PV guests,
> when we're discussing junk data being passed to the 64bit SYSCALL entry?

To me your text doesn't make it clear that you only talk about 64-bit
guests there. Talking about 32-bit user space doesn't imply a 64-bit
kernel to me.

>> And as you
>> did say in your 2018 change, FLAT_KERNEL_SS == FLAT_USER_SS32.
> 
> And? Mode is determined by CS, not SS.  A kernel suffering this failure
> will find a CS claiming to be FLAT_RING1_DS/RPL3, and not
> FLAT_COMPAT_USER_CS.

Yet how does this make anyone looking at it think of this as a 64-bit
entry, then?

As to CS vs SS - I think the canonical CPL is SS.DPL, and since
for SS DPL == RPL, also SS.RPL. At least that's what we use in a
number of places.

> Even if we presume for a moment that multiplexing was a sensible plan,
> there were 13 years where you couldn't rationally distinguish the two
> conditions.
> 
> Considering the very obvious chaos which occurs when you try to
> HYPERCALL_iret with the bogus frame, either no one ever encountered it,
> or everyone used the Linux way which was to blindly overwrite Xen's
> selectors with the knowledge (and by this, I mean expectation) that the
> two entrypoints distinguished the originating mode.
> 
> Linux doesn't go wrong because it registers both entrypoints, but
> anything else using similar logic (and only one registered entrypoint)
> would end up returning to 32bit userspace in 64bit mode.
> 
>> As to the "confusion" of entry points - before the compat mode entry
>> path was introduced, a 64-bit guest could only register a single
>> entry point.
> 
> The fact that MSR_LSTAR and MSR_CSTAR are separate in the AMD64 spec is
> a very good hint that that is how software should/would expect things to
> behave.
> 
> The timing and content of c/s 02410e06fea7, which introduced the first
> use of SYSCALL, looks suspiciously like it was designed to the Intel
> manual, seeing as it failed to configure MSR_CSTAR entirely.
> 
> The CSTAR "fix" came later in c/s 6c94cfd1491 "Various bug fixes", which
> introduced the confusion of the two entrypoints, and still hadn't been
> tested on AMD as it would return to 32bit userspace in 64bit mode.
> 
> c/s 091e799a840c was the commit which introduced the syscall entrypoint.
> 
>> Hence guests at the time had to multiplex 32- and 64-bit
>> user mode entry from this one code path. In order to avoid regressing
>> any such guest, the falling back to using the 64-bit entry point was
>> chosen. Effectively what you propose is to regress such guests now,
>> rather than back then.
> 
> I completely believe that you deliberately avoided changing the existing
> behaviour at the time.
> 
> I just don't find it credible that the multiplexing was a deliberate and
> informed design choice originally, when it looks very much like an
> accident, and was so broken for more than a decade following.

The problem here is once again the lack of documentation of the ABI.
As such, the behavior of the implementation, however good or bad the
intent, has been the reference. And hence I don't see us changing the
behavior as a viable thing.

If you can get e.g. Roger to support your position and provide an
ack to this change, I guess I'm willing to accept the change going in
as it is. But I'm afraid I can't give it my R-b.

>>> This change does constitute a change in the PV ABI, for corner cases of a PV
>>> guest kernel registering neither callback, or not registering the 32bit
>>> callback when running on AMD/Hygon hardware.
>>>
>>> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
>>> SYSENTER (safe by default, until explicitly enabled), as well as native
>>> hardware (always delivered to the single applicable callback).
>> Albeit an OS running natively and setting EFER.

Re: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset

2020-09-28 Thread Ian Jackson
Oleksandr Andrushchenko writes ("[PATCH 2/2] libgnttab: Add support for Linux dma-buf offset"):
> From: Oleksandr Andrushchenko 
> 
> Add version 2 of the dma-buf ioctls which adds data_ofs parameter.
> 
> dma-buf is backed by a scatter-gather table and has offset parameter
> which tells where the actual data starts. Relevant ioctls are extended
> to support that offset:
>   - when dma-buf is created (exported) from grant references then
> data_ofs is used to set the offset field in the scatter list
> of the new dma-buf
>   - when dma-buf is imported and grant references provided then
> data_ofs is used to report that offset to user-space

Thanks.  I'm not a DMA expert, but I think this is probably going in
roughly the right direction.  I will probably want a review from a DMA
expert too, but let me get on with my questions:

When you say "the protocol changes are already accepted" I think you
mean the Linux ioctl changes ?  If not, what *do* you mean ?

> +/*
> + * Version 2 of the ioctls adds @data_ofs parameter.
> + *
> + * dma-buf is backed by a scatter-gather table and has offset
> + * parameter which tells where the actual data starts.
> + * Relevant ioctls are extended to support that offset:
> + *   - when dma-buf is created (exported) from grant references then
> + * @data_ofs is used to set the offset field in the scatter list
> + * of the new dma-buf
> + *   - when dma-buf is imported and grant references are provided then
> + * @data_ofs is used to report that offset to user-space
> + */
> +#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2 \
> +_IOC(_IOC_NONE, 'G', 13, \

I think this was copied from a Linux header file ?  If so please quote
the precise file and revision in the commit message.  And be sure to
copy the copyright information appropriately.

> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t 
> domid,
> + uint32_t flags, uint32_t count,
> + const uint32_t *refs,
> + uint32_t *dmabuf_fd, uint32_t 
> data_ofs)
> +{
> +abort();

I'm pretty sure this is wrong.

This leads me to ask about compatibility, both across versions of the
various components, and API compatibility across different platforms.

libxengnttab is supposed to have a stable API and ABI.  This means
that old programs should work with the new library - which I think you
have achieved.

But I think it also means that it should work with new programs, and
the new library, on old kernels.  What is your compatibility story
here ?  What is the intended mode of use by an application ?

And the same application code should be usable, so far as possible,
across different platforms that support Xen.

What fallback would be possible for an application to do if the v2 function
is not available ?  I think that fallback action needs to be
selectable at runtime, to support new userspace on old kernels.

What architectures is the new Linux ioctl available on ?
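
For illustration, the runtime fallback could take roughly this shape (a
sketch only; the wrapper name is made up, and the assumption that an old
kernel fails the unknown ioctl with ENOTTY is exactly that, an
assumption):

    #include <errno.h>
    #include <stdint.h>
    #include <xengnttab.h>

    /* Hypothetical helper: prefer v2, degrade to v1 where possible. */
    static int exp_from_refs_compat(xengnttab_handle *xgt, uint32_t domid,
                                    uint32_t flags, uint32_t count,
                                    const uint32_t *refs, uint32_t *fd,
                                    uint32_t data_ofs)
    {
        int rc = xengnttab_dmabuf_exp_from_refs_v2(xgt, domid, flags,
                                                   count, refs, fd,
                                                   data_ofs);

        /* The v1 call cannot express an offset, so only fall back
         * when none is required. */
        if ( rc && errno == ENOTTY && !data_ofs )
            rc = xengnttab_dmabuf_exp_from_refs(xgt, domid, flags,
                                                count, refs, fd);

        return rc;
    }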


> diff --git a/tools/libs/gnttab/include/xengnttab.h 
> b/tools/libs/gnttab/include/xengnttab.h
> index 111fc88caeb3..0956bd91e0df 100644
> --- a/tools/libs/gnttab/include/xengnttab.h
> +++ b/tools/libs/gnttab/include/xengnttab.h
> @@ -322,12 +322,19 @@ int xengnttab_grant_copy(xengnttab_handle *xgt,
>   * Returns 0 if dma-buf was successfully created and the corresponding
>   * dma-buf's file descriptor is returned in @fd.
>   *
> +
> + * Version 2 also accepts @data_ofs offset of the data in the buffer.
> + *
>   * [1] 
> https://elixir.bootlin.com/linux/latest/source/Documentation/driver-api/dma-buf.rst
>   */
>  int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
> uint32_t flags, uint32_t count,
> const uint32_t *refs, uint32_t *fd);
>  
> +int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +  uint32_t flags, uint32_t count,
> +  const uint32_t *refs, uint32_t *fd,
> +  uint32_t data_ofs);

I think the information about the meaning of @data_ofs must be in the
doc comment.  Indeed, that should be the primary location.

Conversely there is no need to duplicate information between the patch
contents, and the commit message.

Is _v2 really the best name for this ?  Are we likely to want to
extend this again in future ?  Perhaps it should be called ..._offset
or something ?  Please think about this and tell me your opinion.

> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t 
> domid,
> + uint32_t flags, uint32_t count,
> + const uint32_t *refs,
> + uint32_t *dmabuf_fd,
> + uint32_t data_ofs)
> +{
> +   

[PATCH 6/5] x86/ELF: drop unnecessary volatile from asm()-s in elf_core_save_regs()

2020-09-28 Thread Jan Beulich
There are no hidden side effects here.

Signed-off-by: Jan Beulich 
---
v2: New.

--- a/xen/include/asm-x86/x86_64/elf.h
+++ b/xen/include/asm-x86/x86_64/elf.h
@@ -37,26 +37,26 @@ typedef struct {
 static inline void elf_core_save_regs(ELF_Gregset *core_regs, 
   crash_xen_core_t *xen_core_regs)
 {
-asm volatile("movq %%r15,%0" : "=m"(core_regs->r15));
-asm volatile("movq %%r14,%0" : "=m"(core_regs->r14));
-asm volatile("movq %%r13,%0" : "=m"(core_regs->r13));
-asm volatile("movq %%r12,%0" : "=m"(core_regs->r12));
-asm volatile("movq %%rbp,%0" : "=m"(core_regs->rbp));
-asm volatile("movq %%rbx,%0" : "=m"(core_regs->rbx));
-asm volatile("movq %%r11,%0" : "=m"(core_regs->r11));
-asm volatile("movq %%r10,%0" : "=m"(core_regs->r10));
-asm volatile("movq %%r9,%0" : "=m"(core_regs->r9));
-asm volatile("movq %%r8,%0" : "=m"(core_regs->r8));
-asm volatile("movq %%rax,%0" : "=m"(core_regs->rax));
-asm volatile("movq %%rcx,%0" : "=m"(core_regs->rcx));
-asm volatile("movq %%rdx,%0" : "=m"(core_regs->rdx));
-asm volatile("movq %%rsi,%0" : "=m"(core_regs->rsi));
-asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
+asm ( "movq %%r15,%0" : "=m" (core_regs->r15) );
+asm ( "movq %%r14,%0" : "=m" (core_regs->r14) );
+asm ( "movq %%r13,%0" : "=m" (core_regs->r13) );
+asm ( "movq %%r12,%0" : "=m" (core_regs->r12) );
+asm ( "movq %%rbp,%0" : "=m" (core_regs->rbp) );
+asm ( "movq %%rbx,%0" : "=m" (core_regs->rbx) );
+asm ( "movq %%r11,%0" : "=m" (core_regs->r11) );
+asm ( "movq %%r10,%0" : "=m" (core_regs->r10) );
+asm ( "movq %%r9,%0" : "=m" (core_regs->r9) );
+asm ( "movq %%r8,%0" : "=m" (core_regs->r8) );
+asm ( "movq %%rax,%0" : "=m" (core_regs->rax) );
+asm ( "movq %%rcx,%0" : "=m" (core_regs->rcx) );
+asm ( "movq %%rdx,%0" : "=m" (core_regs->rdx) );
+asm ( "movq %%rsi,%0" : "=m" (core_regs->rsi) );
+asm ( "movq %%rdi,%0" : "=m" (core_regs->rdi) );
 /* orig_rax not filled in for now */
 asm ( "call 0f; 0: popq %0" : "=m" (core_regs->rip) );
 core_regs->cs = read_sreg(cs);
-asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
-asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
+asm ( "pushfq; popq %0" : "=m" (core_regs->rflags) );
+asm ( "movq %%rsp,%0" : "=m" (core_regs->rsp) );
 core_regs->ss = read_sreg(ss);
 rdmsrl(MSR_FS_BASE, core_regs->thread_fs);
 rdmsrl(MSR_GS_BASE, core_regs->thread_gs);
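
As a standalone illustration of the rule being applied (a sketch, x86-64,
GCC/Clang inline-asm semantics assumed): an asm() with only output
operands and no hidden side effects does not need "volatile"; the
optimizer is then free to drop or reorder it when the result is unused,
which is exactly right for a pure register snapshot.

    #include <stdio.h>

    static unsigned long current_rsp(void)
    {
        unsigned long rsp;

        /* Pure output, no side effects - hence no "volatile". */
        asm ( "movq %%rsp, %0" : "=r" (rsp) );

        return rsp;
    }

    int main(void)
    {
        printf("rsp = %#lx\n", current_rsp());
        return 0;
    }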




Re: [PATCH 1/5] x86: introduce read_sregs() to allow storing to memory directly

2020-09-28 Thread Jan Beulich
On 28.09.2020 14:47, Andrew Cooper wrote:
> On 28/09/2020 13:05, Jan Beulich wrote:
>> --- a/xen/include/asm-x86/regs.h
>> +++ b/xen/include/asm-x86/regs.h
>> @@ -15,4 +15,18 @@
>>  (diff == 0);
>>   \
>>  })
>>  
>> +#define read_sreg(name) ({\
>> +unsigned int __sel;   \
>> +asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
>> +__sel;\
>> +})
>> +
>> +static inline void read_sregs(struct cpu_user_regs *regs)
>> +{
>> +asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
>> +asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
>> +asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
>> +asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
> 
> It occurs to me that reads don't need to be volatile.  There are no side
> effects.

I'll do the same for what patches 3 and 5 alter anyway, assuming
this won't invalidate your R-b there.

Jan



[PATCH] arm,smmu: match start level of page table walk with P2M

2020-09-28 Thread laurentiu . tudor
From: Laurentiu Tudor 

Don't hardcode the lookup start level of the page table walk to 1
and instead match the one used in P2M. This should fix scenarios
involving SMMU where the start level is different than 1.

Signed-off-by: Laurentiu Tudor 
---
 xen/arch/arm/p2m.c | 2 +-
 xen/drivers/passthrough/arm/smmu.c | 2 +-
 xen/include/asm-arm/p2m.h  | 1 +
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ce59f2b503..0181b09dc0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -18,7 +18,6 @@
 
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
-static unsigned int __read_mostly p2m_root_level;
 #define P2M_ROOT_ORDERp2m_root_order
 #define P2M_ROOT_LEVEL p2m_root_level
 static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
@@ -39,6 +38,7 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
  * restricted by external entity (e.g. IOMMU).
  */
 unsigned int __read_mostly p2m_ipa_bits = 64;
+unsigned int __read_mostly p2m_root_level;
 
 /* Helpers to lookup the properties of each level */
 static const paddr_t level_masks[] =
diff --git a/xen/drivers/passthrough/arm/smmu.c 
b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..85709a136f 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct 
arm_smmu_domain *smmu_domain)
  (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
 
if (!stage1)
-   reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+   reg |= (2 - p2m_root_level) << TTBCR_SL0_SHIFT;
 
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..97b5eada2b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -12,6 +12,7 @@
 
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
+extern unsigned int p2m_root_level;
 
 struct domain;
 
-- 
2.17.1
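
For reference, the "2 - p2m_root_level" expression relies on the SL0
encoding for stage-2 walks with a 4KB granule (assumption: the Armv8
scheme, where SL0 values 0, 1 and 2 select start levels 2, 1 and 0
respectively). A minimal sketch of the mapping:

    /* Sketch only: SL0 field value for a given start level. */
    static unsigned int sl0_for_start_level(unsigned int start_level)
    {
        /* start level 0 -> SL0 2, level 1 -> SL0 1, level 2 -> SL0 0 */
        return 2 - start_level;
    }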




RE: [PATCH v9 0/8] domain context infrastructure

2020-09-28 Thread Lengyel, Tamas
> > Hi Paul,
> > Could you push a git branch somewhere for this series? I would like to
> > see this being integrated with VM forking and if its not too much
> > effort just create the patch for that so that it could be appended to the
> series.
> >
> 
> Hi Tamas,
> 
>   Done. See
> https://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git;a=shortlog;h=refs/h
> eads/domain-save14
> 
>   Cheers,
> 
> Paul

Hi Paul,
I added a small patch that would save & load the PV context from one domain to 
another that would be called during VM forking. Please take a look at 
https://xenbits.xen.org/gitweb/?p=people/tklengyel/xen.git;a=commitdiff;h=1843ca7302e415317fdb9a63b3a4d29a385dc766;hp=8149296fdf80c73727e61cea6fe3251aecf8b333.
 I called the function copy_pv_domaincontext for now as that seemed like the 
most appropriate description for it. Please let me know if this looks good to 
you. I'm still testing it but if everything checks out it would be nice to just 
append this patch to your series.

Thanks,
Tamas


Re: [PATCH 2/2] libxl: do not automatically force detach of block devices

2020-09-28 Thread Ian Jackson
Wei Liu writes ("Re: [PATCH 2/2] libxl: do not automatically force detach of 
block devices"):
> On Mon, Sep 14, 2020 at 12:41:09PM +0200, Roger Pau Monné wrote:
> > Maybe a new function should be introduced instead, that attempts to
> > remove a device gracefully and fail otherwise?
> > 
> > Then none of the current APIs would change, and xl could use this new
> > function to handle VBD removal?
> 
> This sounds fine to me.

I agree.

If there is going to be different default policy for different devices
it ought to be in xl, not libxl, but frankly I think this is an
anomaly.

I suggest we have a new entrypoint that specifies the fallback
behaviour explicitly.  ISTM that there are three possible behaviours:
 - fail if the guest does not cooperate
 - fall back to force remove
 - rip the device out immediately

The last of these would be useful only in rare situations.  IDK if the
length of the timeout (for the first two cases) ought to be a
parameter too.
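
To make the three behaviours concrete, a sketch (all names hypothetical,
not an actual libxl or xl proposal):

    #include <stdbool.h>

    typedef enum {
        REMOVE_COOPERATIVE, /* fail if the guest does not cooperate */
        REMOVE_FALLBACK,    /* fall back to force remove on timeout */
        REMOVE_FORCE,       /* rip the device out immediately */
    } remove_mode;

    /* Returns 0 on success, -1 if removal must be reported as failed. */
    static int device_remove(remove_mode mode, bool guest_cooperated)
    {
        if ( mode == REMOVE_FORCE )
            return 0;                 /* no guest involvement at all */

        if ( guest_cooperated )
            return 0;                 /* graceful removal worked */

        /* The timeout expired without the guest cooperating. */
        return mode == REMOVE_FALLBACK ? 0 : -1;
    }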

Ian.



Re: [PATCH 4/5] x86/ELF: also record FS/GS bases in elf_core_save_regs()

2020-09-28 Thread Jan Beulich
On 28.09.2020 15:04, Andrew Cooper wrote:
> On 28/09/2020 13:06, Jan Beulich wrote:
>> Signed-off-by: Jan Beulich 
> 
> Any idea why this wasn't done before?

No. My only guess is laziness, in the sense of "I'll do it later" and
then forgetting.

>  At a minimum, I'd be tempted to
> put a sentence in the commit message saying "no idea why this wasn't
> done before".

Sure, done.

> Reviewed-by: Andrew Cooper 

Thanks.

Jan



Re: [PATCH 1/5] x86: introduce read_sregs() to allow storing to memory directly

2020-09-28 Thread Jan Beulich
On 28.09.2020 14:47, Andrew Cooper wrote:
> On 28/09/2020 13:05, Jan Beulich wrote:
>> --- a/xen/include/asm-x86/regs.h
>> +++ b/xen/include/asm-x86/regs.h
>> @@ -15,4 +15,18 @@
>>  (diff == 0);
>>   \
>>  })
>>  
>> +#define read_sreg(name) ({\
>> +unsigned int __sel;   \
>> +asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
>> +__sel;\
>> +})
>> +
>> +static inline void read_sregs(struct cpu_user_regs *regs)
>> +{
>> +asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
>> +asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
>> +asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
>> +asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
> 
> It occurs to me that reads don't need to be volatile.  There are no side
> effects.

Oh yes, of course. Too mechanical moving / copying ...

> With that fixed, Reviewed-by: Andrew Cooper 

Thanks, Jan



[RESEND OSSTEST PATCH 0/5] Fix TCP problem

2020-09-28 Thread Ian Jackson
The best reference I found for this was here:
  https://www.evanjones.ca/tcp-stuck-connection-mystery.html

I'm resending this series because the first one had my Citrix email,
which is probably not going to reach many people.





[OSSTEST PATCH 3/5] TCP fix: Do not wait for ownerdaemon to speak

2020-09-28 Thread Ian Jackson
From: Ian Jackson 

Signed-off-by: Ian Jackson 
---
 tcl/JobDB-Executive.tcl | 13 +
 1 file changed, 13 insertions(+)

diff --git a/tcl/JobDB-Executive.tcl b/tcl/JobDB-Executive.tcl
index 29c82821..4fe85696 100644
--- a/tcl/JobDB-Executive.tcl
+++ b/tcl/JobDB-Executive.tcl
@@ -414,7 +414,20 @@ proc become-task {comment} {
 
 set ownerqueue [socket $c(OwnerDaemonHost) $c(OwnerDaemonPort)]
 fconfigure $ownerqueue -buffering line -translation lf
+
+# TCP connections can get into a weird state where the client
+# thinks the connection is open but the server has no record
+# of it.  To avoid this, have the client speak without waiting
+# for the server.  We tolerate "unknown command" errors so
+# that it is not necessary to restart the ownerdaemon since
+# that is very disruptive.
+#
+# See 'A TCP "stuck" connection mystery':
+# https://www.evanjones.ca/tcp-stuck-connection-mystery.html
+puts $ownerqueue noop
 must-gets $ownerqueue {^OK ms-ownerdaemon\M}
+must-gets $ownerqueue {^OK noop|^ERROR unknown command}
+
 puts $ownerqueue create-task
 must-gets $ownerqueue {^OK created-task (\d+) (\w+ [\[\]:.0-9a-f]+)$} \
 taskid refinfo
-- 
2.20.1
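
For readers unfamiliar with the pattern, a minimal C sketch of "client
speaks first" (illustrative only; the protocol strings match the daemons
above, everything else is made up):

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int speak_first(const char *ip, unsigned short port)
    {
        char buf[256];
        ssize_t n;
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(port),
        };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if ( fd < 0 )
            return -1;

        /*
         * Write before reading: if the server has no record of this
         * connection, the write provokes a prompt RST and the read
         * below fails, instead of blocking forever on a banner that
         * will never arrive.
         */
        if ( inet_pton(AF_INET, ip, &sa.sin_addr) != 1 ||
             connect(fd, (struct sockaddr *)&sa, sizeof(sa)) ||
             write(fd, "noop\n", 5) != 5 ||
             (n = read(fd, buf, sizeof(buf) - 1)) <= 0 )
        {
            close(fd);
            return -1;
        }

        buf[n] = '\0';
        fputs(buf, stdout);   /* banner, then "OK noop" */

        return close(fd);
    }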




[OSSTEST PATCH 2/5] TCP fix: Do not wait for queuedaemon to speak

2020-09-28 Thread Ian Jackson
From: Ian Jackson 

This depends on the preceding daemonlib patch and an ms-queuedaemon
restart.

Signed-off-by: Ian Jackson 
---
 Osstest/Executive.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 61a99bc3..80e70070 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -643,7 +643,16 @@ sub tcpconnect_queuedaemon () {
 my $qserv= tcpconnect($c{QueueDaemonHost}, $c{QueueDaemonPort});
 $qserv->autoflush(1);
 
+# TCP connections can get into a weird state where the client
+# thinks the connection is open but the server has no record
+# of it.  To avoid this, have the client speak without waiting
+# for the server.
+#
+# See 'A TCP "stuck" connection mystery':
+# https://www.evanjones.ca/tcp-stuck-connection-mystery.html
+print $qserv "noop\n";
 $_= <$qserv>;  defined && m/^OK ms-queuedaemon\s/ or die "$_?";
+$_= <$qserv>;  defined && m/^OK noop\s/ or die "$_?";
 
 return $qserv;
 }
-- 
2.20.1




[OSSTEST PATCH 5/5] Update TftpDiVersion_buster

2020-09-28 Thread Ian Jackson
From: Ian Jackson 

Signed-off-by: Ian Jackson 
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index 0c135bcb..6f85a4df 100644
--- a/production-config
+++ b/production-config
@@ -91,7 +91,7 @@ TftpNetbootGroup osstest
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-09-24
-TftpDiVersion_buster 2020-05-19
+TftpDiVersion_buster 2020-09-28
 
 DebianSnapshotBackports_jessie 
http://snapshot.debian.org/archive/debian/20190206T211314Z/
 
-- 
2.20.1




[OSSTEST PATCH 4/5] TftpDiVersion: Update to latest installer for stretch

2020-09-28 Thread Ian Jackson
The stretch (Debian oldstable) kernel has been updated, causing our
Xen 4.10 tests (which are still using stretch) to break.  This update
seems to fix it.

Reported-by: Jan Beulich 
Signed-off-by: Ian Jackson 
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index 6055bd18..0c135bcb 100644
--- a/production-config
+++ b/production-config
@@ -90,7 +90,7 @@ TftpNetbootGroup osstest
 # Update with ./mg-debian-installer-update(-all)
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
-TftpDiVersion_stretch 2020-02-10
+TftpDiVersion_stretch 2020-09-24
 TftpDiVersion_buster 2020-05-19
 
 DebianSnapshotBackports_jessie 
http://snapshot.debian.org/archive/debian/20190206T211314Z/
-- 
2.20.1




[OSSTEST PATCH 1/5] daemonlib: Provide a "noop" command

2020-09-28 Thread Ian Jackson
From: Ian Jackson 

We are going to want clients to speak before waiting for the server
banner.  A noop command is useful for that.

Putting this here makes it apply to both ownerdaemon and queuedaemon.

Signed-off-by: Ian Jackson 
---
 tcl/daemonlib.tcl | 4 
 1 file changed, 4 insertions(+)

diff --git a/tcl/daemonlib.tcl b/tcl/daemonlib.tcl
index 1e86d5f4..747deab1 100644
--- a/tcl/daemonlib.tcl
+++ b/tcl/daemonlib.tcl
@@ -124,6 +124,10 @@ proc puts-chan {chan m} {
 puts $chan $m
 }
 
+proc cmd/noop {chan desc} {
+puts-chan $chan "OK noop"
+}
+
 #-- data --
 
 proc puts-chan-data {chan m data} {
-- 
2.20.1




Re: [PATCH 0/4] xen/arm: Unbreak ACPI

2020-09-28 Thread Masami Hiramatsu
Hi,

Sorry, I missed explaining the attached patch. This prototype
patch was also needed for booting Xen on my box (a system
which has no SPCR).

Thank you,

On Mon, 28 Sep 2020 at 15:47, Masami Hiramatsu wrote:
>
> Hello,
>
> This series made progress with my Xen boot on DeveloperBox (
> https://www.96boards.org/product/developerbox/ ) with ACPI.
>
> Thank you,
>
>
> On Sun, 27 Sep 2020 at 5:56, Julien Grall wrote:
>
> >
> > From: Julien Grall 
> >
> > Hi all,
> >
> > Xen on ARM has been broken for quite a while on ACPI systems. This
> > series aims to fix it.
> >
> > Unfortunately I don't have a system with ACPI v6.0 or later (QEMU seems
> > to only support 5.1). So I did only some light testing.
> >
> > I have only build tested the x86 side so far.
> >
> > Cheers,
> >
> > *** BLURB HERE ***
> >
> > Julien Grall (4):
> >   xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()
> >   xen/arm: acpi: The fixmap area should always be cleared during
> > failure/unmap
> >   xen/arm: Check if the platform is not using ACPI before initializing
> > Dom0less
> >   xen/arm: Introduce fw_unreserved_regions() and use it
> >
> >  xen/arch/arm/acpi/lib.c | 79 ++---
> >  xen/arch/arm/kernel.c   |  2 +-
> >  xen/arch/arm/setup.c| 25 +---
> >  xen/arch/x86/acpi/lib.c | 18 +
> >  xen/drivers/acpi/osl.c  | 34 
> >  xen/include/asm-arm/setup.h |  2 +-
> >  xen/include/xen/acpi.h  |  1 +
> >  7 files changed, 123 insertions(+), 38 deletions(-)
> >
> > --
> > 2.17.1
> >
>
>
> --
> Masami Hiramatsu



-- 
Masami Hiramatsu



Re: [PATCH RFC 4/4] mm/page_alloc: place pages to tail in __free_pages_core()

2020-09-28 Thread Oscar Salvador
On Mon, Sep 28, 2020 at 10:36:00AM +0200, David Hildenbrand wrote:
> Hi Oscar!

Hi David :-)

> 
> Old code:
> 
> set_page_refcounted(): sets the refcount to 1.
> __free_pages()
>   -> put_page_testzero(): sets it to 0
>   -> free_the_page()->__free_pages_ok()
> 
> New code:
> 
> set_page_refcounted(): sets the refcount to 1.
> page_ref_dec(page): sets it to 0
> __free_pages_ok():

bleh, I misread the patch, somehow I managed to not see that you replaced
__free_pages with __free_pages_ok.

To be honest, now that we do not need the page's refcount to be 1 for the
put_page_testzero to trigger (and since you are decrementing it anyways),
I think it would be much clearer for those two to be gone.

But not strong, so:

Reviewed-by: Oscar Salvador 

-- 
Oscar Salvador
SUSE L3
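
For reference, a toy model of the two flows being compared (plain C11
atomics rather than the kernel's page refcount API; illustrative only):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_int refcount;

    static void set_refcounted(void) { atomic_store(&refcount, 1); }

    /* Old flow: only free once the count actually drops to zero. */
    static bool put_testzero(void)
    {
        return atomic_fetch_sub(&refcount, 1) == 1;
    }

    int main(void)
    {
        set_refcounted();
        if ( put_testzero() )           /* old: 1 -> 0, then free */
            puts("old flow: freed");

        set_refcounted();
        atomic_fetch_sub(&refcount, 1); /* new: unconditional decrement */
        puts("new flow: freed directly");
        return 0;
    }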



Re: remove alloc_vm_area v2

2020-09-28 Thread Christoph Hellwig
On Mon, Sep 28, 2020 at 01:13:38PM +0300, Joonas Lahtinen wrote:
> I think we have a gap: after splitting the drm-intel-next pull requests
> in two, the drm-intel/for-linux-next branch is now missing material from
> drm-intel/drm-intel-gt-next.
> 
> I think a simple course of action might be to start including
> drm-intel-gt-next in linux-next, which would mean that we should update
> DIM tooling to add extra branch "drm-intel/gt-for-linux-next" or so.
> 
> Which specific patches are missing in this case?

The two dependencies required by my series not in mainline are:

drm/i915/gem: Avoid implicit vmap for highmem on x86-32
drm/i915/gem: Prevent using pgprot_writecombine() if PAT is not supported

so it has to be one or both of those.



Re: [PATCH v2 2/6] x86: reduce CET-SS related #ifdef-ary

2020-09-28 Thread Jan Beulich
On 28.09.2020 14:30, Jan Beulich wrote:
> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
> to introduce a number of #ifdef-s to make the build work with older tool
> chains. Introduce an assembler macro covering for tool chains not
> knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
> problem to be dropped again.
> 
> No change to generated code.
> 
> Signed-off-by: Jan Beulich 
> Reviewed-by: Roger Pau Monné 
> ---
> Now that I've done this I'm no longer sure which direction is better to
> follow: On one hand this introduces dead code (even if just NOPs) into
> CET-SS-disabled builds.

A possible compromise here might be to ...

> --- a/xen/include/asm-x86/asm-defns.h
> +++ b/xen/include/asm-x86/asm-defns.h
> @@ -7,3 +7,9 @@
>  .byte 0x0f, 0x01, 0xcb
>  .endm
>  #endif
> +
> +#ifndef CONFIG_HAS_AS_CET_SS
> +.macro setssbsy
> +.byte 0xf3, 0x0f, 0x01, 0xe8
> +.endm
> +#endif

... comment out this macro's body. If we went this route, incssp
and wrssp could be dealt with in similar ways, to allow dropping
further #ifdef-s.

Jan



Re: [PATCH v2 0/6] x86: some assembler macro rework

2020-09-28 Thread Jan Beulich
On 28.09.2020 14:28, Jan Beulich wrote:
> Parts of this were discussed in the context of Andrew's CET-SS work.
> Further parts simply fit the underlying picture. And the two final
> patches get attached here simply because of their dependency: Patch
> 4 was sent standalone already as v2, and is unchanged from that,
> while patch 6 is new.

And I should perhaps clarify: I'm resending the initial part of this
mainly to revive the discussion. There was some file name disagreement,
which is why I didn't commit at least the 1st patch here so far. But
there were no alternative suggestions that I would consider acceptable,
and hence I've kept the name as is.

Jan



[PATCH v2 6/6] x86: limit amount of INT3 in IND_THUNK_*

2020-09-28 Thread Jan Beulich
There's no point having every replacement variant to also specify the
INT3 - just have it once in the base macro. When patching, NOPs will get
inserted, which are fine to speculate through (until reaching the INT3).

Signed-off-by: Jan Beulich 
---
I also wonder whether the LFENCE in IND_THUNK_RETPOLINE couldn't be
replaced by INT3 as well. Of course the effect will be marginal, as the
size of the thunk will still be 16 bytes when including tail padding
resulting from alignment.
---
v2: New.

--- a/xen/arch/x86/indirect-thunk.S
+++ b/xen/arch/x86/indirect-thunk.S
@@ -11,6 +11,8 @@
 
 #include 
 
+.purgem ret
+
 .macro IND_THUNK_RETPOLINE reg:req
 call 2f
 1:
@@ -24,12 +26,10 @@
 .macro IND_THUNK_LFENCE reg:req
 lfence
 jmp *%\reg
-int3 /* Halt straight-line speculation */
 .endm
 
 .macro IND_THUNK_JMP reg:req
 jmp *%\reg
-int3 /* Halt straight-line speculation */
 .endm
 
 /*
@@ -44,6 +44,8 @@ ENTRY(__x86_indirect_thunk_\reg)
 __stringify(IND_THUNK_LFENCE \reg), X86_FEATURE_IND_THUNK_LFENCE, \
 __stringify(IND_THUNK_JMP \reg),X86_FEATURE_IND_THUNK_JMP
 
+int3 /* Halt straight-line speculation */
+
 .size __x86_indirect_thunk_\reg, . - __x86_indirect_thunk_\reg
 .type __x86_indirect_thunk_\reg, @function
 .endm




[PATCH v2 5/6] x86: guard against straight-line speculation past RET

2020-09-28 Thread Jan Beulich
Under certain conditions CPUs can speculate into the instruction stream
past a RET instruction. Guard against this just like 3b7dab93f240
("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
did - by inserting an "INT $3" insn. It's merely the mechanics of how to
achieve this that differ: A set of macros gets introduced to post-
process RET insns issued by the compiler (or living in assembly files).

Unfortunately for clang this requires further features their built-in
assembler doesn't support: We need to be able to override insn mnemonics
produced by the compiler (which may be impossible, if internally
assembly mnemonics never get generated), and we want to use \(text)
escaping / quoting in the auxiliary macro.

Signed-off-by: Jan Beulich 
---
TBD: Should this depend on CONFIG_SPECULATIVE_HARDEN_BRANCH?
TBD: Would be nice to avoid the additions in .init.text, but a query to
 the binutils folks regarding the ability to identify the section
 stuff is in (by Peter Zijlstra over a year ago:
 https://sourceware.org/pipermail/binutils/2019-July/107528.html)
 has been left without helpful replies.
---
v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
 # https://bugs.llvm.org/show_bug.cgi?id=36110
 t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile 
$(open)".macro FOO;.endm",-no-integrated-as)
 
-CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
+# Check whether \(text) escaping in macro bodies is supported.
+t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 
8",,-no-integrated-as)
+
+# Check whether macros can override insn mnemonics in inline assembly.
+t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; 
.endm",-no-integrated-as)
+
+acc1 := $(call or,$(t1),$(t2),$(t3),$(t4))
+
+CLANG_FLAGS += $(call or,$(acc1),$(t5))
 endif
 
 CLANG_FLAGS += -Werror=unknown-warning-option
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -50,3 +50,22 @@
 .macro INDIRECT_JMP arg:req
 INDIRECT_BRANCH jmp \arg
 .endm
+
+/*
+ * To guard against speculation past RET, insert a breakpoint insn
+ * immediately after them.
+ */
+.macro ret operand:vararg
+ret$ \operand
+.endm
+.macro retq operand:vararg
+ret$ \operand
+.endm
+.macro ret$ operand:vararg
+.purgem ret
+ret \operand
+int3
+.macro ret operand:vararg
+ret$ \\(operand)
+.endm
+.endm




[PATCH v2 4/6] x86: fold indirect_thunk_asm.h into asm-defns.h

2020-09-28 Thread Jan Beulich
There's little point in having two separate headers both getting
included by asm_defns.h. This in particular reduces the number of
instances of guarding asm(".include ...") suitably in such dual use
headers.

No change to generated code.

Signed-off-by: Jan Beulich 
Reviewed-by: Roger Pau Monné 

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -139,7 +139,7 @@ ifeq ($(TARGET_ARCH),x86)
 t1 = $(call as-insn,$(CC),".L0: .L1: .skip (.L1 - .L0)",,-no-integrated-as)
 
 # Check whether clang asm()-s support .include.
-t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include 
\"asm-x86/indirect_thunk_asm.h\"",,-no-integrated-as)
+t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include 
\"asm-x86/asm-defns.h\"",,-no-integrated-as)
 
 # Check whether clang keeps .macro-s between asm()-s:
 # https://bugs.llvm.org/show_bug.cgi?id=36110
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -13,3 +13,40 @@
 .byte 0xf3, 0x0f, 0x01, 0xe8
 .endm
 #endif
+
+.macro INDIRECT_BRANCH insn:req arg:req
+/*
+ * Create an indirect branch.  insn is one of call/jmp, arg is a single
+ * register.
+ *
+ * With no compiler support, this degrades into a plain indirect call/jmp.
+ * With compiler support, dispatch to the correct __x86_indirect_thunk_*
+ */
+.if CONFIG_INDIRECT_THUNK == 1
+
+$done = 0
+.irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
+.ifeqs "\arg", "%r\reg"
+\insn __x86_indirect_thunk_r\reg
+$done = 1
+   .exitm
+.endif
+.endr
+
+.if $done != 1
+.error "Bad register arg \arg"
+.endif
+
+.else
+\insn *\arg
+.endif
+.endm
+
+/* Convenience wrappers. */
+.macro INDIRECT_CALL arg:req
+INDIRECT_BRANCH call \arg
+.endm
+
+.macro INDIRECT_JMP arg:req
+INDIRECT_BRANCH jmp \arg
+.endm
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -22,7 +22,6 @@
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
   __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
-#include 
 
 #ifndef __ASSEMBLY__
 void ret_from_intr(void);
--- a/xen/include/asm-x86/indirect_thunk_asm.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Trickery to allow this header to be included at the C level, to permit
- * proper dependency tracking in .*.o.d files, while still having it contain
- * assembler only macros.
- */
-#ifndef __ASSEMBLY__
-# if 0
-  .if 0
-# endif
-asm ( "\t.include \"asm/indirect_thunk_asm.h\"" );
-# if 0
-  .endif
-# endif
-#else
-
-.macro INDIRECT_BRANCH insn:req arg:req
-/*
- * Create an indirect branch.  insn is one of call/jmp, arg is a single
- * register.
- *
- * With no compiler support, this degrades into a plain indirect call/jmp.
- * With compiler support, dispatch to the correct __x86_indirect_thunk_*
- */
-.if CONFIG_INDIRECT_THUNK == 1
-
-$done = 0
-.irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
-.ifeqs "\arg", "%r\reg"
-\insn __x86_indirect_thunk_r\reg
-$done = 1
-   .exitm
-.endif
-.endr
-
-.if $done != 1
-.error "Bad register arg \arg"
-.endif
-
-.else
-\insn *\arg
-.endif
-.endm
-
-/* Convenience wrappers. */
-.macro INDIRECT_CALL arg:req
-INDIRECT_BRANCH call \arg
-.endm
-
-.macro INDIRECT_JMP arg:req
-INDIRECT_BRANCH jmp \arg
-.endm
-
-#endif




[PATCH v2 3/6] x86: drop ASM_{CL,ST}AC

2020-09-28 Thread Jan Beulich
Use ALTERNATIVE directly, such that at the use sites it is visible that
alternative code patching is in use. Similarly avoid hiding the fact in
SAVE_ALL.

No change to generated code.

Signed-off-by: Jan Beulich 
Reviewed-by: Andrew Cooper 
---
v2: Further adjust comment in asm_domain_crash_synchronous().

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2165,9 +2165,8 @@ void activate_debugregs(const struct vcp
 void asm_domain_crash_synchronous(unsigned long addr)
 {
 /*
- * We need clear AC bit here because in entry.S AC is set
- * by ASM_STAC to temporarily allow accesses to user pages
- * which is prevented by SMAP by default.
+ * We need to clear the AC bit here because the exception fixup logic
+ * may leave user accesses enabled.
  *
  * For some code paths, where this function is called, clac()
  * is not needed, but adding clac() here instead of each place
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -12,7 +12,7 @@
 #include 
 
 ENTRY(entry_int82)
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 pushq $0
 movl  $HYPERCALL_VECTOR, 4(%rsp)
 SAVE_ALL compat=1 /* DPL1 gate, restricted to 32bit PV guests only. */
@@ -284,7 +284,7 @@ ENTRY(compat_int80_direct_trap)
 compat_create_bounce_frame:
 ASSERT_INTERRUPTS_ENABLED
 mov   %fs,%edi
-ASM_STAC
+ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
 testb $2,UREGS_cs+8(%rsp)
 jz1f
 /* Push new frame at registered guest-OS stack base. */
@@ -331,7 +331,7 @@ compat_create_bounce_frame:
 movl  TRAPBOUNCE_error_code(%rdx),%eax
 .Lft8:  movl  %eax,%fs:(%rsi)   # ERROR CODE
 1:
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 /* Rewrite our stack frame and return to guest-OS mode. */
 /* IA32 Ref. Vol. 3: TF, VM, RF and NT flags are cleared on trap. */
 andl  $~(X86_EFLAGS_VM|X86_EFLAGS_RF|\
@@ -377,7 +377,7 @@ compat_crash_page_fault_4:
 addl  $4,%esi
 compat_crash_page_fault:
 .Lft14: mov   %edi,%fs
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 movl  %esi,%edi
 call  show_page_walk
 jmp   dom_crash_sync_extable
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -276,7 +276,7 @@ ENTRY(sysenter_entry)
 pushq $0
 pushfq
 GLOBAL(sysenter_eflags_saved)
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 pushq $3 /* ring 3 null cs */
 pushq $0 /* null rip */
 pushq $0
@@ -329,7 +329,7 @@ UNLIKELY_END(sysenter_gpf)
 jmp   .Lbounce_exception
 
 ENTRY(int80_direct_trap)
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 pushq $0
 movl  $0x80, 4(%rsp)
 SAVE_ALL
@@ -448,7 +448,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
 
 subq  $7*8,%rsi
 movq  UREGS_ss+8(%rsp),%rax
-ASM_STAC
+ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
 movq  VCPU_domain(%rbx),%rdi
 STORE_GUEST_STACK(rax,6)# SS
 movq  UREGS_rsp+8(%rsp),%rax
@@ -486,7 +486,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
 STORE_GUEST_STACK(rax,1)# R11
 movq  UREGS_rcx+8(%rsp),%rax
 STORE_GUEST_STACK(rax,0)# RCX
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 
 #undef STORE_GUEST_STACK
 
@@ -528,11 +528,11 @@ domain_crash_page_fault_2x8:
 domain_crash_page_fault_1x8:
 addq  $8,%rsi
 domain_crash_page_fault_0x8:
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 movq  %rsi,%rdi
 call  show_page_walk
 ENTRY(dom_crash_sync_extable)
-ASM_CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 # Get out of the guest-save area of the stack.
 GET_STACK_END(ax)
 leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
@@ -590,7 +590,8 @@ UNLIKELY_END(exit_cr3)
 iretq
 
 ENTRY(common_interrupt)
-SAVE_ALL CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+SAVE_ALL
 
 GET_STACK_END(14)
 
@@ -622,7 +623,8 @@ ENTRY(page_fault)
 movl  $TRAP_page_fault,4(%rsp)
 /* No special register assumptions. */
 GLOBAL(handle_exception)
-SAVE_ALL CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+SAVE_ALL
 
 GET_STACK_END(14)
 
@@ -827,7 +829,8 @@ ENTRY(entry_CP)
 ENTRY(double_fault)
 movl  $TRAP_double_fault,4(%rsp)
 /* Set AC to reduce chance of further SMAP faults */
-SAVE_ALL STAC
+ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
+SAVE_ALL
 
 GET_STACK_END(14)
 
@@ -860,7 +863,8 @@ ENTRY(nmi)
 pushq $0
 movl  $TRAP_nmi,4(%rsp)
 handle_ist_exception:
-SAVE_ALL CLAC
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+SAVE_ALL
 
 GET_STACK_END(14)
 
--- a/xen/in

[PATCH v2 1/6] x86: replace __ASM_{CL,ST}AC

2020-09-28 Thread Jan Beulich
Introduce proper assembler macros instead, enabled only when the
assembler itself doesn't support the insns. To avoid duplicating the
macros for assembly and C files, have them processed into asm-macros.h.
This in turn requires adding a multiple inclusion guard when generating
that header.

No change to generated code.

Signed-off-by: Jan Beulich 
Reviewed-by: Roger Pau Monné 

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -243,7 +243,10 @@ $(BASEDIR)/include/asm-x86/asm-macros.h:
echo '#if 0' >$@.new
echo '.if 0' >>$@.new
echo '#endif' >>$@.new
+   echo '#ifndef __ASM_MACROS_H__' >>$@.new
+   echo '#define __ASM_MACROS_H__' >>$@.new
echo 'asm ( ".include \"$@\"" );' >>$@.new
+   echo '#endif /* __ASM_MACROS_H__ */' >>$@.new
echo '#if 0' >>$@.new
echo '.endif' >>$@.new
cat $< >>$@.new
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
 $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
 $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
 $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
+$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
--- a/xen/arch/x86/asm-macros.c
+++ b/xen/arch/x86/asm-macros.c
@@ -1 +1,2 @@
+#include 
 #include 
--- /dev/null
+++ b/xen/include/asm-x86/asm-defns.h
@@ -0,0 +1,9 @@
+#ifndef HAVE_AS_CLAC_STAC
+.macro clac
+.byte 0x0f, 0x01, 0xca
+.endm
+
+.macro stac
+.byte 0x0f, 0x01, 0xcb
+.endm
+#endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -13,10 +13,12 @@
 #include 
 
 #ifdef __ASSEMBLY__
+#include 
 #ifndef CONFIG_INDIRECT_THUNK
 .equ CONFIG_INDIRECT_THUNK, 0
 #endif
 #else
+#include 
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
   __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
@@ -200,34 +202,27 @@ register unsigned long current_stack_poi
 
 #endif
 
-/* "Raw" instruction opcodes */
-#define __ASM_CLAC  ".byte 0x0f,0x01,0xca"
-#define __ASM_STAC  ".byte 0x0f,0x01,0xcb"
-
 #ifdef __ASSEMBLY__
 .macro ASM_STAC
-ALTERNATIVE "", __ASM_STAC, X86_FEATURE_XEN_SMAP
+ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
 .endm
 .macro ASM_CLAC
-ALTERNATIVE "", __ASM_CLAC, X86_FEATURE_XEN_SMAP
+ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 .endm
 #else
 static always_inline void clac(void)
 {
 /* Note: a barrier is implicit in alternative() */
-alternative("", __ASM_CLAC, X86_FEATURE_XEN_SMAP);
+alternative("", "clac", X86_FEATURE_XEN_SMAP);
 }
 
 static always_inline void stac(void)
 {
 /* Note: a barrier is implicit in alternative() */
-alternative("", __ASM_STAC, X86_FEATURE_XEN_SMAP);
+alternative("", "stac", X86_FEATURE_XEN_SMAP);
 }
 #endif
 
-#undef __ASM_STAC
-#undef __ASM_CLAC
-
 #ifdef __ASSEMBLY__
 .macro SAVE_ALL op, compat=0
 .ifeqs "\op", "CLAC"
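
Pieced together from the echo commands in the Makefile hunk above (prefix
only, with an illustrative path; the preprocessed macro definitions and
the matching trailer follow), the generated header now starts:

    #if 0
    .if 0
    #endif
    #ifndef __ASM_MACROS_H__
    #define __ASM_MACROS_H__
    asm ( ".include \"asm-x86/asm-macros.h\"" );
    #endif /* __ASM_MACROS_H__ */
    #if 0
    .endif
    /* ... contents of asm-macros.i ... */

Read as C, the #if 0/#endif pairs hide the assembler conditionals and the
asm() line makes gas .include the very same file; read by gas, lines
starting with '#' are comments and the .if 0 block hides the C-only part,
so each consumer only ever sees its own half.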




[PATCH v2 2/6] x86: reduce CET-SS related #ifdef-ary

2020-09-28 Thread Jan Beulich
Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
to introduce a number of #ifdef-s to make the build work with older tool
chains. Introduce an assembler macro covering for tool chains not
knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
problem to be dropped again.

No change to generated code.

Signed-off-by: Jan Beulich 
Reviewed-by: Roger Pau Monné 
---
Now that I've done this I'm no longer sure which direction is better to
follow: On one hand this introduces dead code (even if just NOPs) into
CET-SS-disabled builds. Otoh this is a step towards breaking the tool
chain version dependency of the feature.

I've also dropped conditionals around bigger chunks of code; while I
think that's preferable, I'm open to undoing those parts.

--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -31,7 +31,6 @@ ENTRY(__high_start)
 jz  .L_bsp
 
 /* APs.  Set up shadow stacks before entering C. */
-#ifdef CONFIG_XEN_SHSTK
 testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
 CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + 
boot_cpu_data(%rip)
 je  .L_ap_shstk_done
@@ -55,7 +54,6 @@ ENTRY(__high_start)
 mov $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
 mov %rcx, %cr4
 setssbsy
-#endif
 
 .L_ap_shstk_done:
 callstart_secondary
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -668,7 +668,7 @@ static void __init noreturn reinit_bsp_s
 stack_base[0] = stack;
 memguard_guard_stack(stack);
 
-if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
+if ( cpu_has_xen_shstk )
 {
 wrmsrl(MSR_PL0_SSP,
(unsigned long)stack + (PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 
8);
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -197,9 +197,7 @@ ENTRY(cr4_pv32_restore)
 
 /* See lstar_enter for entry register state. */
 ENTRY(cstar_enter)
-#ifdef CONFIG_XEN_SHSTK
 ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
 /* sti could live here when we don't switch page tables below. */
 CR4_PV32_RESTORE
 movq  8(%rsp),%rax /* Restore %rax. */
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -236,9 +236,7 @@ iret_exit_to_guest:
  * %ss must be saved into the space left by the trampoline.
  */
 ENTRY(lstar_enter)
-#ifdef CONFIG_XEN_SHSTK
 ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
 /* sti could live here when we don't switch page tables below. */
 movq  8(%rsp),%rax /* Restore %rax. */
 movq  $FLAT_KERNEL_SS,8(%rsp)
@@ -272,9 +270,7 @@ ENTRY(lstar_enter)
 jmp   test_all_events
 
 ENTRY(sysenter_entry)
-#ifdef CONFIG_XEN_SHSTK
 ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
 /* sti could live here when we don't switch page tables below. */
 pushq $FLAT_USER_SS
 pushq $0
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -7,3 +7,9 @@
 .byte 0x0f, 0x01, 0xcb
 .endm
 #endif
+
+#ifndef CONFIG_HAS_AS_CET_SS
+.macro setssbsy
+.byte 0xf3, 0x0f, 0x01, 0xe8
+.endm
+#endif




[PATCH v2 0/6] x86: some assembler macro rework

2020-09-28 Thread Jan Beulich
Parts of this were discussed in the context of Andrew's CET-SS work.
Further parts simply fit the underlying picture. And the two final
patches get attached here simply because of their dependency: Patch
4 was sent standalone already as v2, and is unchanged from that,
while patch 6 is new.

1: replace __ASM_{CL,ST}AC
2: reduce CET-SS related #ifdef-ary
3: drop ASM_{CL,ST}AC
4: fold indirect_thunk_asm.h into asm-defns.h
5: guard against straight-line speculation past RET
6: limit amount of INT3 in IND_THUNK_*

Jan



Re: [PATCH] x86/PV: make post-migration page state consistent

2020-09-28 Thread Jan Beulich
On 11.09.2020 14:37, Jan Beulich wrote:
> On 11.09.2020 13:55, Andrew Cooper wrote:
>> On 11/09/2020 11:34, Jan Beulich wrote:
>>> When a page table page gets de-validated, its type reference count drops
>>> to zero (and PGT_validated gets cleared), but its type remains intact.
>>> XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
>>> such pages. An intermediate write to such a page via e.g.
>>> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
>>> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
>>> return. In libxc the decision which pages to normalize / localize
>>> depends solely on the type returned from the domctl. As a result without
>>> further precautions the guest won't be able to tell whether such a page
>>> has had its (apparent) PTE entries transitioned to the new MFNs.
>>
>> I'm afraid I don't follow what the problem is.
>>
>> Yes - unvalidated pages probably ought to be consistently NOTAB, so this
>> is probably a good change, but I don't see how it impacts the migration
>> logic.
> 
> It's not the migration logic itself that's impacted, but the state
> of guest pages after migration. I'm afraid I can only try to expand
> on the original description.
> 
> Case 1: Once an Ln page has been unvalidated, due to the described
> behavior the migration code in libxc will normalize and then localize
> it. Therefore the guest could go and directly try to use it as a
> page table again. This should work as long as all of the entries in
> the page can still be successfully validated (i.e. unless the guest
> itself has made changes to the state of other pages).
> 
> Case 2: Once an Ln page has been unvalidated, the guest for whatever
> reason still writes to it through e.g. MMU_NORMAL_PT_UPDATE. Prior
> to migration, and provided the new entry can be validated (and no
> other reference page has changed state), the page can still be
> converted to a proper page table one again. If, however, migration
> occurs inbetween, the page now won't get normalized and then
> localized. The MFNs in it are unlikely to make sense anymore, and
> hence an attempt to make the page a page table again is likely to
> fail (or if it doesn't fail the result is unlikely to be what's
> intended).
> 
> Since there's no way to make case 2 "work", the only choice is to
> make case 1 behave like case 2, in order for the behavior to be
> predictable / consistent.
> 
>> We already have to cope with a page really changing types in parallel
>> with the normalise/localise logic (that was a "fun" one to debug), which
>> is why errors in that logic are specifically not fatal while the guest
>> is live - the frame gets re-marked as dirty, and deferred until the next
>> round.
>>
>> Errors encountered after the VM has been paused are fatal.
>>
>> However, at no point, even with an unvalidated pagetable type, can the
>> contents of the page be anything other than legal PTEs.  (I think)
> 
> Correct, because in order to write to the page one has to either
> make it a page table one again (and then write through hypercall
> or for L1 through PTWR) or the mmu-normal-pt-update would first
> convert the page to a writable one.

Besides wanting to ping this change / discussion, I'd also like
to correct myself on this last part of the reply: The above
applies to pages after having got de-validated. However, prior
to validation pages have their type changed early (type ref count
remains zero), but at this point it can't be told yet whether the
page consists of all legal PTEs.

Jan



[PATCH 5/5] x86/ELF: eliminate pointless local variable from elf_core_save_regs()

2020-09-28 Thread Jan Beulich
We can just as well specify the CRn structure fields directly in the
asm()s, just like done for all other ones.

Signed-off-by: Jan Beulich 

--- a/xen/include/asm-x86/x86_64/elf.h
+++ b/xen/include/asm-x86/x86_64/elf.h
@@ -37,8 +37,6 @@ typedef struct {
 static inline void elf_core_save_regs(ELF_Gregset *core_regs, 
   crash_xen_core_t *xen_core_regs)
 {
-unsigned long tmp;
-
 asm volatile("movq %%r15,%0" : "=m"(core_regs->r15));
 asm volatile("movq %%r14,%0" : "=m"(core_regs->r14));
 asm volatile("movq %%r13,%0" : "=m"(core_regs->r13));
@@ -67,17 +65,10 @@ static inline void elf_core_save_regs(EL
 core_regs->fs = read_sreg(fs);
 core_regs->gs = read_sreg(gs);
 
-asm volatile("mov %%cr0, %0" : "=r" (tmp) : );
-xen_core_regs->cr0 = tmp;
-
-asm volatile("mov %%cr2, %0" : "=r" (tmp) : );
-xen_core_regs->cr2 = tmp;
-
-asm volatile("mov %%cr3, %0" : "=r" (tmp) : );
-xen_core_regs->cr3 = tmp;
-
-asm volatile("mov %%cr4, %0" : "=r" (tmp) : );
-xen_core_regs->cr4 = tmp;
+asm volatile("mov %%cr0, %0" : "=r" (xen_core_regs->cr0));
+asm volatile("mov %%cr2, %0" : "=r" (xen_core_regs->cr2));
+asm volatile("mov %%cr3, %0" : "=r" (xen_core_regs->cr3));
+asm volatile("mov %%cr4, %0" : "=r" (xen_core_regs->cr4));
 }
 
 #endif /* __X86_64_ELF_H__ */




[PATCH 3/5] x86/ELF: don't store function pointer in elf_core_save_regs()

2020-09-28 Thread Jan Beulich
This keeps at least gcc 10 from generating a separate function instance
in common/kexec.o alongside the inlining of the function in its sole
caller. I also think putting the address of the actual code storing the
registers is a better indication to consumers than that of an otherwise
unreferenced function.

Signed-off-by: Jan Beulich 

--- a/xen/include/asm-x86/x86_64/elf.h
+++ b/xen/include/asm-x86/x86_64/elf.h
@@ -54,7 +54,7 @@ static inline void elf_core_save_regs(EL
 asm volatile("movq %%rsi,%0" : "=m"(core_regs->rsi));
 asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
 /* orig_rax not filled in for now */
-core_regs->rip = (unsigned long)elf_core_save_regs;
+asm volatile("call 0f; 0: popq %0" : "=m" (core_regs->rip));
 core_regs->cs = read_sreg(cs);
 asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
 asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));




[PATCH 4/5] x86/ELF: also record FS/GS bases in elf_core_save_regs()

2020-09-28 Thread Jan Beulich
Signed-off-by: Jan Beulich 

--- a/xen/include/asm-x86/x86_64/elf.h
+++ b/xen/include/asm-x86/x86_64/elf.h
@@ -1,6 +1,7 @@
 #ifndef __X86_64_ELF_H__
 #define __X86_64_ELF_H__
 
+#include 
 #include 
 
 typedef struct {
@@ -59,8 +60,8 @@ static inline void elf_core_save_regs(EL
 asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
 asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
 asm volatile("movl %%ss, %%eax;" :"=a"(core_regs->ss));
-/* thread_fs not filled in for now */
-/* thread_gs not filled in for now */
+rdmsrl(MSR_FS_BASE, core_regs->thread_fs);
+rdmsrl(MSR_GS_BASE, core_regs->thread_gs);
 core_regs->ds = read_sreg(ds);
 core_regs->es = read_sreg(es);
 core_regs->fs = read_sreg(fs);
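
RDMSR is privileged, so a userspace analogue has to ask the kernel; a
sketch using arch_prctl(2) on x86-64 Linux (raw syscall, since glibc
provides no wrapper):

    #include <stdio.h>
    #include <unistd.h>
    #include <asm/prctl.h>
    #include <sys/syscall.h>

    int main(void)
    {
        unsigned long fs_base = 0, gs_base = 0;

        syscall(SYS_arch_prctl, ARCH_GET_FS, &fs_base);
        syscall(SYS_arch_prctl, ARCH_GET_GS, &gs_base);

        printf("fs base %#lx, gs base %#lx\n", fs_base, gs_base);
        return 0;
    }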




[PATCH 1/5] x86: introduce read_sregs() to allow storing to memory directly

2020-09-28 Thread Jan Beulich
When storing all (data) segment registers in one go, prefer writing the
selector values directly to memory (as opposed to read_sreg()).

Also move the single register variant into the regs.h.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1703,10 +1703,7 @@ static void save_segments(struct vcpu *v
 {
 struct cpu_user_regs *regs = &v->arch.user_regs;
 
-regs->ds = read_sreg(ds);
-regs->es = read_sreg(es);
-regs->fs = read_sreg(fs);
-regs->gs = read_sreg(gs);
+read_sregs(regs);
 
 if ( !is_pv_32bit_vcpu(v) )
 {
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -43,10 +43,7 @@ static void read_registers(struct cpu_us
 crs[2] = read_cr2();
 crs[3] = read_cr3();
 crs[4] = read_cr4();
-regs->ds = read_sreg(ds);
-regs->es = read_sreg(es);
-regs->fs = read_sreg(fs);
-regs->gs = read_sreg(gs);
+read_sregs(regs);
 crs[5] = rdfsbase();
 crs[6] = rdgsbase();
 crs[7] = rdgsshadow();
--- a/xen/include/asm-x86/regs.h
+++ b/xen/include/asm-x86/regs.h
@@ -15,4 +15,18 @@
 (diff == 0);  \
 })
 
+#define read_sreg(name) ({\
+unsigned int __sel;   \
+asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
+__sel;\
+})
+
+static inline void read_sregs(struct cpu_user_regs *regs)
+{
+asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
+asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
+asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
+asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
+}
+
 #endif /* __X86_REGS_H__ */
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -5,12 +5,6 @@
 #include 
 #include 
 
-#define read_sreg(name) \
-({  unsigned int __sel; \
-asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) );   \
-__sel;  \
-})
-
 static inline void wbinvd(void)
 {
 asm volatile ( "wbinvd" ::: "memory" );




[PATCH 2/5] x86/ELF: don't open-code read_sreg()

2020-09-28 Thread Jan Beulich
Signed-off-by: Jan Beulich 

--- a/xen/include/asm-x86/x86_64/elf.h
+++ b/xen/include/asm-x86/x86_64/elf.h
@@ -1,6 +1,8 @@
 #ifndef __X86_64_ELF_H__
 #define __X86_64_ELF_H__
 
+#include 
+
 typedef struct {
 unsigned long r15;
 unsigned long r14;
@@ -53,16 +55,16 @@ static inline void elf_core_save_regs(EL
 asm volatile("movq %%rdi,%0" : "=m"(core_regs->rdi));
 /* orig_rax not filled in for now */
 core_regs->rip = (unsigned long)elf_core_save_regs;
-asm volatile("movl %%cs, %%eax;" :"=a"(core_regs->cs));
+core_regs->cs = read_sreg(cs);
 asm volatile("pushfq; popq %0" :"=m"(core_regs->rflags));
 asm volatile("movq %%rsp,%0" : "=m"(core_regs->rsp));
 asm volatile("movl %%ss, %%eax;" :"=a"(core_regs->ss));
 /* thread_fs not filled in for now */
 /* thread_gs not filled in for now */
-asm volatile("movl %%ds, %%eax;" :"=a"(core_regs->ds));
-asm volatile("movl %%es, %%eax;" :"=a"(core_regs->es));
-asm volatile("movl %%fs, %%eax;" :"=a"(core_regs->fs));
-asm volatile("movl %%gs, %%eax;" :"=a"(core_regs->gs));
+core_regs->ds = read_sreg(ds);
+core_regs->es = read_sreg(es);
+core_regs->fs = read_sreg(fs);
+core_regs->gs = read_sreg(gs);
 
 asm volatile("mov %%cr0, %0" : "=r" (tmp) : );
 xen_core_regs->cr0 = tmp;



[PATCH 0/5] x86: introduce read_sregs() and elf_core_save_regs() adjustments

2020-09-28 Thread Jan Beulich
1: introduce read_sregs() to allow storing to memory directly
2: ELF: don't open-code read_sreg()
3: ELF: don't store function pointer in elf_core_save_regs()
4: ELF: also record FS/GS bases in elf_core_save_regs()
5: ELF: eliminate pointless local variable from elf_core_save_regs()

Jan



[xen-4.14-testing test] 154698: regressions - trouble: fail/pass/preparing

2020-09-28 Thread osstest service owner
flight 154698 xen-4.14-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154698/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-4   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154350
 test-xtf-amd64-amd64-4   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154350
 test-amd64-amd64-xl-xsm  12 guest-start  fail REGR. vs. 154350
 test-xtf-amd64-amd64-1   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154350
 test-amd64-amd64-libvirt-xsm 12 guest-start  fail REGR. vs. 154350
 test-xtf-amd64-amd64-5   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154350
 test-xtf-amd64-amd64-1   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154350
 test-xtf-amd64-amd64-5   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154350
 test-amd64-i386-xl-xsm   12 guest-start  fail REGR. vs. 154350
 test-amd64-i386-libvirt-xsm  12 guest-start  fail REGR. vs. 154350
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 154350
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 154350
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 154350
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 154350
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 154350
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 154350
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 154350
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  2 hosts-allocate running
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  2 hosts-allocate running
 test-xtf-amd64-amd64-2    2 hosts-allocate   running
 test-xtf-amd64-amd64-3    2 hosts-allocate   running
 test-amd64-amd64-xl-qemuu-win7-amd64  2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-x

[ovmf test] 154983: trouble: blocked/broken/preparing/queued

2020-09-28 Thread osstest service owner
flight 154983 ovmf running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154983/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-pvops        broken
 build-amd64-xsm  broken
 build-i386-xsm   broken
 build-i386-xsm        4 host-install(4)        broken REGR. vs. 154633
 build-amd64           4 host-install(4)        broken REGR. vs. 154633
 build-amd64-pvops     4 host-install(4)        broken REGR. vs. 154633
 build-amd64-xsm       4 host-install(4)        broken REGR. vs. 154633
 build-i386-libvirt       queued
 test-amd64-i386-xl-qemuu-ovmf-amd64  queued
 build-i386        2 hosts-allocate   running
 build-i386-pvops  2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a

version targeted for testing:
 ovmf 1d058c3e86b079a2e207bb022fd7a97814c9a04f
baseline version:
 ovmf dd5c7e3c5282b084daa5bbf0ec229cec699b2c17

Last test of basis   154633  2020-09-23 05:49:28 Z    5 days
Failing since        154753  2020-09-25 02:39:51 Z    3 days    2 attempts
Testing same since   154899  2020-09-26 12:23:59 Z    1 days    1 attempts


People who touched revisions under test:
  Bob Feng 
  gaoliming 
  Liming Gao 
  Mingyue Liang 

jobs:
 build-amd64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-i386   preparing
 build-amd64-libvirt  blocked 
 build-i386-libvirt   queued  
 build-amd64-pvopsbroken  
 build-i386-pvops preparing
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  queued  



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-i386-libvirt queued
broken-job build-i386-xsm broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-step build-i386-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-amd64-xsm host-install(4)

Not pushing.


commit 1d058c3e86b079a2e207bb022fd7a97814c9a04f
Author: gaoliming 
Date:   Wed Sep 16 17:58:14 2020 +0800

IntelFsp2Pkg GenCfgOpt.py: Initialize IncLines as empty list

Initialize IncLines as an empty list for the case when InputHeaderFile is not specified.

Cc: Chasel Chiu 
Cc: Nate DeSimone 
Cc: Star Zeng 
Signed-off-by: Liming Gao 
Reviewed-by: Chasel Chiu 
Reviewed-by: Star Zeng 

commit d8be01079b3c7b554ac8126e97e73fba8894e519
Author: Bob Feng 
Date:   Tue Sep 22 19:27:54 2020 +0800

BaseTools: Set section alignment as zero if its type is Auto

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2881

Currently, the build tool tries to read the section alignment
from the efi file if the section alignment type is Auto.
If no efi file has been generated, the section alignment will
be set to zero. This behavior causes the Makefile to differ
between the full build and the incremental build.

Since Genffs can automatically get the section alignment from the
efi file during the Genffs procedure, the build tool can just set the
section alignment to zero. This change makes the autogen makefile
consistent between the full build and the incremental build.

Signed-off-by: Bob Feng 
Cc: Liming Gao 
Cc: Yuwei Chen 

Reviewed-by: Liming Gao 
Reviewed-by: Yuwei Chen

commit 3a7a6761143a4840faea0bd84daada3ac0f1bd22
Author: Bob Feng 
Date:   Wed Sep 23 20:36:58 202

[qemu-mainline test] 154677: regressions - trouble: fail/pass/preparing

2020-09-28 Thread osstest service owner
flight 154677 qemu-mainline running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154677/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow210 debian-di-installfail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install 
fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install 
fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start  fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd  10 debian-di-installfail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 11 guest-start  fail REGR. vs. 152631
 test-armhf-armhf-libvirt 12 guest-start  fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-installfail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-start/freebsd.repeat fail 
REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-start/freebsd.repeat fail 
REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 19 guest-start/freebsd.repeat fail REGR. vs. 
152631
 test-amd64-i386-xl-qemuu-ws16-amd64  2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 152631
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-x

[PATCH 2/2] x86/mm: remove some indirection from {paging,sh}_cmpxchg_guest_entry()

2020-09-28 Thread Jan Beulich
Make the functions more similar to cmpxchg() in that they now take an
integral "old" input and return the value read.

Signed-off-by: Jan Beulich 
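
As an illustration of the new calling convention (not part of the patch;
the surrounding context is invented), callers now follow the usual
cmpxchg() pattern:

    /* Old convention: "old" passed by pointer, updated in place. */
    intpte_t t = old;
    paging_cmpxchg_guest_entry(v, p, &t, new, mfn);
    if ( t == old )
        /* exchange took place */;

    /* New convention: integral "old" in, value read handed back. */
    t = paging_cmpxchg_guest_entry(v, p, old, new, mfn);
    if ( t == old )
        /* exchange took place */;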

--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -398,8 +398,8 @@ int shadow_write_p2m_entry(struct p2m_do
 /* Functions that atomically write PV guest PT entries */
 void sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new,
   mfn_t gmfn);
-void sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
-intpte_t new, mfn_t gmfn);
+intpte_t sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t old,
+intpte_t new, mfn_t gmfn);
 
 /* Update all the things that are derived from the guest's CR0/CR3/CR4.
  * Called to initialize paging structures if the paging mode
--- a/xen/arch/x86/mm/shadow/pv.c
+++ b/xen/arch/x86/mm/shadow/pv.c
@@ -39,22 +39,22 @@ sh_write_guest_entry(struct vcpu *v, int
 
 /*
  * Cmpxchg a new value into the guest pagetable, and update the shadows
- * appropriately.
- * N.B. caller should check the value of "old" to see if the cmpxchg itself
- * was successful.
+ * appropriately.  Returns the previous entry found, which the caller is
+ * expected to check to see if the cmpxchg was successful.
  */
-void
-sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
+intpte_t
+sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t old,
intpte_t new, mfn_t gmfn)
 {
 intpte_t t;
 
 paging_lock(v->domain);
-t = cmpxchg(p, *old, new);
-if ( t == *old )
+t = cmpxchg(p, old, new);
+if ( t == old )
 sh_validate_guest_entry(v, gmfn, p, sizeof(new));
-*old = t;
 paging_unlock(v->domain);
+
+return t;
 }
 
 /*
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -47,16 +47,14 @@ static inline bool update_intpte(intpte_
 else
 #endif
 {
-intpte_t t = old;
-
 for ( ; ; )
 {
-intpte_t _new = new;
+intpte_t _new = new, t;
 
 if ( preserve_ad )
 _new |= old & (_PAGE_ACCESSED | _PAGE_DIRTY);
 
-paging_cmpxchg_guest_entry(v, p, &t, _new, mfn);
+t = paging_cmpxchg_guest_entry(v, p, old, _new, mfn);
 
 if ( t == old )
 break;
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -168,8 +168,8 @@ static int ptwr_emulated_update(unsigned
 if ( p_old )
 {
 ol1e = l1e_from_intpte(old);
-paging_cmpxchg_guest_entry(v, &l1e_get_intpte(*pl1e), &old,
-   l1e_get_intpte(nl1e), mfn);
+old = paging_cmpxchg_guest_entry(v, &l1e_get_intpte(*pl1e), old,
+ l1e_get_intpte(nl1e), mfn);
 if ( l1e_get_intpte(ol1e) == old )
 ret = X86EMUL_OKAY;
 else
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -98,8 +98,8 @@ struct shadow_paging_mode {
 #ifdef CONFIG_PV
 void  (*write_guest_entry )(struct vcpu *v, intpte_t *p,
 intpte_t new, mfn_t gmfn);
-void  (*cmpxchg_guest_entry   )(struct vcpu *v, intpte_t *p,
-intpte_t *old, intpte_t new,
+intpte_t  (*cmpxchg_guest_entry   )(struct vcpu *v, intpte_t *p,
+intpte_t old, intpte_t new,
 mfn_t gmfn);
 #endif
 #ifdef CONFIG_HVM
@@ -342,16 +342,15 @@ static inline void paging_write_guest_en
  * true if not.  N.B. caller should check the value of "old" to see if the
  * cmpxchg itself was successful.
  */
-static inline void paging_cmpxchg_guest_entry(
-struct vcpu *v, intpte_t *p, intpte_t *old, intpte_t new, mfn_t gmfn)
+static inline intpte_t paging_cmpxchg_guest_entry(
+struct vcpu *v, intpte_t *p, intpte_t old, intpte_t new, mfn_t gmfn)
 {
 #ifdef CONFIG_SHADOW_PAGING
 if ( unlikely(paging_mode_shadow(v->domain)) && paging_get_hostmode(v) )
-paging_get_hostmode(v)->shadow.cmpxchg_guest_entry(v, p, old,
-   new, gmfn);
-else
+return paging_get_hostmode(v)->shadow.cmpxchg_guest_entry(v, p, old,
+  new, gmfn);
 #endif
-*old = cmpxchg(p, *old, new);
+return cmpxchg(p, old, new);
 }
 
 #endif /* CONFIG_PV */




[PATCH 1/2] x86/mm: {paging, sh}_{cmpxchg, write}_guest_entry() cannot fault

2020-09-28 Thread Jan Beulich
As of 2d0557c5cbeb ("x86: Fold page_info lock into type_info") we
haven't been updating guest page table entries through linear page
tables anymore. All updates have been using domain mappings since then.
Drop the use of guest/user access helpers there, and hence also the
boolean return values of the involved functions.

update_intpte(), otoh, retains its boolean return type for now, as we
may want to bound the CMPXCHG retry loop, indicating failure to the
caller when the retry threshold is exceeded.

With this, {,__}cmpxchg_user() become unused, so they too get dropped.
(In fact, dropping them was the motivation of making the change.)

Signed-off-by: Jan Beulich 
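
Purely to illustrate the direction hinted at above - bounding the retry
loop is a possible future change, not part of this patch, and both the
helper and the threshold below are invented:

    #define INTPTE_CMPXCHG_MAX_TRIES 4 /* hypothetical bound */

    static bool update_intpte_bounded(intpte_t *p, intpte_t old, intpte_t new,
                                      mfn_t mfn, struct vcpu *v,
                                      bool preserve_ad)
    {
        unsigned int i;

        for ( i = 0; i < INTPTE_CMPXCHG_MAX_TRIES; ++i )
        {
            intpte_t t = old, _new = new;

            /* Recompute A/D bits against the value last seen. */
            if ( preserve_ad )
                _new |= old & (_PAGE_ACCESSED | _PAGE_DIRTY);

            paging_cmpxchg_guest_entry(v, p, &t, _new, mfn);

            if ( t == old )
                return true; /* exchange took place */

            old = t; /* retry against the value actually seen */
        }

        return false; /* indicate failure once the bound is exceeded */
    }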

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4033,8 +4033,8 @@ long do_mmu_update(
 
 case PGT_writable_page:
 perfc_incr(writable_mmu_updates);
-if ( paging_write_guest_entry(v, va, req.val, mfn) )
-rc = 0;
+paging_write_guest_entry(v, va, req.val, mfn);
+rc = 0;
 break;
 }
 page_unlock(page);
@@ -4044,9 +4044,9 @@ long do_mmu_update(
 else if ( get_page_type(page, PGT_writable_page) )
 {
 perfc_incr(writable_mmu_updates);
-if ( paging_write_guest_entry(v, va, req.val, mfn) )
-rc = 0;
+paging_write_guest_entry(v, va, req.val, mfn);
 put_page_type(page);
+rc = 0;
 }
 
 put_page(page);
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -396,9 +396,9 @@ int shadow_write_p2m_entry(struct p2m_do
unsigned int level);
 
 /* Functions that atomically write PV guest PT entries */
-bool sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new,
+void sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new,
   mfn_t gmfn);
-bool sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
+void sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
 intpte_t new, mfn_t gmfn);
 
 /* Update all the things that are derived from the guest's CR0/CR3/CR4.
--- a/xen/arch/x86/mm/shadow/pv.c
+++ b/xen/arch/x86/mm/shadow/pv.c
@@ -26,43 +26,35 @@
 
 /*
  * Write a new value into the guest pagetable, and update the shadows
- * appropriately.  Returns false if we page-faulted, true for success.
+ * appropriately.
  */
-bool
+void
 sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new, mfn_t gmfn)
 {
-unsigned int failed;
-
 paging_lock(v->domain);
-failed = __copy_to_user(p, &new, sizeof(new));
-if ( failed != sizeof(new) )
-sh_validate_guest_entry(v, gmfn, p, sizeof(new));
+write_atomic(p, new);
+sh_validate_guest_entry(v, gmfn, p, sizeof(new));
 paging_unlock(v->domain);
-
-return !failed;
 }
 
 /*
  * Cmpxchg a new value into the guest pagetable, and update the shadows
- * appropriately. Returns false if we page-faulted, true if not.
+ * appropriately.
  * N.B. caller should check the value of "old" to see if the cmpxchg itself
  * was successful.
  */
-bool
+void
 sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
intpte_t new, mfn_t gmfn)
 {
-bool failed;
-intpte_t t = *old;
+intpte_t t;
 
 paging_lock(v->domain);
-failed = cmpxchg_user(p, t, new);
+t = cmpxchg(p, *old, new);
 if ( t == *old )
 sh_validate_guest_entry(v, gmfn, p, sizeof(new));
 *old = t;
 paging_unlock(v->domain);
-
-return !failed;
 }
 
 /*
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -43,9 +43,7 @@ static inline bool update_intpte(intpte_
 
 #ifndef PTE_UPDATE_WITH_CMPXCHG
 if ( !preserve_ad )
-{
-rv = paging_write_guest_entry(v, p, new, mfn);
-}
+paging_write_guest_entry(v, p, new, mfn);
 else
 #endif
 {
@@ -58,14 +56,7 @@ static inline bool update_intpte(intpte_
 if ( preserve_ad )
 _new |= old & (_PAGE_ACCESSED | _PAGE_DIRTY);
 
-rv = paging_cmpxchg_guest_entry(v, p, &t, _new, mfn);
-if ( unlikely(rv == 0) )
-{
-gdprintk(XENLOG_WARNING,
- "Failed to update %" PRIpte " -> %" PRIpte
- ": saw %" PRIpte "\n", old, _new, t);
-break;
-}
+paging_cmpxchg_guest_entry(v, p, &t, _new, mfn);
 
 if ( t == old )
 break;
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -168,10 +168,9 @@ static int ptwr_emulated_update(unsigned
 if ( p_old )
 {
 ol1e = l1e_from_intpte(old);
-if ( !paging_cmpxchg_guest_entry(v, &l1e_get_intpte(*pl1e),
- &old, l1e_get_intpte(nl1e), mfn) )

[libvirt test] 154990: trouble: blocked/broken/preparing/queued

2020-09-28 Thread osstest service owner
flight 154990 libvirt running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154990/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-pvopsbroken
 build-amd64-xsm  broken
 build-arm64-xsm  broken
 build-armhf  broken
 build-i386   broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-i3864 host-install(4)broken REGR. vs. 151777
 build-i386-xsm4 host-install(4)broken REGR. vs. 151777
 build-i386-pvops  4 host-install(4)broken REGR. vs. 151777
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 151777
 build-amd64   4 host-install(4)broken REGR. vs. 151777
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 151777
 build-amd64-xsm   4 host-install(4)broken REGR. vs. 151777
 build-armhf   4 host-install(4)broken REGR. vs. 151777
 build-arm64-libvirt   queued
 test-arm64-arm64-libvirt  queued
 test-arm64-arm64-libvirt-qcow2  queued
 test-arm64-arm64-libvirt-xsm  queued
 test-armhf-armhf-libvirt  queued
 test-armhf-armhf-libvirt-raw  queued
 build-arm64-pvops 2 hosts-allocate   running
 build-arm64   2 hosts-allocate   running
 build-armhf-pvops 2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  76356ea7600ba9815fb942c1e852b5c76364b936
baseline version:
 libvirt  2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   80 days
Failing since        151818  2020-07-11 04:18:52 Z   79 days   74 attempts
Testing same since   154909  2020-09-26 16:35:44 Z    1 days    2 attempts


People who touched revisions under test:
  Andika Triwidada 
  Andrea Bolognani 
  Balázs Meskó 
  Bastien Orivel 
  Bihong Yu 
  Binfeng Wu 
  Boris Fiuczynski 
  Christian Ehrhardt 
  Collin Walling 
  Cornelia Huck 
  Côme Borsoi 
  Daniel Henrique Barboza 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Erik Skultety 
  Fabian Freyer 
  Fangge Jin 
  Fedora Weblate Translation 
  Han Han 
  Hao Wang 
  Ian Wienand 
  Jamie Strandboge 
  Jamie Strandboge 
  Jean-Baptiste Holcroft 
  Jianan Gao 
  Jim Fehlig 
  Jin Yan 
  Jiri Denemark 
  Jonathon Jongsma 
  Ján Tomko 
  Kashyap Chamarthy 
  Kevin Locke 
  Laine Stump 
  Liao Pingfang 
  Lin Ma 
  Lin Ma 
  Marc Hartmayer 
  Marek Marczykowski-Górecki 
  Martin Kletzander 
  Matt Coleman 
  Matt Coleman 
  Michal Privoznik 
  Milo Casagrande 
  Neal Gompa 
  Nikolay Shirokovskiy 
  Patrick Magauran 
  Paulo de Rezende Pinatti 
  Pavel Hrdina 
  Peter Krempa 
  Pino Toscano 
  Pino Toscano 
  Piotr Drąg 
  Prathamesh Chavan 
  Roman Bogorodskiy 
  Ryan Schmidt 
  Sam Hartman 
  Scott Shambarger 
  Sebastian Mitterle 
  Simon Gaiser 
  Stefan Bader 
  Stefan Berger 
  Szymon Scholz 
  Thomas Huth 
  Tim Wiederhake 
  Tomáš Golembiovský 
  Wang Xin 
  Weblate 
  Yang Hang 
  Yanqiu Zhang 
  Yi Li 
  Yi Wang 
  Yuri Chornoivan 
  Zheng Chuan 

jobs:
 build-amd64-xsm  broken  
 build-arm64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-arm64  preparing
 build-armhf  

[xen-unstable test] 154688: regressions - trouble: broken/fail/pass/preparing

2020-09-28 Thread osstest service owner
flight 154688 xen-unstable running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154688/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd  broken
 test-armhf-armhf-xl-vhd   4 host-install(4)broken REGR. vs. 154611
 test-xtf-amd64-amd64-4   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-2   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-1   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-5   68 xtf/test-hvm64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-4   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-2   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-1   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154611
 test-xtf-amd64-amd64-5   106 xtf/test-pv64-xsa-221   fail REGR. vs. 154611
 test-amd64-amd64-xl-xsm  12 guest-start  fail REGR. vs. 154611
 test-amd64-i386-libvirt-xsm  12 guest-start  fail REGR. vs. 154611
 test-amd64-amd64-libvirt-xsm 12 guest-start  fail REGR. vs. 154611
 test-amd64-i386-xl-xsm   12 guest-start  fail REGR. vs. 154611
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 154611
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 154611
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 154611
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 154611
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 154611
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 154611
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 154611
 test-xtf-amd64-amd64-32 hosts-allocate   running
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  2 hosts-allocate running
 test-amd64-amd64-xl-qemuu-ovmf-amd64  2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 154611
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 154611
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 154611
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 154611
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 154611
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 154611
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 154611
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 154611
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 154611
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt 13 mig

[PATCH 0/2] x86/mm: {paging,sh}_{cmpxchg,write}_guest_entry() adjustments

2020-09-28 Thread Jan Beulich
1: {paging,sh}_{cmpxchg,write}_guest_entry() cannot fault
2: remove some indirection from {paging,sh}_cmpxchg_guest_entry()

Jan



[linux-5.4 test] 154979: trouble: blocked/broken/preparing/queued

2020-09-28 Thread osstest service owner
flight 154979 linux-5.4 running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154979/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-pvopsbroken
 build-arm64  broken
 build-arm64-pvopsbroken
 build-arm64-xsm  broken
 build-armhf  broken
 build-i386   broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-i386-xsm4 host-install(4)broken REGR. vs. 154718
 build-i3864 host-install(4)broken REGR. vs. 154718
 build-i386-pvops  4 host-install(4)broken REGR. vs. 154718
 build-arm64   4 host-install(4)broken REGR. vs. 154718
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 154718
 build-arm64-pvops 4 host-install(4)broken REGR. vs. 154718
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 154718
 build-amd64   4 host-install(4)broken REGR. vs. 154718
 build-armhf   4 host-install(4)broken REGR. vs. 154718
 test-armhf-armhf-examine  queued
 test-armhf-armhf-libvirt  queued
 test-armhf-armhf-libvirt-raw  queued
 test-armhf-armhf-xl   queued
 test-armhf-armhf-xl-arndale   queued
 test-armhf-armhf-xl-credit1   queued
 test-armhf-armhf-xl-credit2   queued
 test-armhf-armhf-xl-cubietruck  queued
 test-armhf-armhf-xl-multivcpu  queued
 test-armhf-armhf-xl-rtds  queued
 test-armhf-armhf-xl-vhd   queued
 test-amd64-amd64-xl-xsm   queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmqueued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   queued
 test-amd64-amd64-libvirt-xsm  queued
 test-amd64-i386-xl-xsmqueued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued
 test-amd64-i386-libvirt-xsm   queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued
 build-amd64-xsm   2 hosts-allocate   running
 build-armhf-pvops 2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-examine   1 build-check(1)   blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)   blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-x

[xen-4.11-testing test] 154894: trouble: blocked/broken/pass/queued/running

2020-09-28 Thread osstest service owner
flight 154894 xen-4.11-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154894/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-pvopsbroken
 build-arm64  broken
 build-arm64-pvopsbroken
 build-arm64-xsm  broken
 build-armhf  broken
 build-armhf-pvopsbroken
 build-i386   broken
 build-i386-prev  broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-arm64   4 host-install(4)broken REGR. vs. 151714
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 151714
 build-i3864 host-install(4)broken REGR. vs. 151714
 build-armhf-pvops 4 host-install(4)broken REGR. vs. 151714
 build-i386-prev   4 host-install(4)broken REGR. vs. 151714
 build-i386-xsm4 host-install(4)broken REGR. vs. 151714
 build-amd64   4 host-install(4)broken REGR. vs. 151714
 build-i386-pvops  4 host-install(4)broken REGR. vs. 151714
 build-arm64-pvops 4 host-install(4)broken REGR. vs. 151714
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 151714
 build-armhf   4 host-install(4)broken REGR. vs. 151714
 test-amd64-i386-xl-xsmqueued
 test-amd64-amd64-xl-xsm   queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmqueued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   queued
 test-amd64-amd64-libvirt-xsm  queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued
 test-amd64-i386-libvirt-xsm   queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued
 build-amd64-xsm   4 host-install(4)  running
 build-amd64-xsm   3 syslog-serverrunning

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-11 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 build-arm64-libvirt   1 bu

[xen-4.12-testing test] 154981: trouble: blocked/broken/preparing/queued

2020-09-28 Thread osstest service owner
flight 154981 xen-4.12-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154981/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-prev broken
 build-amd64-pvopsbroken
 build-amd64-xtf  broken
 build-arm64  broken
 build-arm64-pvopsbroken
 build-arm64-xsm  broken
 build-armhf  broken
 build-armhf-pvopsbroken
 build-i386   broken
 build-i386-prev  broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-i386-xsm4 host-install(4)broken REGR. vs. 154601
 build-i3864 host-install(4)broken REGR. vs. 154601
 build-arm64   4 host-install(4)broken REGR. vs. 154601
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 154601
 build-arm64-pvops 4 host-install(4)broken REGR. vs. 154601
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 154601
 build-i386-prev   4 host-install(4)broken REGR. vs. 154601
 build-i386-pvops  4 host-install(4)broken REGR. vs. 154601
 build-amd64-xtf   4 host-install(4)broken REGR. vs. 154601
 build-amd64   4 host-install(4)broken REGR. vs. 154601
 build-amd64-prev  4 host-install(4)broken REGR. vs. 154601
 build-armhf-pvops 4 host-install(4)broken REGR. vs. 154601
 build-armhf   4 host-install(4)broken REGR. vs. 154601
 test-amd64-i386-xl-xsmqueued
 test-amd64-amd64-xl-xsm   queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmqueued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   queued
 test-amd64-amd64-libvirt-xsm  queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued
 test-amd64-i386-libvirt-xsm   queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued
 build-amd64-xsm   2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-11 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-21 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blo

[xen-4.13-testing test] 154994: trouble: blocked/broken/preparing/queued/running

2020-09-28 Thread osstest service owner
flight 154994 xen-4.13-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154994/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-prev broken
 build-amd64-pvopsbroken
 build-armhf  broken
 build-armhf-pvopsbroken
 build-i386   broken
 build-i386-prev  broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-i386-xsm4 host-install(4)broken REGR. vs. 154358
 build-i3864 host-install(4)broken REGR. vs. 154358
 build-amd64-prev  4 host-install(4)broken REGR. vs. 154358
 build-i386-prev   4 host-install(4)broken REGR. vs. 154358
 build-i386-pvops  4 host-install(4)broken REGR. vs. 154358
 build-amd64   4 host-install(4)broken REGR. vs. 154358
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 154358
 build-armhf-pvops 4 host-install(4)broken REGR. vs. 154358
 build-armhf   4 host-install(4)broken REGR. vs. 154358
 test-amd64-i386-xl-xsmqueued
 test-arm64-arm64-libvirt-xsm  queued
 test-arm64-arm64-xl   queued
 test-arm64-arm64-xl-credit1   queued
 test-arm64-arm64-xl-credit2   queued
 test-arm64-arm64-xl-seattle   queued
 test-arm64-arm64-xl-thunderx  queued
 test-arm64-arm64-xl-xsm   queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmqueued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
 build-arm64-libvirt   queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   queued
 test-amd64-amd64-libvirt-xsm  queued
 test-amd64-amd64-migrupgrade  queued
 test-amd64-amd64-xl-xsm   queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued
 test-amd64-i386-libvirt-xsm   queued
 test-amd64-i386-migrupgrade   queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued
 test-xtf-amd64-amd64-1queued
 test-xtf-amd64-amd64-2queued
 test-xtf-amd64-amd64-3queued
 test-xtf-amd64-amd64-4queued
 test-xtf-amd64-amd64-5queued
 build-amd64-xsm   2 hosts-allocate   running
 build-arm64   2 hosts-allocate   running
 build-arm64-pvops 2 hosts-allocate   running
 build-arm64-xsm   2 hosts-allocate   running
 build-amd64-prev  3 syslog-serverrunning
 build-amd64-prev  5 capture-logs running
 build-amd64-xtf   4 host-install(4)  running
 build-amd64-xtf   3 syslog-serverrunning

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) 

[linux-linus test] 154694: trouble: fail/pass/preparing/queued

2020-09-28 Thread osstest service owner
flight 154694 linux-linus running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154694/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-examine   queued
 test-amd64-coresched-i386-xl  queued
 test-amd64-i386-xl-xsmqueued
 test-amd64-i386-xl-shadow queued
 test-amd64-i386-freebsd10-amd64  queued
 test-amd64-i386-freebsd10-i386  queued
 test-amd64-i386-libvirt   queued
 test-amd64-i386-libvirt-pair  queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued
 test-amd64-i386-libvirt-xsm   queued
 test-amd64-i386-pair  queued
 test-amd64-i386-qemut-rhel6hvm-amd  queued
 test-amd64-i386-qemut-rhel6hvm-intel  queued
 test-amd64-i386-qemuu-rhel6hvm-amd  queued
 test-amd64-i386-qemuu-rhel6hvm-intel  queued
 test-amd64-i386-xlqueued
 test-amd64-i386-xl-pvshim queued
 test-amd64-i386-xl-qemut-debianhvm-amd64 queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
 test-amd64-i386-xl-qemut-win7-amd64  queued
 test-amd64-i386-xl-qemut-ws16-amd64  queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
 test-amd64-i386-xl-qemuu-ovmf-amd64  queued
 test-amd64-i386-xl-qemuu-win7-amd64  queued
 test-amd64-i386-xl-qemuu-ws16-amd64  queued
 test-amd64-i386-xl-rawqueued
 build-i386-pvops  2 hosts-allocate   running

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 152332
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 152332
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 152332
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfa

[xen-4.10-testing test] 154991: trouble: blocked/broken/preparing/queued/running

2020-09-28 Thread osstest service owner
flight 154991 xen-4.10-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154991/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-xsm  broken
 build-amd64-xtf  broken
 build-arm64  broken
 build-arm64-pvopsbroken
 build-arm64-xsm  broken
 build-armhf  broken
 build-armhf-pvopsbroken
 build-i386   broken
 build-i386-prev  broken
 build-i386-pvops broken
 build-i386-xsm   broken
 build-i386-pvops  4 host-install(4)broken REGR. vs. 151728
 build-i386-prev   4 host-install(4)broken REGR. vs. 151728
 build-i386-xsm4 host-install(4)broken REGR. vs. 151728
 build-amd64-xsm   4 host-install(4)broken REGR. vs. 151728
 build-i3864 host-install(4)broken REGR. vs. 151728
 build-armhf-pvops 4 host-install(4)broken REGR. vs. 151728
 build-arm64   4 host-install(4)broken REGR. vs. 151728
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 151728
 build-arm64-pvops 4 host-install(4)broken REGR. vs. 151728
 build-amd64   4 host-install(4)broken REGR. vs. 151728
 build-amd64-xtf   4 host-install(4)broken REGR. vs. 151728
 build-armhf   4 host-install(4)broken REGR. vs. 151728
 test-xtf-amd64-amd64-1queued
 test-xtf-amd64-amd64-2queued
 test-xtf-amd64-amd64-3queued
 test-amd64-amd64-xl-xsm   queued
 test-amd64-amd64-xl-shadowqueued
 test-amd64-amd64-xl-rtds  queued
 test-amd64-amd64-xl-qemuu-ws16-amd64  queued
 test-amd64-amd64-xl-qemuu-win7-amd64  queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64  queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrictqueued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64queued
 test-amd64-amd64-xl-qemut-ws16-amd64  queued
 test-amd64-amd64-xl-qemut-win7-amd64  queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmqueued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64queued
 test-amd64-amd64-xl-qcow2 queued
 test-amd64-amd64-xl-pvhv2-intel  queued
 test-amd64-amd64-xl-pvhv2-amd  queued
 test-amd64-amd64-xl-multivcpu  queued
 test-amd64-amd64-amd64-pvgrub  queued
 test-amd64-amd64-i386-pvgrub  queued
 test-amd64-amd64-xl-credit2   queued
 test-amd64-amd64-libvirt  queued
 test-amd64-amd64-libvirt-pair  queued
 test-amd64-amd64-xl-credit1   queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   queued
 test-amd64-amd64-libvirt-vhd  queued
 test-amd64-amd64-xl   queued
 test-amd64-amd64-libvirt-xsm  queued
 test-amd64-amd64-livepatchqueued
 test-amd64-amd64-qemuu-nested-intel  queued
 test-amd64-amd64-migrupgrade  queued
 test-amd64-amd64-pair queued
 test-amd64-amd64-qemuu-nested-amd  queued
 test-amd64-amd64-pygrub   queued
 test-amd64-amd64-qemuu-freebsd11-amd64  queued
 test-amd64-amd64-qemuu-freebsd12-amd64  queued
 test-amd64-i386-migrupgrade   queued
 test-xtf-amd64-amd64-4queued
 test-xtf-amd64-amd64-5queued
 build-amd64-pvops 2 hosts-allocate   running
 build-amd64-prev  4 host-install(4)  running
 build-amd64-prev  3 syslog-serverrunning

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 

[seabios test] 154974: trouble: blocked/broken/preparing/queued/running

2020-09-28 Thread osstest service owner
flight 154974 seabios running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154974/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-amd64-pvopsbroken
 build-amd64-xsm  broken
 build-amd64   4 host-install(4)broken REGR. vs. 152554
 build-amd64-pvops 4 host-install(4)broken REGR. vs. 152554
 build-amd64-xsm   4 host-install(4)broken REGR. vs. 152554
 build-i386-libvirtqueued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued
 test-amd64-i386-qemuu-rhel6hvm-amd  queued
 test-amd64-i386-qemuu-rhel6hvm-intel  queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
 test-amd64-i386-xl-qemuu-win7-amd64  queued
 test-amd64-i386-xl-qemuu-ws16-amd64  queued
 build-i3862 hosts-allocate   running
 build-i386-xsm2 hosts-allocate   running
 build-i386-pvops  4 host-install(4)  running
 build-i386-pvops  3 syslog-serverrunning

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a

version targeted for testing:
 seabios  41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5
baseline version:
 seabios  155821a1990b6de78dde5f98fa5ab90e802021e0

Last test of basis   152554  2020-08-10 15:41:45 Z   48 days
Testing same since   154814  2020-09-25 16:10:32 Z    2 days    1 attempts


People who touched revisions under test:
  Daniel P. Berrangé 
  Matt DeVillier 

jobs:
 build-amd64-xsm  broken  
 build-i386-xsm   preparing
 build-amd64  broken  
 build-i386   preparing
 build-amd64-libvirt  blocked 
 build-i386-libvirt   queued  
 build-amd64-pvopsbroken  
 build-i386-pvops running 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmqueued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  queued  
 test-amd64-amd64-qemuu-nested-amdblocked 
 test-amd64-i386-qemuu-rhel6hvm-amd   queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64 queued  
 test-amd64-amd64-qemuu-freebsd11-amd64   blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64   blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64 blocked 
 test-amd64-i386-xl-qemuu-win7-amd64  queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64 blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64  queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrictblocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued  
 test-amd64-amd64-qemuu-nested-intel  blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  queued  


[xen-unstable-smoke test] 154998: trouble: blocked/broken/preparing/queued/running

2020-09-28 Thread osstest service owner
flight 154998 xen-unstable-smoke running [real]
http://logs.test-lab.xenproject.org/osstest/logs/154998/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm  broken
 build-arm64-xsm   4 host-install(4)broken REGR. vs. 154728
 build-amd64-libvirt   queued
 test-amd64-amd64-libvirt  queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64queued
 test-armhf-armhf-xl   queued
 build-amd64   2 hosts-allocate   running
 build-armhf   4 host-install(4)  running
 build-armhf   3 syslog-serverrunning

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a

version targeted for testing:
 xen  4bdbf746ac9152e70f264f87db4472707da805ce
baseline version:
 xen  5bcac985498ed83d89666959175ca9c9ed561ae1

Last test of basis   154728  2020-09-24 21:01:24 Z    3 days
Testing same since  (not found) 0 attempts


People who touched revisions under test:
  Jan Beulich 
  Julien Grall 
  Marek Marczykowski-Górecki 
  Roger Pau Monné 

jobs:
 build-arm64-xsm  broken  
 build-amd64  preparing
 build-armhf  running 
 build-amd64-libvirt  queued  
 test-armhf-armhf-xl  queued  
 test-arm64-arm64-xl-xsm  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64queued  
 test-amd64-amd64-libvirt queued  



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-libvirt queued
broken-job build-arm64-xsm broken
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-armhf-armhf-xl queued
broken-step build-arm64-xsm host-install(4)

Not pushing.


commit 4bdbf746ac9152e70f264f87db4472707da805ce
Author: Marek Marczykowski-Górecki 
Date:   Mon Sep 28 10:43:10 2020 +0200

x86/S3: fix shadow stack resume path

Fix the resume path to load the shadow stack pointer from saved_ssp (not
saved_rsp), to match what the suspend path does.

Fixes: 633ecc4a7cb2 ("x86/S3: Save and restore Shadow Stack configuration")
Backport: 4.14
Signed-off-by: Marek Marczykowski-Górecki 
Reviewed-by: Jan Beulich 

commit 28fb8cf323dd93f59a9c851c93ba9b79de8b1c4e
Author: Roger Pau Monné 
Date:   Mon Sep 28 10:42:29 2020 +0200

x86/iommu: remove code to fetch MSI message from remap table

Remove the code to compose an MSI message based on the information from
the MSI registers and the data in the interrupt remapping table.
Since the removal of read_msi_msg and its user, there's no longer a
need for such code, as the last written (untranslated) MSI message is
cached internally by Xen.

Suggested-by: Jan Beulich 
Signed-off-by: Roger Pau Monné 
Reviewed-by: Andrew Cooper 

commit f9ffd20f946c0315937f85d2f124a9bc4be49473
Author: Roger Pau Monné 
Date:   Mon Sep 28 10:41:48 2020 +0200

x86/hpet: remove hpet_msi_read

It's dead code, even more so now that read_msi_msg has been removed.

Suggested-by: Jan Beulich 
Signed-off-by: Roger Pau Monné 
Reviewed-by: Andrew Cooper 

commit fe41405f5ee650d3fe39105cf59193b1494cdcdc
Author: Jan Beulich 
Date:   Mon Sep 28 10:40:53 2020 +0200

common/Kconfig: sort HAS_*

Later additions look to have been put at the end, with MEM_ACCESS*
somewhere in the middle. Re-sort this part of the file, in the hope that
future additions will take note of the ordering intended here.

Signed-off-by: Jan Beulich 
Acked-by: Julien Grall 

commit 643e2f3cbb3b607f3365b230f439845e9bf113b0
Author: Jan Beulich 
Date:   Mon Sep 28 10:39:47 2020 +0200

EFI: some easy constification

Inspired by some of Trammell's suggestions, this harvests some low
hanging fruit, without needing t

[PATCH 12/12] evtchn: convert domain event lock to an r/w one

2020-09-28 Thread Jan Beulich
Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
across pCPU-s) and the uses in EOI handling in PCI pass-through code,
serializing potentially an entire domain isn't helpful when no state
changes other than state which is, e.g., further protected by the
per-channel lock.

Unfortunately this implies dropping of lock profiling for this lock,
until r/w locks may get enabled for such functionality.

While ->notify_vcpu_id is now meant to be consistently updated with the
per-channel lock held for writing, an extension applies to ECS_PIRQ: The
field is also guaranteed not to change with the per-domain event lock
held. Therefore the unlink_pirq_port() call from evtchn_bind_vcpu() as
well as the link_pirq_port() one from evtchn_bind_pirq() could in
principle be moved out of the per-channel locked regions, but this
further code churn didn't seem worth it.

Signed-off-by: Jan Beulich 
---
RFC:
* In evtchn_bind_vcpu() the question is whether limiting the use of
  write_lock() to just the ECS_PIRQ case is really worth it.
* In flask_get_peer_sid() the question is whether we wouldn't better
  switch to using the per-channel lock.
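
To illustrate the split the conversion enables (sketch only - the calls
are taken from the hunks below, the framing around them is made up):

    /* Pure readers, e.g. hvm_migrate_pirqs(), may now run concurrently: */
    read_lock(&d->event_lock);
    pt_pirq_iterate(d, migrate_pirq, v);
    read_unlock(&d->event_lock);

    /* Writers, i.e. anything (un)binding pIRQ-s, still fully serialize: */
    write_lock(&d->event_lock);
    unmap_domain_pirq(d, pirq);
    write_unlock(&d->event_lock);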

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -909,7 +909,7 @@ int arch_domain_soft_reset(struct domain
 if ( !is_hvm_domain(d) )
 return -EINVAL;
 
-spin_lock(&d->event_lock);
+write_lock(&d->event_lock);
 for ( i = 0; i < d->nr_pirqs ; i++ )
 {
 if ( domain_pirq_to_emuirq(d, i) != IRQ_UNBOUND )
@@ -919,7 +919,7 @@ int arch_domain_soft_reset(struct domain
 break;
 }
 }
-spin_unlock(&d->event_lock);
+write_unlock(&d->event_lock);
 
 if ( ret )
 return ret;
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -528,9 +528,9 @@ void hvm_migrate_pirqs(struct vcpu *v)
 if ( !is_iommu_enabled(d) || !hvm_domain_irq(d)->dpci )
return;
 
-spin_lock(&d->event_lock);
+read_lock(&d->event_lock);
 pt_pirq_iterate(d, migrate_pirq, v);
-spin_unlock(&d->event_lock);
+read_unlock(&d->event_lock);
 }
 
 static bool hvm_get_pending_event(struct vcpu *v, struct x86_event *info)
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -404,9 +404,9 @@ int hvm_inject_msi(struct domain *d, uin
 {
 int rc;
 
-spin_lock(&d->event_lock);
+write_lock(&d->event_lock);
 rc = map_domain_emuirq_pirq(d, pirq, IRQ_MSI_EMU);
-spin_unlock(&d->event_lock);
+write_unlock(&d->event_lock);
 if ( rc )
 return rc;
 info = pirq_info(d, pirq);
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -203,9 +203,9 @@ static int vioapic_hwdom_map_gsi(unsigne
 {
 gprintk(XENLOG_WARNING, "vioapic: error binding GSI %u: %d\n",
 gsi, ret);
-spin_lock(&currd->event_lock);
+write_lock(&currd->event_lock);
 unmap_domain_pirq(currd, pirq);
-spin_unlock(&currd->event_lock);
+write_unlock(&currd->event_lock);
 }
 pcidevs_unlock();
 
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -465,7 +465,7 @@ int msixtbl_pt_register(struct domain *d
 int r = -EINVAL;
 
 ASSERT(pcidevs_locked());
-ASSERT(spin_is_locked(&d->event_lock));
+ASSERT(rw_is_write_locked(&d->event_lock));
 
 if ( !msixtbl_initialised(d) )
 return -ENODEV;
@@ -535,7 +535,7 @@ void msixtbl_pt_unregister(struct domain
 struct msixtbl_entry *entry;
 
 ASSERT(pcidevs_locked());
-ASSERT(spin_is_locked(&d->event_lock));
+ASSERT(rw_is_write_locked(&d->event_lock));
 
 if ( !msixtbl_initialised(d) )
 return;
@@ -589,13 +589,13 @@ void msixtbl_pt_cleanup(struct domain *d
 if ( !msixtbl_initialised(d) )
 return;
 
-spin_lock(&d->event_lock);
+write_lock(&d->event_lock);
 
 list_for_each_entry_safe( entry, temp,
   &d->arch.hvm.msixtbl_list, list )
 del_msixtbl_entry(entry);
 
-spin_unlock(&d->event_lock);
+write_unlock(&d->event_lock);
 }
 
 void msix_write_completion(struct vcpu *v)
@@ -719,9 +719,9 @@ int vpci_msi_arch_update(struct vpci_msi
  msi->arch.pirq, msi->mask);
 if ( rc )
 {
-spin_lock(&pdev->domain->event_lock);
+write_lock(&pdev->domain->event_lock);
 unmap_domain_pirq(pdev->domain, msi->arch.pirq);
-spin_unlock(&pdev->domain->event_lock);
+write_unlock(&pdev->domain->event_lock);
 pcidevs_unlock();
 msi->arch.pirq = INVALID_PIRQ;
 return rc;
@@ -760,9 +760,9 @@ static int vpci_msi_enable(const struct
 rc = vpci_msi_update(pdev, data, address, vectors, pirq, mask);
 if ( rc )
 {
-spin_lock(&pdev->domain->event_lock);
+write_lock(&pdev->domain->event_lock);
 unmap_domain_pirq(pdev->domain, pirq);
-spin_unlock(&pdev->domai
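
[Editor's note: the conversion pattern above can be illustrated with a
minimal userspace sketch -- not Xen code -- using POSIX rwlocks in place
of Xen's rwlock_t. All names below (struct domain, count_bound_pirqs,
bind_pirq) are illustrative assumptions only: read-mostly paths such as
hvm_migrate_pirqs() take the lock for reading and may now run
concurrently, while paths mutating bindings still take it exclusively.]

#include <pthread.h>

struct domain {
    pthread_rwlock_t event_lock;
    int pirq_to_irq[16];
};

/* Read-only traversal (cf. the read_lock() sites above): several such
 * callers can proceed in parallel where the old spinlock serialized
 * them. */
static int count_bound_pirqs(struct domain *d)
{
    int i, n = 0;

    pthread_rwlock_rdlock(&d->event_lock);
    for ( i = 0; i < 16; i++ )
        n += (d->pirq_to_irq[i] != 0);
    pthread_rwlock_unlock(&d->event_lock);

    return n;
}

/* State-changing path (cf. the write_lock() sites above): still fully
 * exclusive against both readers and other writers. */
static void bind_pirq(struct domain *d, int pirq, int irq)
{
    pthread_rwlock_wrlock(&d->event_lock);
    d->pirq_to_irq[pirq] = irq;
    pthread_rwlock_unlock(&d->event_lock);
}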

[PATCH 11/12] evtchn: convert vIRQ lock to an r/w one

2020-09-28 Thread Jan Beulich
There's no need to serialize all sending of vIRQ-s; all that's needed
is serialization against the closing of the respective event channels
(by means of a barrier). To facilitate the conversion, introduce a new
rw_barrier().

Signed-off-by: Jan Beulich 

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -160,7 +160,7 @@ struct vcpu *vcpu_create(struct domain *
 v->vcpu_id = vcpu_id;
 v->dirty_cpu = VCPU_CPU_CLEAN;
 
-spin_lock_init(&v->virq_lock);
+rwlock_init(&v->virq_lock);
 
 tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -640,7 +640,7 @@ int evtchn_close(struct domain *d1, int
 if ( v->virq_to_evtchn[chn1->u.virq] != port1 )
 continue;
 v->virq_to_evtchn[chn1->u.virq] = 0;
-spin_barrier(&v->virq_lock);
+rw_barrier(&v->virq_lock);
 }
 break;
 
@@ -794,7 +794,7 @@ void send_guest_vcpu_virq(struct vcpu *v
 
 ASSERT(!virq_is_global(virq));
 
-spin_lock_irqsave(&v->virq_lock, flags);
+read_lock_irqsave(&v->virq_lock, flags);
 
 port = v->virq_to_evtchn[virq];
 if ( unlikely(port == 0) )
@@ -807,7 +807,7 @@ void send_guest_vcpu_virq(struct vcpu *v
 spin_unlock(&chn->lock);
 
  out:
-spin_unlock_irqrestore(&v->virq_lock, flags);
+read_unlock_irqrestore(&v->virq_lock, flags);
 }
 
 void send_guest_global_virq(struct domain *d, uint32_t virq)
@@ -826,7 +826,7 @@ void send_guest_global_virq(struct domai
 if ( unlikely(v == NULL) )
 return;
 
-spin_lock_irqsave(&v->virq_lock, flags);
+read_lock_irqsave(&v->virq_lock, flags);
 
 port = v->virq_to_evtchn[virq];
 if ( unlikely(port == 0) )
@@ -838,7 +838,7 @@ void send_guest_global_virq(struct domai
 spin_unlock(&chn->lock);
 
  out:
-spin_unlock_irqrestore(&v->virq_lock, flags);
+read_unlock_irqrestore(&v->virq_lock, flags);
 }
 
 void send_guest_pirq(struct domain *d, const struct pirq *pirq)
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -2,7 +2,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -334,6 +334,12 @@ void _spin_unlock_recursive(spinlock_t *
 }
 }
 
+void _rw_barrier(rwlock_t *lock)
+{
+check_barrier(&lock->lock.debug);
+do { smp_mb(); } while ( _rw_is_locked(lock) );
+}
+
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
 
 struct lock_profile_anc {
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -237,6 +237,8 @@ static inline int _rw_is_write_locked(rw
 return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
 }
 
+void _rw_barrier(rwlock_t *lock);
+
 #define read_lock(l)  _read_lock(l)
 #define read_lock_irq(l)  _read_lock_irq(l)
 #define read_lock_irqsave(l, f) \
@@ -266,6 +268,7 @@ static inline int _rw_is_write_locked(rw
 #define rw_is_locked(l)   _rw_is_locked(l)
 #define rw_is_write_locked(l) _rw_is_write_locked(l)
 
+#define rw_barrier(l) _rw_barrier(l)
 
 typedef struct percpu_rwlock percpu_rwlock_t;
 
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -235,7 +235,7 @@ struct vcpu
 
 /* IRQ-safe virq_lock protects against delivering VIRQ to stale evtchn. */
 evtchn_port_tvirq_to_evtchn[NR_VIRQS];
-spinlock_t   virq_lock;
+rwlock_t virq_lock;
 
 /* Tasklet for continue_hypercall_on_cpu(). */
 struct tasklet   continue_hypercall_tasklet;
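
[Editor's note: the close-vs-send synchronization above can be sketched
in userspace -- this is not Xen code -- with an explicit reader count
standing in for the read-held virq_lock, and a spin on that count
standing in for rw_barrier(). All names here are illustrative
assumptions.]

#include <stdatomic.h>
#include <stddef.h>

static _Atomic(int *) published;    /* cf. v->virq_to_evtchn[virq] */
static atomic_int readers;          /* cf. read-held virq_lock */

static void sender(void)            /* cf. send_guest_vcpu_virq() */
{
    int *p;

    atomic_fetch_add(&readers, 1);          /* read_lock() */
    p = atomic_load(&published);
    if ( p )
        (void)*p;                           /* deliver via the channel */
    atomic_fetch_sub(&readers, 1);          /* read_unlock() */
}

static void closer(void)            /* cf. evtchn_close() */
{
    atomic_store(&published, NULL);         /* unpublish the port first */
    while ( atomic_load(&readers) )         /* rw_barrier(): drain any */
        ;                                   /* in-flight senders */
    /* Only now is it safe to tear down what 'published' referenced. */
}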




[PATCH 10/12] evtchn/fifo: use stable fields when recording "last queue" information

2020-09-28 Thread Jan Beulich
Both evtchn->priority and evtchn->notify_vcpu_id could, prior to recent
locking adjustments, change behind the back of
evtchn_fifo_set_pending(). Neither the queue's priority field nor the
vCPU's vcpu_id field has similar properties, so they seem better suited
for the purpose. In particular they reflect the respective evtchn fields'
values at the time they were used to determine queue and vCPU.

Signed-off-by: Jan Beulich 

--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -246,8 +246,8 @@ static void evtchn_fifo_set_pending(stru
 /* Moved to a different queue? */
 if ( old_q != q )
 {
-evtchn->last_vcpu_id = evtchn->notify_vcpu_id;
-evtchn->last_priority = evtchn->priority;
+evtchn->last_vcpu_id = v->vcpu_id;
+evtchn->last_priority = q->priority;
 
 spin_unlock_irqrestore(&old_q->lock, flags);
 spin_lock_irqsave(&q->lock, flags);




[PATCH 08/12] evtchn: ECS_CLOSED => ECS_FREE

2020-09-28 Thread Jan Beulich
There's no ECS_CLOSED; correct a comment naming it.

Signed-off-by: Jan Beulich 

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -673,7 +673,7 @@ int evtchn_close(struct domain *d1, int
  * We can only get here if the port was closed and re-bound after
  * unlocking d1 but before locking d2 above. We could retry but
  * it is easier to return the same error as if we had seen the
- * port in ECS_CLOSED. It must have passed through that state for
+ * port in ECS_FREE. It must have passed through that state for
  * us to end up here, so it's a valid error to return.
  */
 rc = -EINVAL;




[PATCH 09/12] evtchn: move FIFO-private struct declarations

2020-09-28 Thread Jan Beulich
There's no need to expose them.

Signed-off-by: Jan Beulich 
---
I wonder whether we shouldn't do away with event_fifo.h altogether.

--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -21,6 +21,27 @@
 
 #include 
 
+struct evtchn_fifo_queue {
+uint32_t *head; /* points into control block */
+uint32_t tail;
+uint8_t priority;
+spinlock_t lock;
+};
+
+struct evtchn_fifo_vcpu {
+struct evtchn_fifo_control_block *control_block;
+struct evtchn_fifo_queue queue[EVTCHN_FIFO_MAX_QUEUES];
+};
+
+#define EVTCHN_FIFO_EVENT_WORDS_PER_PAGE (PAGE_SIZE / sizeof(event_word_t))
+#define EVTCHN_FIFO_MAX_EVENT_ARRAY_PAGES \
+(EVTCHN_FIFO_NR_CHANNELS / EVTCHN_FIFO_EVENT_WORDS_PER_PAGE)
+
+struct evtchn_fifo_domain {
+event_word_t *event_array[EVTCHN_FIFO_MAX_EVENT_ARRAY_PAGES];
+unsigned int num_evtchns;
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
unsigned int port)
 {
--- a/xen/include/xen/event_fifo.h
+++ b/xen/include/xen/event_fifo.h
@@ -9,27 +9,6 @@
 #ifndef __XEN_EVENT_FIFO_H__
 #define __XEN_EVENT_FIFO_H__
 
-struct evtchn_fifo_queue {
-uint32_t *head; /* points into control block */
-uint32_t tail;
-uint8_t priority;
-spinlock_t lock;
-};
-
-struct evtchn_fifo_vcpu {
-struct evtchn_fifo_control_block *control_block;
-struct evtchn_fifo_queue queue[EVTCHN_FIFO_MAX_QUEUES];
-};
-
-#define EVTCHN_FIFO_EVENT_WORDS_PER_PAGE (PAGE_SIZE / sizeof(event_word_t))
-#define EVTCHN_FIFO_MAX_EVENT_ARRAY_PAGES \
-(EVTCHN_FIFO_NR_CHANNELS / EVTCHN_FIFO_EVENT_WORDS_PER_PAGE)
-
-struct evtchn_fifo_domain {
-event_word_t *event_array[EVTCHN_FIFO_MAX_EVENT_ARRAY_PAGES];
-unsigned int num_evtchns;
-};
-
 int evtchn_fifo_init_control(struct evtchn_init_control *init_control);
 int evtchn_fifo_expand_array(const struct evtchn_expand_array *expand_array);
 void evtchn_fifo_destroy(struct domain *domain);




[PATCH 07/12] evtchn: cut short evtchn_reset()'s loop in the common case

2020-09-28 Thread Jan Beulich
The general expectation is that there are only a few open ports left
when a domain asks its event channel configuration to be reset.
Similarly, on average half a bucket's worth of event channels can be
expected to be inactive. Try to avoid iterating over all channels by
utilizing usage data we're maintaining anyway.

Signed-off-by: Jan Beulich 

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -232,7 +232,11 @@ void evtchn_free(struct domain *d, struc
 evtchn_port_clear_pending(d, chn);
 
 if ( consumer_is_xen(chn) )
+{
 write_atomic(&d->xen_evtchns, d->xen_evtchns - 1);
+/* Decrement ->xen_evtchns /before/ ->active_evtchns. */
+smp_wmb();
+}
 write_atomic(&d->active_evtchns, d->active_evtchns - 1);
 
 /* Reset binding to vcpu0 when the channel is freed. */
@@ -1073,6 +1077,19 @@ int evtchn_unmask(unsigned int port)
 return 0;
 }
 
+static bool has_active_evtchns(const struct domain *d)
+{
+unsigned int xen = read_atomic(&d->xen_evtchns);
+
+/*
+ * Read ->xen_evtchns /before/ active_evtchns, to prevent
+ * evtchn_reset() exiting its loop early.
+ */
+smp_rmb();
+
+return read_atomic(&d->active_evtchns) > xen;
+}
+
 int evtchn_reset(struct domain *d, bool resuming)
 {
 unsigned int i;
@@ -1097,7 +1114,7 @@ int evtchn_reset(struct domain *d, bool
 if ( !i )
 return -EBUSY;
 
-for ( ; port_is_valid(d, i); i++ )
+for ( ; port_is_valid(d, i) && has_active_evtchns(d); i++ )
 {
 evtchn_close(d, i, 1);
 
@@ -1340,6 +1357,10 @@ int alloc_unbound_xen_event_channel(
 
 spin_unlock_irqrestore(&chn->lock, flags);
 
+/*
+ * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
+ * barrier needed due to spin-locked region just above.
+ */
 write_atomic(&ld->xen_evtchns, ld->xen_evtchns + 1);
 
  out:
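
[Editor's note: a userspace sketch -- not Xen code -- of the counter
pairing above, modelled on the allocation side: ->active_evtchns is
bumped first and ->xen_evtchns second (in Xen the preceding spin-locked
region provides that ordering; an explicit release fence stands in for
it here), while the check reads in the opposite order. A reader that
sees the new xen_evtchns therefore also sees the new active_evtchns,
so "active > xen" cannot spuriously drop to false while non-Xen
channels remain. Names and fences are illustrative analogues.]

#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint active_evtchns, xen_evtchns;

static void alloc_xen_channel(void)
{
    atomic_fetch_add_explicit(&active_evtchns, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release); /* via spinlock in Xen */
    atomic_fetch_add_explicit(&xen_evtchns, 1, memory_order_relaxed);
}

static bool has_active(void)                /* cf. has_active_evtchns() */
{
    unsigned int xen = atomic_load_explicit(&xen_evtchns,
                                            memory_order_relaxed);

    atomic_thread_fence(memory_order_acquire); /* cf. smp_rmb() */

    return atomic_load_explicit(&active_evtchns,
                                memory_order_relaxed) > xen;
}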




[PATCH 06/12] evtchn: don't bypass unlinking pIRQ when closing port

2020-09-28 Thread Jan Beulich
There's no other path causing a terminal unlink_pirq_port() to be called
(evtchn_bind_vcpu() relinks it right away) and hence _if_ pirq can
indeed be NULL when closing the port, list corruption would occur when
bypassing the unlink (unless the structure never gets linked again). As
we can't come here after evtchn_destroy() anymore, (late) domain
destruction also isn't a reason for a possible exception, and hence the
only alternative looks to be that the check was pointless in the first
place. While I haven't observed the case, from code inspection I'm far
from sure I can exclude this being possible, so it feels safer to
re-arrange the code instead.

Fixes: c24536b636f2 ("replace d->nr_pirqs sized arrays with radix tree")
Signed-off-by: Jan Beulich 

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -615,17 +615,18 @@ int evtchn_close(struct domain *d1, int
 case ECS_PIRQ: {
 struct pirq *pirq = pirq_info(d1, chn1->u.pirq.irq);
 
-if ( !pirq )
-break;
-if ( !is_hvm_domain(d1) )
-pirq_guest_unbind(d1, pirq);
-pirq->evtchn = 0;
-pirq_cleanup_check(pirq, d1);
-unlink_pirq_port(chn1, d1->vcpu[chn1->notify_vcpu_id]);
+if ( pirq )
+{
+if ( !is_hvm_domain(d1) )
+pirq_guest_unbind(d1, pirq);
+pirq->evtchn = 0;
+pirq_cleanup_check(pirq, d1);
 #ifdef CONFIG_X86
-if ( is_hvm_domain(d1) && domain_pirq_to_irq(d1, pirq->pirq) > 0 )
-unmap_domain_pirq_emuirq(d1, pirq->pirq);
+if ( is_hvm_domain(d1) && domain_pirq_to_irq(d1, pirq->pirq) > 0 )
+unmap_domain_pirq_emuirq(d1, pirq->pirq);
 #endif
+}
+unlink_pirq_port(chn1, d1->vcpu[chn1->notify_vcpu_id]);
 break;
 }
 




[PATCH 05/12] evtchn/sched: reject poll requests for unusable ports

2020-09-28 Thread Jan Beulich
Before and after XSA-342 there has been an asymmetry in how
not-really-usable ports get treated in do_poll(): ones beyond a certain boundary
(max_evtchns originally, valid_evtchns subsequently) did get refused
with -EINVAL, while lower ones were accepted despite there potentially
being no way to wake the vCPU again from its polling state. Arrange to
also honor evtchn_usable() output in the decision.

Requested-by: Andrew Cooper 
Signed-off-by: Jan Beulich 

--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1427,13 +1427,13 @@ static long do_poll(struct sched_poll *s
 if ( __copy_from_guest_offset(&port, sched_poll->ports, i, 1) )
 goto out;
 
-rc = -EINVAL;
-if ( !port_is_valid(d, port) )
-goto out;
-
-rc = 0;
-if ( evtchn_port_is_pending(d, port) )
+rc = evtchn_port_poll(d, port);
+if ( rc )
+{
+if ( rc > 0 )
+rc = 0;
 goto out;
+}
 }
 
 if ( sched_poll->nr_ports == 1 )
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -240,19 +240,6 @@ static inline bool evtchn_is_pending(con
 return evtchn_usable(evtchn) && d->evtchn_port_ops->is_pending(d, evtchn);
 }
 
-static inline bool evtchn_port_is_pending(struct domain *d, evtchn_port_t port)
-{
-struct evtchn *evtchn = evtchn_from_port(d, port);
-bool rc;
-unsigned long flags;
-
-spin_lock_irqsave(&evtchn->lock, flags);
-rc = evtchn_is_pending(d, evtchn);
-spin_unlock_irqrestore(&evtchn->lock, flags);
-
-return rc;
-}
-
 static inline bool evtchn_is_masked(const struct domain *d,
 const struct evtchn *evtchn)
 {
@@ -279,6 +266,24 @@ static inline bool evtchn_is_busy(const
d->evtchn_port_ops->is_busy(d, evtchn);
 }
 
+static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
+{
+int rc = -EINVAL;
+
+if ( port_is_valid(d, port) )
+{
+struct evtchn *evtchn = evtchn_from_port(d, port);
+unsigned long flags;
+
+spin_lock_irqsave(&evtchn->lock, flags);
+if ( evtchn_usable(evtchn) )
+rc = evtchn_is_pending(d, evtchn);
+spin_unlock_irqrestore(&evtchn->lock, flags);
+}
+
+return rc;
+}
+
 static inline int evtchn_port_set_priority(struct domain *d,
struct evtchn *evtchn,
unsigned int priority)




[PATCH 04/12] evtchn: evtchn_set_priority() needs to acquire the per-channel lock

2020-09-28 Thread Jan Beulich
evtchn_fifo_set_pending() (invoked with the per-channel lock held) has
two uses of the channel's priority field. The field gets updated by
evtchn_fifo_set_priority() with only the per-domain event_lock held,
i.e. the two reads may observe two different values. While the 2nd use
could - afaict - in principle be replaced by q->priority, I think
evtchn_set_priority() should acquire the per-channel lock in any event.

Signed-off-by: Jan Beulich 

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1132,7 +1132,9 @@ static long evtchn_set_priority(const st
 {
 struct domain *d = current->domain;
 unsigned int port = set_priority->port;
+struct evtchn *chn;
 long ret;
+unsigned long flags;
 
 spin_lock(&d->event_lock);
 
@@ -1142,8 +1144,10 @@ static long evtchn_set_priority(const st
 return -EINVAL;
 }
 
-ret = evtchn_port_set_priority(d, evtchn_from_port(d, port),
-   set_priority->priority);
+chn = evtchn_from_port(d, port);
+spin_lock_irqsave(&chn->lock, flags);
+ret = evtchn_port_set_priority(d, chn, set_priority->priority);
+spin_unlock_irqrestore(&chn->lock, flags);
 
 spin_unlock(&d->event_lock);
 




[PATCH 03/12] evtchn: don't call Xen consumer callback with per-channel lock held

2020-09-28 Thread Jan Beulich
While there don't look to be any problems with this right now, the lock
order implications from holding the lock can be very difficult to follow
(and may be easy to violate unknowingly). The present callbacks don't
(and no such callback should) have any need for the lock to be held.

Signed-off-by: Jan Beulich 

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -746,9 +746,18 @@ int evtchn_send(struct domain *ld, unsig
 rport = lchn->u.interdomain.remote_port;
 rchn  = evtchn_from_port(rd, rport);
 if ( consumer_is_xen(rchn) )
-xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
-else
-evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
+{
+/* Don't keep holding the lock for the call below. */
+xen_event_channel_notification_t fn = xen_notification_fn(rchn);
+struct vcpu *rv = rd->vcpu[rchn->notify_vcpu_id];
+
+rcu_lock_domain(rd);
+spin_unlock_irqrestore(&lchn->lock, flags);
+fn(rv, rport);
+rcu_unlock_domain(rd);
+return 0;
+}
+evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
 break;
 case ECS_IPI:
 evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
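
[Editor's note: the shape of this fix -- snapshot what's needed under
the lock, pin the target, drop the lock, only then call out -- can be
sketched in userspace as below. This is not Xen code; a plain refcount
stands in for rcu_lock_domain(), and all names are illustrative
assumptions.]

#include <pthread.h>
#include <stdatomic.h>

struct target {
    atomic_int refcnt;                /* cf. RCU domain reference */
    void (*notify)(struct target *);  /* cf. xen_notification_fn() */
};

static pthread_mutex_t chan_lock = PTHREAD_MUTEX_INITIALIZER;
static struct target *bound;          /* cf. the remote channel end */

static void send(void)
{
    struct target *t;
    void (*fn)(struct target *);

    pthread_mutex_lock(&chan_lock);
    t = bound;
    if ( !t )
    {
        pthread_mutex_unlock(&chan_lock);
        return;
    }
    fn = t->notify;                    /* snapshot under the lock */
    atomic_fetch_add(&t->refcnt, 1);   /* keep 't' alive across the call */
    pthread_mutex_unlock(&chan_lock);

    fn(t);                             /* callback runs without the lock */
    atomic_fetch_sub(&t->refcnt, 1);
}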




[PATCH 02/12] evtchn: avoid race in get_xen_consumer()

2020-09-28 Thread Jan Beulich
There's no global lock around the updating of this global piece of data.
Make use of cmpxchg() to avoid two entities racing with their updates.

Signed-off-by: Jan Beulich 
---
TBD: Initially I used cmpxchgptr() here, until realizing Arm doesn't
 have it. It's slightly more type-safe than cmpxchg() (requiring
 all arguments to actually be pointers), so I now wonder whether Arm
 should gain it (perhaps simply by moving the x86 implementation to
 xen/lib.h), or whether we should purge it from x86 as being
 pointless.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -57,7 +57,8 @@
  * with a pointer, we stash them dynamically in a small lookup array which
  * can be indexed by a small integer.
  */
-static xen_event_channel_notification_t xen_consumers[NR_XEN_CONSUMERS];
+static xen_event_channel_notification_t __read_mostly
+xen_consumers[NR_XEN_CONSUMERS];
 
 /* Default notification action: wake up from wait_on_xen_event_channel(). */
 static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
@@ -81,7 +82,7 @@ static uint8_t get_xen_consumer(xen_even
 for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
 {
 if ( xen_consumers[i] == NULL )
-xen_consumers[i] = fn;
+(void)cmpxchg(&xen_consumers[i], NULL, fn);
 if ( xen_consumers[i] == fn )
 break;
 }
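
[Editor's note: a userspace sketch -- not Xen code -- of the lock-free
slot claim above, with C11 atomic_compare_exchange_strong() in place of
cmpxchg(). If two callers race for the same empty slot, exactly one
install succeeds; the loser re-reads the slot and, when the winner
stored the same fn, still matches on the subsequent check, mirroring
the loop above. All names here are illustrative assumptions.]

#include <stdatomic.h>

typedef void (*notification_fn_t)(unsigned int port);

#define NR_CONSUMERS 8
static _Atomic(notification_fn_t) consumers[NR_CONSUMERS];

static int get_consumer(notification_fn_t fn)
{
    unsigned int i;

    for ( i = 0; i < NR_CONSUMERS; i++ )
    {
        notification_fn_t expected = NULL;

        /* Claim the slot only if still empty; a failed exchange just
         * means someone else won -- the re-read below decides. */
        atomic_compare_exchange_strong(&consumers[i], &expected, fn);
        if ( atomic_load(&consumers[i]) == fn )
            return i;
    }

    return -1;                         /* table full */
}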




[PATCH 01/12] evtchn: refuse EVTCHNOP_status for Xen-bound event channels

2020-09-28 Thread Jan Beulich
Callers have no business knowing the state of the Xen end of an event
channel.

Signed-off-by: Jan Beulich 

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -933,6 +933,11 @@ int evtchn_status(evtchn_status_t *statu
 }
 
 chn = evtchn_from_port(d, port);
+if ( consumer_is_xen(chn) )
+{
+rc = -EACCES;
+goto out;
+}
 
 rc = xsm_evtchn_status(XSM_TARGET, d, chn);
 if ( rc )




[PATCH 00/12] evtchn: recent XSAs follow-on

2020-09-28 Thread Jan Beulich
These are grouped into a series largely because of their origin,
not so much because there are heavy dependencies among them.

01: refuse EVTCHNOP_status for Xen-bound event channels
02: avoid race in get_xen_consumer()
03: don't call Xen consumer callback with per-channel lock held
04: evtchn_set_priority() needs to acquire the per-channel lock
05: sched: reject poll requests for unusable ports
06: don't bypass unlinking pIRQ when closing port
07: cut short evtchn_reset()'s loop in the common case
08: ECS_CLOSED => ECS_FREE
09: move FIFO-private struct declarations
10: fifo: use stable fields when recording "last queue" information
11: convert vIRQ lock to an r/w one
12: convert domain event lock to an r/w one

Jan


