[Xen-devel] [linux-linus test] 110016: regressions - FAIL

2017-06-05 Thread osstest service owner
flight 110016 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110016/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 109994

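The verdicts in these reports encode a comparison against a baseline flight (here 109994): a step that fails now but passed in the baseline is flagged "REGR. vs. NNN"; a failure that matches the baseline is reported as "like NNN"; and steps that have never succeeded are marked "never pass". A minimal sketch of that classification logic (a hypothetical helper for illustration, not osstest's actual code):

```python
# Hedged sketch of the verdict logic implied by these reports: compare one
# test step's outcome with the same step in a baseline flight.
def classify(step_status, baseline_status, ever_passed, baseline_flight):
    """Return a report verdict string for one test step."""
    if step_status == "pass":
        return "pass"
    if not ever_passed:
        return "never pass"                    # the step has never succeeded
    if baseline_status == "pass":
        return f"REGR. vs. {baseline_flight}"  # worked in baseline, broken now
    return f"like {baseline_flight}"           # failed the same way before

print(classify("fail", "pass", True, 109994))  # a true (blocking) regression
print(classify("fail", "fail", True, 109994))  # same failure as baseline
```

Only the "REGR." case blocks a push; "like" and "never pass" failures are tolerated.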
Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-xsm  9 debian-install   fail in 110006 pass in 110016
 test-amd64-i386-libvirt-pair 12 host-ping-check-xen/dst_host fail pass in 110006
 test-armhf-armhf-xl-credit2  16 guest-start.2 fail pass in 110006
 test-armhf-armhf-xl-arndale   5 xen-install fail pass in 110006

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop  fail blocked in 109994
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 host-ping-check-xen fail in 110006 like 109963
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop   fail in 110006 like 109994
 test-armhf-armhf-xl-rtds 11 guest-start fail in 110006 like 109994
 test-armhf-armhf-xl-arndale 12 migrate-support-check fail in 110006 never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail in 110006 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 109963
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 109963
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail like 109963
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 109994
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 109994
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 109994
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 109994
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 109994
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-arm64-arm64-xl  12 migrate-support-check fail never pass
 test-arm64-arm64-xl  13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail

[Xen-devel] [linux-next test] 110012: regressions - FAIL

2017-06-05 Thread osstest service owner
flight 110012 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110012/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win10-i386  6 xen-boot  fail REGR. vs. 109994
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-boot  fail REGR. vs. 109994
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-libvirt-pair  9 xen-boot/src_host fail REGR. vs. 109994
 test-amd64-i386-libvirt-pair 10 xen-boot/dst_host fail REGR. vs. 109994
 test-amd64-i386-xl-xsm 6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl 6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-freebsd10-i386  6 xen-boot   fail REGR. vs. 109994
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-boot   fail REGR. vs. 109994
 test-amd64-i386-freebsd10-amd64  6 xen-boot  fail REGR. vs. 109994
 test-amd64-i386-libvirt   6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-raw 6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 109994
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-boot   fail REGR. vs. 109994
 test-amd64-i386-examine   6 reboot   fail REGR. vs. 109994
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-boot  fail REGR. vs. 109994
 test-amd64-i386-pair  9 xen-boot/src_host fail REGR. vs. 109994
 test-amd64-i386-pair 10 xen-boot/dst_host fail REGR. vs. 109994
 test-amd64-i386-libvirt-xsm   6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-boot  fail REGR. vs. 109994
 test-arm64-arm64-xl   6 xen-boot fail REGR. vs. 109994
 test-arm64-arm64-xl-credit2   6 xen-boot fail REGR. vs. 109994
 test-arm64-arm64-libvirt-xsm  6 xen-boot fail REGR. vs. 109994
 test-arm64-arm64-xl-xsm   6 xen-boot fail REGR. vs. 109994
 test-arm64-arm64-examine  6 reboot   fail REGR. vs. 109994
 test-amd64-amd64-i386-pvgrub 21 leak-check/check fail REGR. vs. 109994
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-boot  fail REGR. vs. 109994
 test-amd64-i386-rumprun-i386  6 xen-boot fail REGR. vs. 109994
 test-amd64-i386-xl-qemut-win10-i386  6 xen-boot  fail REGR. vs. 109994
 build-armhf-pvops 5 kernel-build fail REGR. vs. 109994

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-examine  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop  fail blocked in 109994
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 109963
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-install  

[Xen-devel] [qemu-mainline test] 110013: regressions - trouble: broken/fail/pass

2017-06-05 Thread osstest service owner
flight 110013 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110013/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm   3 host-install(3) broken REGR. vs. 109975
 test-amd64-amd64-xl-qcow2 10 guest-start fail REGR. vs. 109975
 test-amd64-amd64-libvirt-vhd 10 guest-start  fail REGR. vs. 109975
 test-armhf-armhf-xl-credit2 15 guest-start/debian.repeat fail REGR. vs. 109975
 test-amd64-i386-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 109975
 test-armhf-armhf-xl-vhd  10 guest-start  fail REGR. vs. 109975

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop   fail blocked in 109975
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 109975
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail like 109975
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 109975
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 109975
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 109975
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail never pass
 test-arm64-arm64-xl  12 migrate-support-check fail never pass
 test-arm64-arm64-xl  13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass

version targeted for testing:
 qemuu cb8b8ef4578dc17c350fd4b27700a9f178e2dad0
baseline version:
 qemuu c6e84fbd447a51e1161d74d71566a5f67b47eac5

Last test of basis   109975  2017-06-04 00:16:43 Z  1 days
Testing same since   110013  2017-06-05 10:45:10 Z  0 days  1 attempts


People who touched revisions under test:
  Marc-André Lureau 
  Peter Maydell 
  Philippe Mathieu-Daudé 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf 

[Xen-devel] [PATCH 0/1] xl.cfg man page cleanup and fixes

2017-06-05 Thread Armando Vega
Hey everyone,

so I've made a new round of cleaning and fixing. There was quite a bit of work
to be done on this one. A few issues still remain, but at least none concerning
the general correctness and style of the manual. More info below.

I've had to rework the NUMA node examples, as they had what I would call a
counting error and ended up presenting incorrect information. It would be
great if someone could double-check that once more. Also, there is no clear
explanation of whether ^nodes:1 and nodes:^1 can be used interchangeably, and
to be honest I wasn't sure myself. I haven't had the time to test this
properly.
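For context, the vcpu-placement syntax in question looks like this in a config file. The fragment below shows only the negated-node form that I believe the man page documents; whether nodes:^1 is an accepted equivalent is exactly the open question above (all values are illustrative):

```
# Illustrative xl.cfg fragment: pin the guest's vcpus to all pcpus
# except those on NUMA node 1 (documented "^nodes:" form; the
# equivalence of "nodes:^1" is the open question raised above).
vcpus = 4
cpus = "all,^nodes:1"
```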

Also, there is an issue with at least one of the HVM-only options that can
actually be used with PV guests as well. I know because we've been using
CPU masking / feature leveling for our PV guests on Xen 4.6, and I couldn't
say when it went from being HVM-only to also working on PV. It is quite
possible that more such options are no longer exclusive to one guest type.

Anyway, that is something to be discussed and fixed in another iteration.

kind regards,
Armando Vega

Armando Vega (1):
  xl.cfg man page cleanup and fixes

 docs/man/xl.cfg.pod.5.in | 1103 --
 1 file changed, 586 insertions(+), 517 deletions(-)

-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 1/1] xl.cfg man page cleanup and fixes

2017-06-05 Thread Armando Vega
From: Armando Vega 

Signed-off-by: Armando Vega 
---
 docs/man/xl.cfg.pod.5.in | 1103 --
 1 file changed, 586 insertions(+), 517 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 13167ff2b6..dae23d8c10 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -1,6 +1,6 @@
 =head1 NAME
 
-xl.cfg - XL Domain Configuration File Syntax
+xl.cfg - xl domain configuration file syntax
 
 =head1 SYNOPSIS
 
@@ -8,20 +8,21 @@ xl.cfg - XL Domain Configuration File Syntax
 
 =head1 DESCRIPTION
 
-To create a VM (a domain in Xen terminology, sometimes called a guest)
-with xl requires the provision of a domain config file.  Typically
-these live in `/etc/xen/DOMAIN.cfg` where DOMAIN is the name of the
+Creating a VM (a domain in Xen terminology, sometimes called a guest)
+with xl requires the provision of a domain configuration file.  Typically,
+these live in F, where DOMAIN is the name of the
 domain.
 
 =head1 SYNTAX
 
-A domain config file consists of a series of 

[Xen-devel] [ovmf baseline-only test] 71513: tolerable FAIL

2017-06-05 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71513 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71513/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 build-amd64-libvirt   5 libvirt-build fail like 71511
 build-i386-libvirt    5 libvirt-build fail like 71511

version targeted for testing:
 ovmf 5225084439bd47f2cdd210a98d6a445a2eccc9e2
baseline version:
 ovmf 7ec69844b8f1d348c0699cc88c728acb13ad

Last test of basis    71511  2017-06-05 08:47:43 Z  0 days
Testing same since    71513  2017-06-05 17:50:06 Z  0 days  1 attempts


People who touched revisions under test:
  Jiaxin Wu 
  Wu Jiaxin 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 5225084439bd47f2cdd210a98d6a445a2eccc9e2
Author: Jiaxin Wu 
Date:   Mon May 22 09:25:57 2017 +0800

MdeModulePkg/UefiPxeBcDxe: Refine the PXE boot displayed information

This patch refines the PXE boot displayed information so as to be
in line with the NetworkPkg/UefiPxeBcDxe driver.

Cc: Ye Ting 
Cc: Fu Siyuan 
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Wu Jiaxin 
Reviewed-by: Ye Ting 
Reviewed-by: Fu Siyuan 

commit ef931b311fd772c8ab9f453cb0f9d0cd0b1deacf
Author: Jiaxin Wu 
Date:   Mon May 22 09:13:18 2017 +0800

MdeModulePkg/UefiPxeBcDxe: Fix the PXE BootMenu selection issue

The current implementation doesn't accept input while the user is
trying to select the PXE BootMenu from option 43. This patch fixes
that problem.

Cc: Ye Ting 
Cc: Fu Siyuan 
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Wu Jiaxin 
Reviewed-by: Ye Ting 
Reviewed-by: Fu Siyuan 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xtf test] 110014: all pass - PUSHED

2017-06-05 Thread osstest service owner
flight 110014 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110014/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf  2bcda1aa60cd0032ea7371037c645b3d87104e21
baseline version:
 xtf  8ebc31bc85546a265aa0dbd26fda88b9b195fa2e

Last test of basis   109906  2017-05-31 16:17:25 Z  5 days
Testing same since   110014  2017-06-05 11:16:20 Z  0 days  1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-amd64-xtf  pass
 build-amd64  pass
 build-amd64-pvopspass
 test-xtf-amd64-amd64-1   pass
 test-xtf-amd64-amd64-2   pass
 test-xtf-amd64-amd64-3   pass
 test-xtf-amd64-amd64-4   pass
 test-xtf-amd64-amd64-5   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xtf
+ revision=2bcda1aa60cd0032ea7371037c645b3d87104e21
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xtf 2bcda1aa60cd0032ea7371037c645b3d87104e21
+ branch=xtf
+ revision=2bcda1aa60cd0032ea7371037c645b3d87104e21
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xtf
+ xenbranch=xen-unstable
+ '[' xxtf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.9-testing
+ '[' x2bcda1aa60cd0032ea7371037c645b3d87104e21 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.14
++ : tested/linux-arm-xen
++ '[' 

[Xen-devel] [distros-debian-sid test] 71510: tolerable trouble: blocked/broken/fail/pass

2017-06-05 Thread Platform Team regression test user
flight 71510 distros-debian-sid real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71510/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-amd64-sid-netboot-pvgrub 10 guest-start   fail like 71454
 test-amd64-i386-i386-sid-netboot-pvgrub 10 guest-start fail like 71454
 test-armhf-armhf-armhf-sid-netboot-pygrub  9 debian-di-install fail like 71454

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-armhf-sid-netboot-pygrub  1 build-check(1)blocked n/a
 build-arm64-pvops 2 hosts-allocate   broken never pass
 build-arm64   2 hosts-allocate   broken never pass
 build-arm64-pvops 3 capture-logs broken never pass
 build-arm64   3 capture-logs broken never pass

baseline version:
 flight   71454

jobs:
 build-amd64  pass
 build-arm64  broken  
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-arm64-pvopsbroken  
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-sid-netboot-pvgrubfail
 test-amd64-i386-i386-sid-netboot-pvgrub  fail
 test-amd64-i386-amd64-sid-netboot-pygrub pass
 test-arm64-arm64-armhf-sid-netboot-pygrubblocked 
 test-armhf-armhf-armhf-sid-netboot-pygrubfail
 test-amd64-amd64-i386-sid-netboot-pygrub pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-4.9 test] 110010: regressions - FAIL

2017-06-05 Thread osstest service owner
flight 110010 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110010/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   6 xen-boot fail REGR. vs. 107358
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-start/win.repeat fail in 109749 REGR. vs. 107358

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-multivcpu 15 guest-start/debian.repeat fail in 109749 pass in 109913
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail in 109749 pass in 110010
 test-amd64-i386-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail in 109749 pass in 110010
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail in 109749 pass in 110010
 test-amd64-amd64-rumprun-amd64 16 rumprun-demo-xenstorels/xenstorels.repeat fail in 109749 pass in 110010
 test-armhf-armhf-libvirt-xsm  5 xen-install fail in 109878 pass in 110010
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail in 109961 pass in 109878
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail in 109961 pass in 110010
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail in 109961 pass in 110010
 test-amd64-i386-xl-raw   9 debian-di-install fail in 109961 pass in 110010
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail pass in 109749
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 109961

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  9 debian-install   fail REGR. vs. 107358

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-multivcpu 12 migrate-support-check fail in 109749 never pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-check fail in 109749 never pass
 test-arm64-arm64-libvirt 12 migrate-support-check fail in 109749 never pass
 test-arm64-arm64-libvirt 13 saverestore-support-check fail in 109749 never pass
 test-arm64-arm64-xl-rtds 12 migrate-support-check fail in 109749 never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-check fail in 109749 never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-check fail in 109749 never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-check fail in 109749 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-start/win.repeat fail in 109878 like 107358
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-start/win.repeat fail in 109961 blocked in 107358
 test-armhf-armhf-xl-vhd   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-xsm   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-rtds  6 xen-boot fail  like 107358
 test-armhf-armhf-xl   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail like 107358
 test-armhf-armhf-libvirt-raw  6 xen-boot fail  like 107358
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 107358
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt  6 xen-boot fail  like 107358
 test-amd64-i386-libvirt  12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale   6 xen-boot fail never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-armhf-armhf-examine  6 reboot fail never pass
 test-arm64-arm64-xl  12 migrate-support-check fail never pass
 

[Xen-devel] [linux-4.1 baseline-only test] 71509: regressions - trouble: blocked/broken/fail/pass

2017-06-05 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71509 linux-4.1 real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71509/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 71024
 build-i386-libvirt5 libvirt-build fail REGR. vs. 71024

Regressions which are regarded as allowable (not blocking):
 build-armhf-libvirt   5 libvirt-build fail blocked in 71024
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-start/win.repeat fail blocked in 71024
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail blocked in 71024
 test-amd64-amd64-xl-qemut-winxpsp3 17 guest-start/win.repeat fail blocked in 71024
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail blocked in 71024
 test-armhf-armhf-xl-vhd   9 debian-di-install fail blocked in 71024
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail like 71024

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64   2 hosts-allocate   broken never pass
 build-arm64-pvops 2 hosts-allocate   broken never pass
 build-arm64-xsm   2 hosts-allocate   broken never pass
 build-arm64-xsm   3 capture-logs broken never pass
 build-arm64   3 capture-logs broken never pass
 build-arm64-pvops 3 capture-logs broken never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check fail never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-start/win.repeat fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-start/win.repeat  fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                56d847e3ef9433d7ac92376e4ba49d3cf3cb70d2
baseline version:
 linux                d9e0350d2575a20ee7783427da9bd6b6107eb983

Last test of basis    71024  2017-03-20 11:22:56 Z   77 days
Testing same since    71509  2017-06-05 06:52:37 Z    0 days    1 attempts


People who touched revisions under test:
  "Eric W. Biederman" 
  Adrian Hunter 
  Adrian Salido 
  Ajay Kaher 
  Al Viro 
  Alan Stern 

Re: [Xen-devel] (pv)?grub and PVHv2

2017-06-05 Thread Marek Marczykowski-Górecki
On Mon, Jun 05, 2017 at 11:55:24AM +0100, George Dunlap wrote:
> On Fri, Jun 2, 2017 at 10:58 AM, Roger Pau Monné  wrote:
> > On Fri, Jun 02, 2017 at 11:33:50AM +0200, Marek Marczykowski-Górecki wrote:
> >> Hi,
> >>
> >> Is there any method to boot PVHv2 domain using a kernel fetched from
> >> that domain's disk image, _without_ mounting it in dom0? Something like
> >> pvgrub was for PV.
> >
> > Hello,
> >
> > Anthony (Cced) is working on an OVMF port, so it can be used as
> > firmware for PVHv2 guests.
> 
> I think in theory it shouldn't be too hard to port the pvgrub2 code to
> boot into PVH, since it already boots in PV, right?
> 
> Is this something we should try to encourage, or do you think it would
> be better to route everyone through EFI?

For Qubes OS I think EFI is good enough here. Any system supporting
PVHv2 also supports EFI (right?), so it shouldn't limit anything.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


signature.asc
Description: PGP signature
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 0/3] ARM-XEN: Adjustments for __set_phys_to_machine_multi()

2017-06-05 Thread Stefano Stabellini
On Sun, 4 Jun 2017, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Sun, 4 Jun 2017 22:45:54 +0200
> 
> Three update suggestions were taken into account
> from static source code analysis.
> 
> Markus Elfring (3):
>   Improve a size determination
>   Delete an error message for a failed memory allocation
>   Adjust one function call together with a variable assignment
> 
>  arch/arm/xen/p2m.c | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)

Thanks Markus, I queued them up for 4.13.



[Xen-devel] [ovmf test] 110011: all pass - PUSHED

2017-06-05 Thread osstest service owner
flight 110011 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110011/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 5225084439bd47f2cdd210a98d6a445a2eccc9e2
baseline version:
 ovmf 7ec69844b8f1d348c0699cc88c728acb13ad

Last test of basis   110007  2017-06-05 03:06:01 Z    0 days
Testing same since   110011  2017-06-05 09:20:57 Z    0 days    1 attempts


People who touched revisions under test:
  Jiaxin Wu 
  Wu Jiaxin 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=5225084439bd47f2cdd210a98d6a445a2eccc9e2
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 5225084439bd47f2cdd210a98d6a445a2eccc9e2
+ branch=ovmf
+ revision=5225084439bd47f2cdd210a98d6a445a2eccc9e2
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.9-testing
+ '[' x5225084439bd47f2cdd210a98d6a445a2eccc9e2 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git

Re: [Xen-devel] [PATCH 3/3] arm/xen: Adjust one function call together with a variable assignment

2017-06-05 Thread Stefano Stabellini
On Sun, 4 Jun 2017, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Sun, 4 Jun 2017 21:21:20 +0200
> 
> The script "checkpatch.pl" pointed information out like the following.
> 
> ERROR: do not use assignment in if condition
> 
> Thus fix the affected source code place.
> 
> Signed-off-by: Markus Elfring 

Reviewed-by: Stefano Stabellini 

> ---
>  arch/arm/xen/p2m.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
> index f5f74ac637b9..e71eefa2e427 100644
> --- a/arch/arm/xen/p2m.c
> +++ b/arch/arm/xen/p2m.c
> @@ -153,7 +153,8 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
>   p2m_entry->mfn = mfn;
>  
>   write_lock_irqsave(&p2m_lock, irqflags);
> - if ((rc = xen_add_phys_to_mach_entry(p2m_entry)) < 0) {
> + rc = xen_add_phys_to_mach_entry(p2m_entry);
> + if (rc < 0) {
>   write_unlock_irqrestore(&p2m_lock, irqflags);
>   return false;
>   }
> -- 
> 2.13.0
> 



Re: [Xen-devel] [PATCH 1/3] arm/xen: Improve a size determination in __set_phys_to_machine_multi()

2017-06-05 Thread Stefano Stabellini
On Sun, 4 Jun 2017, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Sun, 4 Jun 2017 20:50:55 +0200
> 
> Replace the specification of a data structure by a pointer dereference
> as the parameter for the operator "sizeof" to make the corresponding size
> determination a bit safer according to the Linux coding style convention.
> 
> Signed-off-by: Markus Elfring 

Reviewed-by: Stefano Stabellini 


> ---
>  arch/arm/xen/p2m.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
> index 0ed01f2d5ee4..11e78432b663 100644
> --- a/arch/arm/xen/p2m.c
> +++ b/arch/arm/xen/p2m.c
> @@ -144,5 +144,5 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
>   return true;
>   }
>  
> - p2m_entry = kzalloc(sizeof(struct xen_p2m_entry), GFP_NOWAIT);
> + p2m_entry = kzalloc(sizeof(*p2m_entry), GFP_NOWAIT);
>   if (!p2m_entry) {
> -- 
> 2.13.0
> 



Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Anoob Soman

On 05/06/17 17:46, Boris Ostrovsky wrote:


+static void evtchn_bind_interdom_next_vcpu(int evtchn)
+{
+   unsigned int selected_cpu, irq;
+   struct irq_desc *desc = NULL;  <



Oh, thanks. I will send out a V2, with the modifications.

-Anoob.



Re: [Xen-devel] [PATCH for-next v3 12/22] x86/traps: move send_guest_trap to pv/traps.c

2017-06-05 Thread Wei Liu
On Mon, May 29, 2017 at 09:55:14AM -0600, Jan Beulich wrote:
> >>> On 18.05.17 at 19:09,  wrote:
> 
> As said on patch 10(?), this shouldn't be moved alone. And whether
> we want to move it in the first place depends on what the PVH
> plans here are.
> 

What do you want me to do with this patch? I'm inclined to just move it
because it is only used by PV at the moment.



Re: [Xen-devel] (pv)?grub and PVHv2

2017-06-05 Thread George Dunlap
On Mon, Jun 5, 2017 at 1:08 PM, Andrew Cooper  wrote:
> On 05/06/17 11:55, George Dunlap wrote:
>> On Fri, Jun 2, 2017 at 10:58 AM, Roger Pau Monné  
>> wrote:
>>> On Fri, Jun 02, 2017 at 11:33:50AM +0200, Marek Marczykowski-Górecki wrote:
 Hi,

 Is there any method to boot PVHv2 domain using a kernel fetched from
 that domain's disk image, _without_ mounting it in dom0? Something like
 pvgrub was for PV.
>>> Hello,
>>>
>>> Anthony (Cced) is working on an OVMF port, so it can be used as
>>> firmware for PVHv2 guests.
>> I think in theory it shouldn't be too hard to port the pvgrub2 code to
>> boot into PVH, since it already boots in PV, right?
>>
>> Is this something we should try to encourage, or do you think it would
>> be better to route everyone through EFI?
>
> Even a PVH pvgrub still suffers from the a priori problem which makes booting
> PV guests extremely difficult.  You don't know ahead-of-time which
> bootloader the guest is using without peering at its disks, which opens
> a massive attack surface in dom0.
>
> Using things like EFI allows any compatible OS to function, not just
> ones which use grub.

I wasn't suggesting loading the grub bootloader off the disk image; I
was suggesting using a fixed pvgrub supplied by the host.  That's what
happens for PV guests using pvgrub at the moment.

Using pvgrub allows any grub-compatible OS to function; using EFI
allows any EFI-compatible OS to function.  There are many which would
be one but not the other.  (But I suppose, there would not be many
that were both PVH compatible and not EFI compatible.)

 -George



[Xen-devel] [xen-unstable test] 110009: regressions - FAIL

2017-06-05 Thread osstest service owner
flight 110009 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110009/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail REGR. vs. 109841

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail like 109803
 test-armhf-armhf-libvirt 13 saverestore-support-check    fail  like 109828
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop    fail like 109841
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check    fail  like 109841
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat    fail  like 109841
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 109841
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check    fail  like 109841
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 109841
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-install    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install    fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-xl  12 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-install    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-install    fail never pass

version targeted for testing:
 xen  d8eed4021d50eb48ca75c8559aed95a2ad74afaa
baseline version:
 xen  876800d5f9de8b15355172794cb82f505dd26e18

Last test of basis   109841  2017-05-30 02:02:16 Z    6 days
Failing since        109866  2017-05-30 19:48:42 Z    5 days    7 attempts
Testing same since   109957  2017-06-03 10:00:05 Z    2 days    4 attempts


People who touched revisions under test:
  Andrew Cooper 
  Armando Vega 

[Xen-devel] [ovmf baseline-only test] 71511: tolerable FAIL

2017-06-05 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71511 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71511/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 build-amd64-libvirt   5 libvirt-build    fail   like 71506
 build-i386-libvirt    5 libvirt-build    fail   like 71506

version targeted for testing:
 ovmf 7ec69844b8f1d348c0699cc88c728acb13ad
baseline version:
 ovmf a04ec6d9f70f7eedf5ab49b098970245270fa594

Last test of basis    71506  2017-06-03 05:47:38 Z    2 days
Testing same since    71511  2017-06-05 08:47:43 Z    0 days    1 attempts


People who touched revisions under test:
  Ruiyu Ni 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 7ec69844b8f1d348c0699cc88c728acb13ad
Author: Ruiyu Ni 
Date:   Thu Jun 1 22:09:14 2017 +0800

ShellPkg/alias: Fix bug to support upper-case alias

alias in UEFI Shell is case insensitive.
Old code saves the alias to variable storage without
converting the alias to lower case, which results in
upper-case alias settings not working.
The patch converts the alias to lower case before saving
to variable storage.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni 
Reviewed-by: Jaben Carsey 
Cc: Michael D Kinney 
Cc: Tapan Shah 



Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Boris Ostrovsky
 
>> (BTW, I just noticed --- you don't need to initialize desc)
>
> Sorry, I didn't get it. Which desc doesn't need init ?

+static void evtchn_bind_interdom_next_vcpu(int evtchn)
+{
+   unsigned int selected_cpu, irq;
+   struct irq_desc *desc = NULL;  <
+   unsigned long flags;
+
+   irq = irq_from_evtchn(evtchn);
+   desc = irq_to_desc(irq);



-boris




Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Anoob Soman

On 05/06/17 16:32, Boris Ostrovsky wrote:

I believe we do need to take affinity into consideration even if the
chance that it is non-default is small.


Agreed.


I am not opposed to having bind_last_selected_cpu percpu, I just wanted
to understand the reason better. Additional locking would be a downside
with a global so if you feel that percpu is worth it then I won't object.


If affinity == cpu_online_mask, then percpu will give a better spread. 
atomic set/get can be used, if we want to use a global variable, but I 
think it will be more random than percpu.





Yes, you are correct. .irq_set_affinity pretty much does the same thing.

The code will now look like this.
raw_spin_lock_irqsave(lock, flags);
percpu read
select_cpu
percpu write
xen_rebind_evtchn_to_cpu(evtchn, selected_cpu)
raw_spin_unlock_irqsave(lock, flags);

(BTW, I just noticed --- you don't need to initialize desc)


Sorry, I didn't get it. Which desc doesn't need init ?

-Anoob



[Xen-devel] VMWare and Linux Foundation.

2017-06-05 Thread Jason Long
Hello.
VMware has become a gold member of the LF, and I want to know whether this could be a 
danger for the Xen Project and Citrix XenServer. Xen Project and Citrix XenServer are 
competitors of VMware; could its becoming a gold member of the LF cause any problems? 


Thank you.



Re: [Xen-devel] [PATCH for 4.9] vif-common.sh: Have iptables wait for the xtables lock

2017-06-05 Thread Ian Jackson
George Dunlap writes ("[PATCH for 4.9] vif-common.sh: Have iptables wait for 
the xtables lock"):
> iptables has a system-wide lock on the xtables.  Strangely though, in
> the case of two concurrent invocations, the default is for the
> instance not grabbing the lock to exit out rather than waiting for it.
> This means that when starting a large number of guests in parallel,
> many will fail out with messages like this:

What a mess, eh ?

Acked-by: Ian Jackson 
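The acked fix makes the hotplug script pass iptables' wait flag. A minimal sketch of the idea follows; it is not the actual vif-common.sh change, and the wrapper name and dry-run hook are illustrative:

```shell
#!/bin/sh
# Sketch only: wrap every iptables invocation so it passes -w and blocks on
# the system-wide xtables lock instead of exiting when another invocation
# holds it. IPTABLES defaults to the real binary; override with IPTABLES=echo
# to dry-run the sketch without touching the firewall.
: "${IPTABLES:=iptables}"

do_iptables() {
    # -w: wait for the xtables lock (supported by iptables >= 1.4.20)
    $IPTABLES -w "$@"
}

# Dry-run example (prints the arguments instead of applying the rule):
IPTABLES=echo do_iptables -A FORWARD -m physdev --physdev-in vif1.0 -j ACCEPT
```

With `-w`, concurrent guest starts serialize on the lock rather than failing out, which is exactly the symptom described above.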



Re: [Xen-devel] Xen 4.9: Release date

2017-06-05 Thread Lars Kurth
Hi all, 

removed xen-announce

I created the following docs

https://wiki.xenproject.org/wiki/Category:Xen_4.9

If anyone created any 4.9 specific docs, feel free to add to the page or
let me know: I added links to generated 9pfs and pvcalls docs

https://wiki.xenproject.org/wiki/Xen_Project_4.9_Release_Notes

@Julien & everyone else: any restrictions, known issues, ... should go
here!

https://wiki.xenproject.org/wiki/Xen_Project_4.9_Feature_List

The only thing missing is the change-list: will add this *after* the last
RC was cut
Edits/additions by people who added features are welcome

https://wiki.xenproject.org/wiki/Xen_Project_4.9_Man_Pages
I added new pages (ran a diff) as there were lots of refactoring changes
Ran link checker: ok

https://wiki.xenproject.org/wiki/Xen_Project_4.9_Acknowledgements

Provisional with data to be updated on final RC (have a simple spreadsheet
which calculates these)
Is missing the individual acknowledgements, which I will do after the
final RC

The only thing which won't change is
https://wiki.xenproject.org/wiki/Xen_Project_4.9_Acknowledgements#4.9_Hypervisor_Reviewers_.5B_5_.5D
For reviews, I can't map these onto a specific branch, so counted review
comments by people other than proposer in the time from "git-merge-base
staging-4.8 staging-4.9" (did git-merge-base staging-4.7 staging-4.8 for
the previous release).

https://wiki.xenproject.org/wiki/Xen_Project_Release_Features

Have not touched this yet

https://xenproject.org/downloads/xen-archives/xen-project-49-series.html &
other artifacts

Will create, when we cut the tarballs

Regards
Lars

On 02/06/2017 17:15, "Julien Grall"  wrote:

>Hi all,
>
>There are some pending security issues that have been found during the
>hardening period, which haven't been pre-disclosed yet.
>
>I am going to delay the release until one week after the embargo has
>lifted. I will give an exact time frame when they have been pre-disclosed.
>
>Cheers,
>
>-- 
>Julien Grall



Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Boris Ostrovsky
On 06/05/2017 10:49 AM, Anoob Soman wrote:
> On 05/06/17 15:10, Boris Ostrovsky wrote:
>>> The reason for percpu instead of global, was to avoid locking. We can
>>> have a global variable (last_cpu) without locking, but value of
> >>> last_cpu won't be consistent, without locks. Moreover, since
>>> irq_affinity is also used in the calculation of cpu to bind, having a
>>> percpu or global wouldn't really matter, as the result (selected_cpu)
>>> is more likely to be random (because different irqs can have different
>>> affinity). What do you guys suggest.
>> Doesn't initial affinity (which is what we expect here since irqbalance
>> has not run yet) typically cover all guest VCPUs?
>
> Yes, initial affinity covers all online VCPUs. But there is a small
> chance that initial affinity might change, before
> evtch_bind_interdom_next_vcpu is called. For example, I could run a
> script to change irq affinity, just when irq sysfs entry appears. This
> is the reason that I thought it would be sensible (based on your
> suggestion) to include irq_affinity to calculate the next VCPU. If you
> think, changing irq_affinity between request_irq() and
> evtch_bind_interdom_next_vcpu is virtually impossible, then we can
> drop affinity and just use cpu_online_mask.

I believe we do need to take affinity into consideration even if the
chance that it is non-default is small.

I am not opposed to having bind_last_selected_cpu percpu, I just wanted
to understand the reason better. Additional locking would be a downside
with a global so if you feel that percpu is worth it then I won't object.

>
>>>
>>> I think we would still require spin_lock(). spin_lock is for irq_desc.
>> If you are trying to protect affinity then it may well change after you
>> drop the lock.
>>
>> In fact, don't you have a race here? If we offline a VCPU we will (by
>> way of cpu_disable_common()->fixup_irqs()) update affinity to reflect
>> that a CPU is gone and there is a chance that xen_rebind_evtchn_to_cpu()
>> will happen after that.
>>
>> So, contrary to what I said earlier ;-) not only do you need the lock,
>> but you should hold it across xen_rebind_evtchn_to_cpu() call. Does this
>> make sense?
>
> Yes, you are correct. .irq_set_affinity pretty much does the same thing.
>
> The code will now look like this.
> raw_spin_lock_irqsave(lock, flags);
> percpu read
> select_cpu
> percpu write
> xen_rebind_evtchn_to_cpu(evtchn, selected_cpu)
> raw_spin_unlock_irqsave(lock, flags);

(BTW, I just noticed --- you don't need to initialize desc)

-boris



Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Anoob Soman

On 05/06/17 15:10, Boris Ostrovsky wrote:

The reason for percpu instead of global, was to avoid locking. We can
have a global variable (last_cpu) without locking, but value of
last_cpu won't be consistent, without locks. Moreover, since
irq_affinity is also used in the calculation of cpu to bind, having a
percpu or global wouldn't really matter, as the result (selected_cpu)
is more likely to be random (because different irqs can have different
affinity). What do you guys suggest.

Doesn't initial affinity (which is what we expect here since irqbalance
has not run yet) typically cover all guest VCPUs?


Yes, initial affinity covers all online VCPUs. But there is a small 
chance that initial affinity might change, before 
evtch_bind_interdom_next_vcpu is called. For example, I could run a 
script to change irq affinity, just when irq sysfs entry appears. This 
is the reason that I thought it would be sensible (based on your 
suggestion) to include irq_affinity to calculate the next VCPU. If you 
think, changing irq_affinity between request_irq() and 
evtch_bind_interdom_next_vcpu is virtually impossible, then we can drop 
affinity and just use cpu_online_mask.




I think we would still require spin_lock(). spin_lock is for irq_desc.

If you are trying to protect affinity then it may well change after you
drop the lock.

In fact, don't you have a race here? If we offline a VCPU we will (by
way of cpu_disable_common()->fixup_irqs()) update affinity to reflect
that a CPU is gone and there is a chance that xen_rebind_evtchn_to_cpu()
will happen after that.

So, contrary to what I said earlier ;-) not only do you need the lock,
but you should hold it across xen_rebind_evtchn_to_cpu() call. Does this
make sense?


Yes, you are correct. .irq_set_affinity pretty much does the same thing.

The code will now look like this.
raw_spin_lock_irqsave(lock, flags);
percpu read
select_cpu
percpu write
xen_rebind_evtchn_to_cpu(evtchn, selected_cpu)
raw_spin_unlock_irqsave(lock, flags);



Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Boris Ostrovsky
On 06/05/2017 06:14 AM, Anoob Soman wrote:
> On 02/06/17 17:24, Boris Ostrovsky wrote:
>>> static int set_affinity_irq(struct irq_data *data, const struct
>>> cpumask *dest,
>>>   bool force)
>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>> index 10f1ef5..1192f24 100644
>>> --- a/drivers/xen/evtchn.c
>>> +++ b/drivers/xen/evtchn.c
>>> @@ -58,6 +58,8 @@
>>>   #include 
>>>   #include 
>>>   +static DEFINE_PER_CPU(int, bind_last_selected_cpu);
>> This should be moved into evtchn_bind_interdom_next_vcpu() since that's
>> the only place referencing it.
>
> Sure, I will do it.
>
>>
>> Why is it a percpu variable BTW? Wouldn't making it global result in
>> better interrupt distribution?
>
> The reason for percpu instead of global, was to avoid locking. We can
> have a global variable (last_cpu) without locking, but value of
> last_cpu won't be consistent, without locks. Moreover, since
> irq_affinity is also used in the calculation of cpu to bind, having a
> percpu or global wouldn't really matter, as the result (selected_cpu)
> is more likely to be random (because different irqs can have different
> affinity). What do you guys suggest.

Doesn't initial affinity (which is what we expect here since irqbalance
has not run yet) typically cover all guest VCPUs?

>
>>
>>> +
>>>   struct per_user_data {
>>>   struct mutex bind_mutex; /* serialize bind/unbind operations */
>>>   struct rb_root evtchns;
>>> @@ -421,6 +423,36 @@ static void evtchn_unbind_from_user(struct
>>> per_user_data *u,
>>>   del_evtchn(u, evtchn);
>>>   }
>>>   +static void evtchn_bind_interdom_next_vcpu(int evtchn)
>>> +{
>>> +unsigned int selected_cpu, irq;
>>> +struct irq_desc *desc = NULL;
>>> +unsigned long flags;
>>> +
>>> +irq = irq_from_evtchn(evtchn);
>>> +desc = irq_to_desc(irq);
>>> +
>>> +if (!desc)
>>> +return;
>>> +
>>> +raw_spin_lock_irqsave(&desc->lock, flags);
>>> +selected_cpu = this_cpu_read(bind_last_selected_cpu);
>>> +selected_cpu = cpumask_next_and(selected_cpu,
>>> +desc->irq_common_data.affinity, cpu_online_mask);
>>> +
>>> +if (unlikely(selected_cpu >= nr_cpu_ids))
>>> +selected_cpu =
>>> cpumask_first_and(desc->irq_common_data.affinity,
>>> +cpu_online_mask);
>>> +
>>> +raw_spin_unlock_irqrestore(&desc->lock, flags);
>> I think if you follow Juergen's suggestion of wrapping everything into
>> irq_enable/disable you can drop the lock altogether (assuming you keep
>> bind_last_selected_cpu percpu).
>>
>> -boris
>>
>
> I think we would still require spin_lock(). spin_lock is for irq_desc.

If you are trying to protect affinity then it may well change after you
drop the lock.

In fact, don't you have a race here? If we offline a VCPU we will (by
way of cpu_disable_common()->fixup_irqs()) update affinity to reflect
that a CPU is gone and there is a chance that xen_rebind_evtchn_to_cpu()
will happen after that.

So, contrary to what I said earlier ;-) not only do you need the lock,
but you should hold it across xen_rebind_evtchn_to_cpu() call. Does this
make sense?

-boris


>
>>> +this_cpu_write(bind_last_selected_cpu, selected_cpu);
>>> +
>>> +local_irq_disable();
>>> +/* unmask expects irqs to be disabled */
>>> +xen_rebind_evtchn_to_cpu(evtchn, selected_cpu);
>>> +local_irq_enable();
>>> +}
>>> +
>>>
>


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] QEMU XenServer/XenProject Working group meeting 10th May 2017

2017-06-05 Thread Jennifer Herbert

QEMU XenServer/XenProject Working group meeting 5th May 2017
============================================================

Attendees:
* Paul Durrant
* Andrew Cooper
* Ian Jackson
* Jenny Herbert
* Igor Druzhinin
* Simon Crow
* Marcus Granado

Reviewed previous action points

* Paul to carry on with Xen device model work. - Done.
* Ian - No progress with refactoring the libxl split, XenStore restrictions
  or using an FD for QMP cd insert and emulated hotplug, due to the feature
  freeze on 4.8.9.
* Andrew to look over Ian's patch series to see how bad the extra XenStore
  permission would be.  - No.
* Jenny – XenServer is continuing to make slow progress. We have a XenCentre
  patch that means it can talk to QEMU trad, and are working on some bugs.
  Various other small bits of work done.

Andrew to double check dm-op sub ops for continuation support.

XenServer uses XenStore to communicate the VNC QEMU clipboard to the
guest and back.  It was concluded this isn't nice, as it has caused
various security problems in the past.  The new plan is to implement this
using a shared page.

Next, XenStore restriction was discussed, including an approach
involving a shared ring buffer and removing the previous xs-restrict,
instead implementing an xs-restriction prefixing command within the
xenbus command.  Slightly fiddly, but a small number of parts.  It's
doable and not controversial.

Andrew had a different plan: to remove any use of XenStore from QEMU –
specifically for the hvm case, where XenStore is currently little used.
Two uses should be trivial to implement in QMP, and the other is
physmap, which Igor has been working on.

It's agreed the physmap key needs removing, and that removing the last
few uses of XenStore would be a good idea; however, if it seemed that
dealing with the physmap issue were to take a long time, the
xs-restrict could be used as an intermediate plan, which would allow
this project to move on while physmap or any other complications are
being sorted out.

Andrew is not convinced that allowing a guest to fiddle with the
physmap keys (as would have to be permitted under the xs-restrict
scheme) would be safe.  Certainly the guest could destroy itself,
which in itself is permissible, but as it changes the internal virtual
memory layout for QEMU, there might be some exploit hidden there –
this would need to be checked.

The conversation turned to the physmap keys, being the biggest XenStore
concern.  Andrew suggests that the correct way to fix it in a
compatible way would be if grant map foreign took in a pointer that
would be mmapped.  The pointer would allow the range to be mapped
exactly where it pointed.  A new library call to xen foreign map could
be written, but the complication of needing compatibility for QEMU
with older versions of Xen was raised.

Ian suggested that this wasn’t really a problem, since we could do the
old thing with the old versions.  Libxl already tests for the version
of QEMU, and so if it finds this too old, it does the physmap keys
thing; otherwise, it can use QEMU-depriv including the new physmap
mechanism.

The new physmap mechanism would not require a substitute for the
physmap keys, as QEMU already has all the information it needs.  The
keys were entirely to allow two parts of startup logic in QEMU to be
reversed.  Andrew says that Igor has a solution but doesn't think it's
upstreamable.

Ian suggests that given the size of the xs-restrict patch queue, if we
can fix the physmap issue relatively easily, we should, and then he
could drop most of the xs-restrict patch queue.  Ian offers to help
with the libxl part of the physmap work.

Igor explains that an approach he tried hit a block with QXL, which
initialised certain registers and wrote them into various memory
locations.  He tried in each place to insert an 'if Xen and
runstate=resuming, don't touch'.  But the fix is very intrusive in
terms of QXL code, and so he didn’t try to upstream it.

The idea of using helper functions was discussed.  Other helper
functions have been used before for similar problems.  A function
could be created to access vram, instead of just writing to a pointer.
All the access parts of QXL code could be redirected through this
helper function, and that would likely be upstreamable.  In particular,
it's been suggested before that helper functions could be used for range
checking.  They could be added for range checking, and then also
modified with the 'if xen and resuming' clauses.

Igor explains how another approach he had been looking at was to
change the order of QEMU startup, move the memory map, and
actually map the memory where QEMU expects it to be mapped.  Here
again, there was the issue of compatibility.  The compat layer is
still needed to work with old versions of libxc.  It was discussed how
QEMU would have to be able to work with both skews.  It could be
decided at compile time, as at this point you know if you have the
new function call.  If the new function call is not available, it
would 

[Xen-devel] [xen-4.9-testing test] 110008: tolerable FAIL - PUSHED

2017-06-05 Thread osstest service owner
flight 110008 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110008/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail 
in 109995 pass in 110008
 test-armhf-armhf-xl-arndale  10 debian-fixup fail in 109995 pass in 110008
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail in 109995 
pass in 110008
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail in 109995 pass in 
110008
 test-amd64-amd64-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail pass in 
109995

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop   fail REGR. vs. 109925

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 109925
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-installfail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-installfail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-arm64-arm64-xl  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass

version targeted for testing:
 xen  35f0fff2a67d1a5b93f9992e3a402ac3c896ae55
baseline version:
 xen  876800d5f9de8b15355172794cb82f505dd26e18

Last test of basis   109925  2017-06-01 11:14:13 Z4 days
Testing same since   109949  2017-06-03 00:54:25 Z2 days   

Re: [Xen-devel] [PATCH] x86/HVM: correct notion of new CPL in task switch emulation

2017-06-05 Thread Andrew Cooper
On 01/06/17 13:11, Jan Beulich wrote:
> Commit aac1df3d03 ("x86/HVM: introduce hvm_get_cpl() and respective
> hook") went too far in one aspect: When emulating a task switch we
> really shouldn't be looking at what hvm_get_cpl() returns, as we're
> switching all segment registers.
>
> However, instead of reverting the relevant parts of that commit, have
> the caller tell the segment loading function what the new CPL is. This
> at once fixes ES being loaded before CS so far having had its checks
> done against the old CPL.
>
> Reported-by: Andrew Cooper 
> Signed-off-by: Jan Beulich 

On further consideration, wouldn't it be better to audit all segment
registers, before updating any of them in the vmcs/vmcb?  This would
leave us with a far lower chance of other vmentry failures.

Loading the segment registers is beyond the commit point of a task
switch, and the manual says that the processor will try to skip further
segmentation checks in an attempt to deliver a fault in the new context.

~Andrew



[Xen-devel] [linux-linus test] 110006: regressions - FAIL

2017-06-05 Thread osstest service owner
flight 110006 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110006/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-xsm  9 debian-install   fail REGR. vs. 109994

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop  fail blocked in 109994
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 host-ping-check-xen fail 
like 109963
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 109963
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 109994
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 109994
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 109994
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 109994
 test-armhf-armhf-xl-rtds 11 guest-start  fail  like 109994
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 109994
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-installfail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-installfail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-installfail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-installfail never pass

version targeted for testing:
 linux            3c2993b8c6143d8a5793746a54eba8f86f95240f
baseline version:
 linux            ea094f3c830a67f252677aacba5d04ebcf55c4d9

Last test of basis   109994  2017-06-04 10:33:35 Z1 days
Testing same since   110006  2017-06-05 02:17:35 Z0 days1 attempts


People who touched revisions under test:
  Alexandre Belloni 
  Andi Shyti 
  Artem Savkov 
  Artemy Kovalyov 
  Arun Easi 
  Benjamin Coddington 
  Benjamin Tissoires 
  Byczkowski, Jakub 
  Dan Carpenter 

Re: [Xen-devel] (pv)?grub and PVHv2

2017-06-05 Thread Andrew Cooper
On 05/06/17 11:55, George Dunlap wrote:
> On Fri, Jun 2, 2017 at 10:58 AM, Roger Pau Monné  wrote:
>> On Fri, Jun 02, 2017 at 11:33:50AM +0200, Marek Marczykowski-Górecki wrote:
>>> Hi,
>>>
>>> Is there any method to boot PVHv2 domain using a kernel fetched from
>>> that domain's disk image, _without_ mounting it in dom0? Something like
>>> pvgrub was for PV.
>> Hello,
>>
>> Anthony (Cced) is working on an OVMF port, so it can be used as
>> firmware for PVHv2 guests.
> I think in theory it shouldn't be too hard to port the pvgrub2 code to
> boot into PVH, since it already boots in PV, right?
>
> Is this something we should try to encourage, or do you think it would
> be better to route everyone through EFI?

Even a PVH pvgrub still suffers from the a priori problem which makes booting
PV guests extremely difficult.  You don't know ahead-of-time which
bootloader the guest is using without peering at its disks, which opens
a massive attack surface in dom0.

Using things like EFI allows any compatible OS to function, not just
ones which use grub.

~Andrew



Re: [Xen-devel] [PATCH] x86/NPT: deal with fallout from 2Mb/1Gb unmapping change

2017-06-05 Thread George Dunlap
On 24/05/17 17:57, Boris Ostrovsky wrote:
> On 05/24/2017 06:21 AM, Jan Beulich wrote:
> On 24.05.17 at 11:14,  wrote:
>>> Commit efa9596e9d ("x86/mm: fix incorrect unmapping of 2MB and 1GB
>>> pages") left the NPT code untouched, as there is no explicit alignment
>>> check matching the one in EPT code. However, the now more widespread
>>> storing of INVALID_MFN into PTEs requires adjustments:
>>> - calculations when shattering large pages may spill into the p2m type
>>>   field (converting p2m_populate_on_demand to p2m_grant_map_rw) - use
>>>   OR instead of PLUS,
> 
> Would it be possible to just skip filling the entries if p2m_entry
> points to an INVALID_MFN?

No, because we still want to know the type of the entry, even though the
pfn is INVALID.

> 
> If not, I think a comment explaining the reason for using '|' would be
> useful.
> 
> 
>>> - the use of plain l{2,3}e_from_pfn() in p2m_pt_set_entry() results in
>>>   all upper (flag) bits being clobbered - introduce and use
>>>   p2m_l{2,3}e_from_pfn(), paralleling the existing L1 variant.
>>>
>>> Reported-by: Boris Ostrovsky 
>>> Signed-off-by: Jan Beulich 
> 
> Tested-by: Boris Ostrovsky 

Acked-by: George Dunlap 




Re: [Xen-devel] [PATCH for-4.9] Restore HVM_OP hypercall continuation

2017-06-05 Thread George Dunlap
On Mon, Jun 5, 2017 at 12:20 PM, George Dunlap  wrote:
> On 05/06/17 12:18, George Dunlap wrote:
>> Commit ae20ccf removed the hypercall continuation logic from the end
>> of do_hvm_op(), claiming:
>>
>> "This patch removes the need for handling HVMOP restarts, so that
>> infrastructure is removed."
>>
>> That turns out to be only half true.  The removal of
>> HVMOP_set_mem_type removed the need to store a start iteration value
>> in the hypercall continuation, but a grep through hvm.c for ERESTART
>> turns up at least two places where do_hvm_op() may still need a
>> hypercall continuation:
>>
>>  * HVMOP_set_hvm_param can return -ERESTART when setting
>> HVM_PARAM_IDENT_PT in the event that it fails to acquire the domctl
>> lock
>>
>>  * HVMOP_flush_tlbs can return -ERESTART if several vcpus call it at
>>the same time
>>
>> In both cases, a simple restart (with no stored iteration information)
>> is necessary.
>>
>> Add a check for -ERESTART again, along with a comment at the top of
>> the function regarding the lack of decoding any information from the
>> op value.
>>
>> Remove a stray blank line at the end of the file while we're here.
>>
>> Reported-by: Xudong Hao 
>> Signed-off-by: George Dunlap 
>
> Oh, actually Andy and Julien both already acked this.  I'll check it in
> on staging and cherry-pick it to staging-4.9 unless I hear otherwise soon.

Either that, or I'll discover that it's already been checked in and I
didn't notice because I failed to merge origin/staging into staging.

Sorry for the noise everyone.

 -George



Re: [Xen-devel] [PATCH for-4.9] Restore HVM_OP hypercall continuation

2017-06-05 Thread George Dunlap
On 05/06/17 12:18, George Dunlap wrote:
> Commit ae20ccf removed the hypercall continuation logic from the end
> of do_hvm_op(), claiming:
> 
> "This patch removes the need for handling HVMOP restarts, so that
> infrastructure is removed."
> 
> That turns out to be only half true.  The removal of
> HVMOP_set_mem_type removed the need to store a start iteration value
> in the hypercall continuation, but a grep through hvm.c for ERESTART
> turns up at least two places where do_hvm_op() may still need a
> hypercall continuation:
> 
>  * HVMOP_set_hvm_param can return -ERESTART when setting
> HVM_PARAM_IDENT_PT in the event that it fails to acquire the domctl
> lock
> 
>  * HVMOP_flush_tlbs can return -ERESTART if several vcpus call it at
>the same time
> 
> In both cases, a simple restart (with no stored iteration information)
> is necessary.
> 
> Add a check for -ERESTART again, along with a comment at the top of
> the function regarding the lack of decoding any information from the
> op value.
> 
> Remove a stray blank line at the end of the file while we're here.
> 
> Reported-by: Xudong Hao 
> Signed-off-by: George Dunlap 

Oh, actually Andy and Julien both already acked this.  I'll check it in
on staging and cherry-pick it to staging-4.9 unless I hear otherwise soon.

 -George

> ---
> CC: Andrew Cooper 
> CC: Jan Beulich 
> CC: Paul Durrant 
> CC: Julien Grall 
> ---
>  xen/arch/x86/hvm/hvm.c | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 81691e2..e3e817d 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4544,6 +4544,13 @@ long do_hvm_op(unsigned long op, 
> XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
>  long rc = 0;
>  
> +/* 
> + * NB: hvm_op can be part of a restarted hypercall; but at the
> + * moment the only hypercalls which do continuations don't need to
> + * store any iteration information (since they're just re-trying
> + * the acquisition of a lock).
> + */
> +
>  switch ( op )
>  {
>  case HVMOP_set_evtchn_upcall_vector:
> @@ -4636,6 +4643,10 @@ long do_hvm_op(unsigned long op, 
> XEN_GUEST_HANDLE_PARAM(void) arg)
>  }
>  }
>  
> +if ( rc == -ERESTART )
> +rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
> +   op, arg);
> +
>  return rc;
>  }
>  
> @@ -4869,4 +4880,3 @@ void hvm_set_segment_register(struct vcpu *v, enum 
> x86_segment seg,
>   * indent-tabs-mode: nil
>   * End:
>   */
> -
> 




[Xen-devel] [PATCH for-4.9] Restore HVM_OP hypercall continuation

2017-06-05 Thread George Dunlap
Commit ae20ccf removed the hypercall continuation logic from the end
of do_hvm_op(), claiming:

"This patch removes the need for handling HVMOP restarts, so that
infrastructure is removed."

That turns out to be only half true.  The removal of
HVMOP_set_mem_type removed the need to store a start iteration value
in the hypercall continuation, but a grep through hvm.c for ERESTART
turns up at least two places where do_hvm_op() may still need a
hypercall continuation:

 * HVMOP_set_hvm_param can return -ERESTART when setting
HVM_PARAM_IDENT_PT in the event that it fails to acquire the domctl
lock

 * HVMOP_flush_tlbs can return -ERESTART if several vcpus call it at
   the same time

In both cases, a simple restart (with no stored iteration information)
is necessary.

Add a check for -ERESTART again, along with a comment at the top of
the function regarding the lack of decoding any information from the
op value.

Remove a stray blank line at the end of the file while we're here.

Reported-by: Xudong Hao 
Signed-off-by: George Dunlap 
---
CC: Andrew Cooper 
CC: Jan Beulich 
CC: Paul Durrant 
CC: Julien Grall 
---
 xen/arch/x86/hvm/hvm.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 81691e2..e3e817d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4544,6 +4544,13 @@ long do_hvm_op(unsigned long op, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
 
+/* 
+ * NB: hvm_op can be part of a restarted hypercall; but at the
+ * moment the only hypercalls which do continuations don't need to
+ * store any iteration information (since they're just re-trying
+ * the acquisition of a lock).
+ */
+
 switch ( op )
 {
 case HVMOP_set_evtchn_upcall_vector:
@@ -4636,6 +4643,10 @@ long do_hvm_op(unsigned long op, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 }
 }
 
+if ( rc == -ERESTART )
+rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
+   op, arg);
+
 return rc;
 }
 
@@ -4869,4 +4880,3 @@ void hvm_set_segment_register(struct vcpu *v, enum 
x86_segment seg,
  * indent-tabs-mode: nil
  * End:
  */
-
-- 
2.1.4




Re: [Xen-devel] [PATCH for 4.9] vif-common.sh: Have iptables wait for the xtables lock

2017-06-05 Thread George Dunlap
Forgot to cc' the release manager.

On Mon, Jun 5, 2017 at 11:02 AM, George Dunlap  wrote:
> iptables has a system-wide lock on the xtables.  Strangely though, in
> the case of two concurrent invocations, the default is for the
> instance not grabbing the lock to exit out rather than waiting for it.
> This means that when starting a large number of guests in parallel,
> many will fail out with messages like this:
>
>   2017-05-10 11:45:40 UTC libxl: error: libxl_exec.c:118: 
> libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge remove [18767] 
> exited with error status 4
>   2017-05-10 11:50:52 UTC libxl: error: libxl_exec.c:118: 
> libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge offline [1554] 
> exited with error status 4
>
> In order to instruct iptables to wait for the lock, you have to
> specify '-w'.  Unfortunately, not all versions of iptables have the
> '-w' option, so on first invocation check to see if it accepts the -w
> command.
>
> Reported-by: Antony Saba 
> Signed-off-by: George Dunlap 
> ---
> CC: Ian Jackson 
> CC: Wei Liu 
> ---
>  tools/hotplug/Linux/vif-common.sh | 38 +++---
>  1 file changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/tools/hotplug/Linux/vif-common.sh 
> b/tools/hotplug/Linux/vif-common.sh
> index 6e8d584..29cd8dd 100644
> --- a/tools/hotplug/Linux/vif-common.sh
> +++ b/tools/hotplug/Linux/vif-common.sh
> @@ -120,6 +120,38 @@ fi
>  ip=${ip:-}
>  ip=$(xenstore_read_default "$XENBUS_PATH/ip" "$ip")
>
> +IPTABLES_WAIT_RUNE="-w"
> +IPTABLES_WAIT_RUNE_CHECKED=false
> +
> +# When iptables introduced locking, in the event of lock contention,
> +# they made "fail" rather than "wait for the lock" the default
> +# behavior.  In order to select "wait for the lock" behavior, you have
> +# to add the '-w' parameter.  Unfortunately, both the locking and the
> +# option were only introduced in 2013, and older versions of iptables
> +# will fail if the '-w' parameter is included (since they don't
> +# recognize it).  So check to see if it's supported the first time we
> +# use it.
> +iptables_w()
> +{
> +if ! $IPTABLES_WAIT_RUNE_CHECKED ; then
> +   iptables $IPTABLES_WAIT_RUNE -L -n >& /dev/null
> +   if [[ $? == 0 ]] ; then
> +   # If we succeed, then -w is supported; don't check again
> +   IPTABLES_WAIT_RUNE_CHECKED=true
> +   elif [[ $? == 2 ]] ; then
> +   iptables -L -n >& /dev/null
> +   if [[ $? != 2 ]] ; then
> +   # If we fail with PARAMETER_PROBLEM (2) with -w and
> +   # don't fail with PARAMETER_PROBLEM without it, then
> +   # it's the -w option
> +   IPTABLES_WAIT_RUNE_CHECKED=true
> +   IPTABLES_WAIT_RUNE=""
> +   fi
> +   fi
> +fi
> +iptables $IPTABLES_WAIT_RUNE "$@"
> +}
> +
>  frob_iptable()
>  {
>if [ "$command" == "online" -o "$command" == "add" ]
> @@ -129,9 +161,9 @@ frob_iptable()
>  local c="-D"
>fi
>
> -  iptables "$c" FORWARD -m physdev --physdev-is-bridged --physdev-in "$dev" \
> +  iptables_w "$c" FORWARD -m physdev --physdev-is-bridged --physdev-in 
> "$dev" \
>  "$@" -j ACCEPT 2>/dev/null &&
> -  iptables "$c" FORWARD -m physdev --physdev-is-bridged --physdev-out "$dev" 
> \
> +  iptables_w "$c" FORWARD -m physdev --physdev-is-bridged --physdev-out 
> "$dev" \
>  -j ACCEPT 2>/dev/null
>
>if [ \( "$command" == "online" -o "$command" == "add" \) -a $? -ne 0 ]
> @@ -154,7 +186,7 @@ handle_iptable()
># binary is not sufficient, because the user may not have the appropriate
># modules installed.  If iptables is not working, then there's no need to 
> do
># anything with it, so we can just return.
> -  if ! iptables -L -n >&/dev/null
> +  if ! iptables_w -L -n >&/dev/null
>then
>  return
>fi
> --
> 2.1.4
>
>


Re: [Xen-devel] (pv)?grub and PVHv2

2017-06-05 Thread George Dunlap
On Fri, Jun 2, 2017 at 10:58 AM, Roger Pau Monné  wrote:
> On Fri, Jun 02, 2017 at 11:33:50AM +0200, Marek Marczykowski-Górecki wrote:
>> Hi,
>>
>> Is there any method to boot PVHv2 domain using a kernel fetched from
>> that domain's disk image, _without_ mounting it in dom0? Something like
>> pvgrub was for PV.
>
> Hello,
>
> Anthony (Cced) is working on an OVMF port, so it can be used as
> firmware for PVHv2 guests.

I think in theory it shouldn't be too hard to port the pvgrub2 code to
boot into PVH, since it already boots in PV, right?

Is this something we should try to encourage, or do you think it would
be better to route everyone through EFI?

 -George



Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Anoob Soman

On 02/06/17 17:24, Boris Ostrovsky wrote:
  
  static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,

bool force)
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index 10f1ef5..1192f24 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -58,6 +58,8 @@
  #include 
  #include 
  
+static DEFINE_PER_CPU(int, bind_last_selected_cpu);

This should be moved into evtchn_bind_interdom_next_vcpu() since that's
the only place referencing it.


Sure, I will do it.



Why is it a percpu variable BTW? Wouldn't making it global result in
better interrupt distribution?


The reason for percpu instead of global was to avoid locking. We can
have a global variable (last_cpu) without locking, but the value of
last_cpu won't be consistent without locks. Moreover, since irq_affinity
is also used in the calculation of the cpu to bind, having a percpu or
global wouldn't really matter, as the result (selected_cpu) is more
likely to be random (because different irqs can have different
affinities). What do you guys suggest?





+
  struct per_user_data {
struct mutex bind_mutex; /* serialize bind/unbind operations */
struct rb_root evtchns;
@@ -421,6 +423,36 @@ static void evtchn_unbind_from_user(struct per_user_data 
*u,
del_evtchn(u, evtchn);
  }
  
+static void evtchn_bind_interdom_next_vcpu(int evtchn)

+{
+   unsigned int selected_cpu, irq;
+   struct irq_desc *desc = NULL;
+   unsigned long flags;
+
+   irq = irq_from_evtchn(evtchn);
+   desc = irq_to_desc(irq);
+
+   if (!desc)
+   return;
+
+   raw_spin_lock_irqsave(&desc->lock, flags);
+   selected_cpu = this_cpu_read(bind_last_selected_cpu);
+   selected_cpu = cpumask_next_and(selected_cpu,
+   desc->irq_common_data.affinity, cpu_online_mask);
+
+   if (unlikely(selected_cpu >= nr_cpu_ids))
+   selected_cpu = cpumask_first_and(desc->irq_common_data.affinity,
+   cpu_online_mask);
+
+   raw_spin_unlock_irqrestore(&desc->lock, flags);

I think if you follow Juergen's suggestion of wrapping everything into
irq_enable/disable you can drop the lock altogether (assuming you keep
bind_last_selected_cpu percpu).

-boris



I think we would still require the spin_lock(); the spin_lock protects the irq_desc.


+   this_cpu_write(bind_last_selected_cpu, selected_cpu);
+
+   local_irq_disable();
+   /* unmask expects irqs to be disabled */
+   xen_rebind_evtchn_to_cpu(evtchn, selected_cpu);
+   local_irq_enable();
+}
+




___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH for 4.9] vif-common.sh: Have iptables wait for the xtables lock

2017-06-05 Thread George Dunlap
iptables has a system-wide lock on the xtables.  Strangely though, in
the case of two concurrent invocations, the default is for the
instance not grabbing the lock to exit out rather than waiting for it.
This means that when starting a large number of guests in parallel,
many will fail out with messages like this:

  2017-05-10 11:45:40 UTC libxl: error: libxl_exec.c:118: 
libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge remove [18767] 
exited with error status 4
  2017-05-10 11:50:52 UTC libxl: error: libxl_exec.c:118: 
libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge offline [1554] 
exited with error status 4

In order to instruct iptables to wait for the lock, you have to
specify '-w'.  Unfortunately, not all versions of iptables have the
'-w' option, so on first invocation check whether it is accepted.

Reported-by: Antony Saba 
Signed-off-by: George Dunlap 
---
CC: Ian Jackson 
CC: Wei Liu 
---
 tools/hotplug/Linux/vif-common.sh | 38 +++---
 1 file changed, 35 insertions(+), 3 deletions(-)

diff --git a/tools/hotplug/Linux/vif-common.sh b/tools/hotplug/Linux/vif-common.sh
index 6e8d584..29cd8dd 100644
--- a/tools/hotplug/Linux/vif-common.sh
+++ b/tools/hotplug/Linux/vif-common.sh
@@ -120,6 +120,38 @@ fi
 ip=${ip:-}
 ip=$(xenstore_read_default "$XENBUS_PATH/ip" "$ip")
 
+IPTABLES_WAIT_RUNE="-w"
+IPTABLES_WAIT_RUNE_CHECKED=false
+
+# When iptables introduced locking, in the event of lock contention,
+# they made "fail" rather than "wait for the lock" the default
+# behavior.  In order to select "wait for the lock" behavior, you have
+# to add the '-w' parameter.  Unfortunately, both the locking and the
+# option were only introduced in 2013, and older versions of iptables
+# will fail if the '-w' parameter is included (since they don't
+# recognize it).  So check to see if it's supported the first time we
+# use it.
+iptables_w()
+{
+    if ! $IPTABLES_WAIT_RUNE_CHECKED ; then
+        iptables $IPTABLES_WAIT_RUNE -L -n >& /dev/null ; rc=$?
+        if [[ $rc == 0 ]] ; then
+            # If we succeed, then -w is supported; don't check again
+            IPTABLES_WAIT_RUNE_CHECKED=true
+        elif [[ $rc == 2 ]] ; then
+            iptables -L -n >& /dev/null
+            if [[ $? != 2 ]] ; then
+                # If we fail with PARAMETER_PROBLEM (2) with -w and
+                # don't fail with PARAMETER_PROBLEM without it, then
+                # it's the -w option
+                IPTABLES_WAIT_RUNE_CHECKED=true
+                IPTABLES_WAIT_RUNE=""
+            fi
+        fi
+    fi
+    iptables $IPTABLES_WAIT_RUNE "$@"
+}
+
 frob_iptable()
 {
   if [ "$command" == "online" -o "$command" == "add" ]
@@ -129,9 +161,9 @@ frob_iptable()
 local c="-D"
   fi
 
-  iptables "$c" FORWARD -m physdev --physdev-is-bridged --physdev-in "$dev" \
+  iptables_w "$c" FORWARD -m physdev --physdev-is-bridged --physdev-in "$dev" \
 "$@" -j ACCEPT 2>/dev/null &&
-  iptables "$c" FORWARD -m physdev --physdev-is-bridged --physdev-out "$dev" \
+  iptables_w "$c" FORWARD -m physdev --physdev-is-bridged --physdev-out "$dev" \
 -j ACCEPT 2>/dev/null
 
   if [ \( "$command" == "online" -o "$command" == "add" \) -a $? -ne 0 ]
@@ -154,7 +186,7 @@ handle_iptable()
   # binary is not sufficient, because the user may not have the appropriate
   # modules installed.  If iptables is not working, then there's no need to do
   # anything with it, so we can just return.
-  if ! iptables -L -n >&/dev/null
+  if ! iptables_w -L -n >&/dev/null
   then
 return
   fi
-- 
2.1.4
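The probe-and-cache pattern the patch uses (try the flag once, remember the result, fall back if the tool rejects it with PARAMETER_PROBLEM) can be exercised standalone. `mytool` below is a hypothetical stub standing in for an old iptables that rejects `-w` with exit status 2; the names are illustrative only:

```shell
# Sketch of the flag-probe-and-cache pattern, with a stub "mytool"
# standing in for iptables (hypothetical names throughout).
WAIT_FLAG="-w"
WAIT_FLAG_CHECKED=false

mytool() {
    # Stub: an old tool that rejects -w with status 2 (PARAMETER_PROBLEM)
    if [ "$1" = "-w" ]; then return 2; fi
    return 0
}

mytool_w() {
    if ! $WAIT_FLAG_CHECKED; then
        # Capture the exit status immediately so a later test can't clobber it
        mytool $WAIT_FLAG --probe >/dev/null 2>&1; rc=$?
        if [ $rc -eq 0 ]; then
            WAIT_FLAG_CHECKED=true          # flag supported; cache that
        elif [ $rc -eq 2 ]; then
            mytool --probe >/dev/null 2>&1
            if [ $? -ne 2 ]; then
                WAIT_FLAG_CHECKED=true      # flag itself was the problem
                WAIT_FLAG=""                # drop it from future calls
            fi
        fi
    fi
    mytool $WAIT_FLAG "$@"
}

mytool_w run
echo "flag after probe: '${WAIT_FLAG}'"
```

After the first call the probe result is cached, so every subsequent `mytool_w` invocation pays no extra probe cost.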




Re: [Xen-devel] [PATCH] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU

2017-06-05 Thread Anoob Soman

On 02/06/17 16:10, Juergen Gross wrote:


I'd prefer to have irqs disabled from taking the lock until here.
This will avoid problems due to preemption and will be faster as it
avoids one irq on/off cycle. So:

local_irq_disable();
raw_spin_lock();
...
raw_spin_unlock();
this_cpu_write();
xen_rebind_evtchn_to_cpu();
local_irq_enable();


Juergen



Agreed, I will send a V2 with your suggestion.


-Anoob.




Re: [Xen-devel] [PATCH] x86/PoD: drop a pointless local variable

2017-06-05 Thread George Dunlap
On Wed, May 31, 2017 at 8:52 AM, Jan Beulich  wrote:
> ... and move another one into a more narrow scope.
>
> Signed-off-by: Jan Beulich 

Acked-by: George Dunlap 

>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1180,8 +1180,6 @@ guest_physmap_mark_populate_on_demand(st
>  {
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>      unsigned long i, n, pod_count = 0;
> -    p2m_type_t ot;
> -    mfn_t omfn;
>      int rc = 0;
>
>      if ( !paging_mode_translate(d) )
> @@ -1194,10 +1192,11 @@ guest_physmap_mark_populate_on_demand(st
>      /* Make sure all gpfns are unused */
>      for ( i = 0; i < (1UL << order); i += n )
>      {
> +        p2m_type_t ot;
>          p2m_access_t a;
>          unsigned int cur_order;
>
> -        omfn = p2m->get_entry(p2m, gfn + i, &ot, &a, 0, &cur_order, NULL);
> +        p2m->get_entry(p2m, gfn + i, &ot, &a, 0, &cur_order, NULL);
>          n = 1UL << min(order, cur_order);
>          if ( p2m_is_ram(ot) )
>          {
>
>
>
>


Re: [Xen-devel] [Xen-users] 4.8.1 migration fails over 1st interface, works over 2nd

2017-06-05 Thread Andrew Cooper
On 05/06/17 10:17, George Dunlap wrote:
> On Mon, May 29, 2017 at 10:04 AM, Andreas Pflug
>  wrote:
>> I've set up a fresh Debian stretch with xen 4.8.1 and shared storage via
>> custom block scripts on two machines.
>>
>> Both machines have one main interface with some VLAN stuff, the VM
>> bridges and the SAN interface connected to a switch, and another
>> interface directly interconnecting both machines. To ensure packets
>> don't take weird routes, arp_announce=2/arp_ignore=1 is configured.
>> Everything on the primary interface seems to work flawlessly, e.g.
>> ssh-ing from one machine to the other (no firewall or other filter
>> involved).
>>
>> With xl migrate  , migration
>> works as expected, bringing up the test domain fully functional back again.
>>
>> With xl migrate --debug  , I get
>> xc: info: Saving domain 17, type x86 PV
>> xc: info: Found x86 PV domain from Xen 4.8
>> xc: info: Restoring domain
>>
>> and migration will stop here. The target machine will show the incoming
>> VM, but nothing more happens. I have to kill xl on the target, Ctrl-C xl
>> on the source machine, and destroy the target VM--incoming
> Are you saying that migration works fine for you *unless* you add the
> `--debug` option?
>
> Andy / Wei, any ideas?

--debug adds an extra full memory copy, using memcmp() on the destination
side to spot whether any memory got missed during the live phase.

It is only intended for development purposes, but I would expect it to
function normally in the way you've used it.

What does `xl -vvv migrate ...` say?

~Andrew



Re: [Xen-devel] [Xen-users] 4.8.1 migration fails over 1st interface, works over 2nd

2017-06-05 Thread George Dunlap
On Mon, May 29, 2017 at 10:04 AM, Andreas Pflug
 wrote:
> I've set up a fresh Debian stretch with xen 4.8.1 and shared storage via
> custom block scripts on two machines.
>
> Both machines have one main interface with some VLAN stuff, the VM
> bridges and the SAN interface connected to a switch, and another
> interface directly interconnecting both machines. To ensure packets
> don't take weird routes, arp_announce=2/arp_ignore=1 is configured.
> Everything on the primary interface seems to work flawlessly, e.g.
> ssh-ing from one machine to the other (no firewall or other filter
> involved).
>
> With xl migrate  , migration
> works as expected, bringing up the test domain fully functional back again.
>
> With xl migrate --debug  , I get
> xc: info: Saving domain 17, type x86 PV
> xc: info: Found x86 PV domain from Xen 4.8
> xc: info: Restoring domain
>
> and migration will stop here. The target machine will show the incoming
> VM, but nothing more happens. I have to kill xl on the target, Ctrl-C xl
> on the source machine, and destroy the target VM--incoming

Are you saying that migration works fine for you *unless* you add the
`--debug` option?

Andy / Wei, any ideas?

- George



[Xen-devel] [linux-4.9 test] 110003: regressions - FAIL

2017-06-05 Thread osstest service owner
flight 110003 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110003/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   6 xen-boot fail REGR. vs. 107358
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-start/win.repeat fail in 109749 
REGR. vs. 107358

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-multivcpu 15 guest-start/debian.repeat fail in 109749 pass 
in 109913
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail in 
109749 pass in 110003
 test-amd64-i386-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail in 109749 pass 
in 110003
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail in 109749 
pass in 110003
 test-amd64-amd64-rumprun-amd64 16 rumprun-demo-xenstorels/xenstorels.repeat 
fail in 109749 pass in 110003
 test-armhf-armhf-libvirt-xsm  5 xen-install  fail in 109878 pass in 110003
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail in 109961 pass in 
109878
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail in 109961 
pass in 110003
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail in 109961 
pass in 110003
 test-amd64-i386-xl-raw   9 debian-di-install fail in 109961 pass in 110003
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail pass in 109749
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 
109961
 test-amd64-i386-pair 15 debian-install/dst_hostfail pass in 109988
 test-amd64-amd64-xl-pvh-intel 17 guest-localmigrate/x10fail pass in 109988
 test-amd64-i386-freebsd10-amd64 10 guest-start fail pass in 109988

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  9 debian-install   fail REGR. vs. 107358

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-multivcpu 12 migrate-support-check fail in 109749 never 
pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-check fail in 109749 
never pass
 test-arm64-arm64-libvirt12 migrate-support-check fail in 109749 never pass
 test-arm64-arm64-libvirt 13 saverestore-support-check fail in 109749 never pass
 test-arm64-arm64-xl-rtds12 migrate-support-check fail in 109749 never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-check fail in 109749 never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-check fail in 109749 never 
pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-check fail in 109749 
never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-start/win.repeat fail in 109878 
like 107358
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-start/win.repeat fail in 109988 
blocked in 107358
 test-armhf-armhf-xl-vhd   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-xsm   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-rtds  6 xen-boot fail  like 107358
 test-armhf-armhf-xl   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail like 107358
 test-armhf-armhf-libvirt-raw  6 xen-boot fail  like 107358
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 107358
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt  6 xen-boot fail  like 107358
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale   6 xen-boot fail   never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-installfail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 

[Xen-devel] [PATCH v1] xen: fix HYPERVISOR_dm_op() prototype

2017-06-05 Thread Sergey Dyasli
Change the third parameter to be the required struct xen_dm_op_buf *
instead of a generic void * (which blindly accepts any pointer).

Signed-off-by: Sergey Dyasli 
---
 arch/x86/include/asm/xen/hypercall.h | 3 ++-
 include/xen/arm/hypercall.h  | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index f6d20f6cca12..e111ae874647 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -49,6 +49,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * The hypercall asms have to meet several constraints:
@@ -474,7 +475,7 @@ HYPERVISOR_xenpmu_op(unsigned int op, void *arg)
 
 static inline int
 HYPERVISOR_dm_op(
-   domid_t dom, unsigned int nr_bufs, void *bufs)
+   domid_t dom, unsigned int nr_bufs, struct xen_dm_op_buf *bufs)
 {
return _hypercall3(int, dm_op, dom, nr_bufs, bufs);
 }
diff --git a/include/xen/arm/hypercall.h b/include/xen/arm/hypercall.h
index 73db4b2eeb89..d3a732d1ede8 100644
--- a/include/xen/arm/hypercall.h
+++ b/include/xen/arm/hypercall.h
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 
 long privcmd_call(unsigned call, unsigned long a1,
unsigned long a2, unsigned long a3,
@@ -53,7 +54,8 @@ int HYPERVISOR_physdev_op(int cmd, void *arg);
 int HYPERVISOR_vcpu_op(int cmd, int vcpuid, void *extra_args);
 int HYPERVISOR_tmem_op(void *arg);
 int HYPERVISOR_vm_assist(unsigned int cmd, unsigned int type);
-int HYPERVISOR_dm_op(domid_t domid, unsigned int nr_bufs, void *bufs);
+int HYPERVISOR_dm_op(domid_t domid, unsigned int nr_bufs,
+struct xen_dm_op_buf *bufs);
 int HYPERVISOR_platform_op_raw(void *arg);
 static inline int HYPERVISOR_platform_op(struct xen_platform_op *op)
 {
-- 
2.11.0




[Xen-devel] [ovmf test] 110007: all pass - PUSHED

2017-06-05 Thread osstest service owner
flight 110007 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110007/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 7ec69844b8f1d348c0699cc88c728acb13ad
baseline version:
 ovmf a04ec6d9f70f7eedf5ab49b098970245270fa594

Last test of basis   109950  2017-06-03 01:16:07 Z2 days
Testing same since   110007  2017-06-05 03:06:01 Z0 days1 attempts


People who touched revisions under test:
  Ruiyu Ni 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=7ec69844b8f1d348c0699cc88c728acb13ad
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 
7ec69844b8f1d348c0699cc88c728acb13ad
+ branch=ovmf
+ revision=7ec69844b8f1d348c0699cc88c728acb13ad
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.9-testing
+ '[' x7ec69844b8f1d348c0699cc88c728acb13ad = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : 

Re: [Xen-devel] [PATCH v11 12/23] x86: refactor psr: L3 CAT: set value: implement write msr flow.

2017-06-05 Thread Yi Sun
On 17-05-30 09:35:53, Jan Beulich wrote:
> >>> On 03.05.17 at 10:44,  wrote:
> > +struct cos_write_info
> > +{
> > +    unsigned int cos;
> > +    struct feat_node *feature;
> > +    uint32_t *val;
> > +    enum psr_feat_type feat_type;
> > +};
> > +
> > +static void do_write_psr_msrs(void *data)
> > +{
> > +    struct cos_write_info *info = data;
> > +    unsigned int cos = info->cos;
> > +    struct feat_node *feat = info->feature;
> > +    const struct feat_props *props = feat_props[info->feat_type];
> > +    unsigned int i;
> > +
> > +    for ( i = 0; i < props->cos_num; i++ )
> > +    {
> > +        if ( feat->cos_reg_val[cos * props->cos_num + i] != info->val[i] )
> > +        {
> > +            feat->cos_reg_val[cos * props->cos_num + i] = info->val[i];
> > +            props->write_msr(cos, info->val[i], props->type[i]);
> > +        }
> > +    }
> > +}
> 
> Again you're passing feat_type here only to get at props. Why
> not pass props right away? Also I think it would make sense to
> pull props->cos_num into a local variable.
> 
Have modified these according to your comments. Thanks!

> >  static int write_psr_msrs(unsigned int socket, unsigned int cos,
> >uint32_t val[], unsigned int array_len,
> >enum psr_feat_type feat_type)
> >  {
> > -    return -ENOENT;
> > +    unsigned int i;
> > +    struct psr_socket_info *info = get_socket_info(socket);
> > +    struct cos_write_info data =
> > +    {
> > +        .cos = cos,
> > +        .feature = info->features[feat_type],
> > +        .feat_type = feat_type,
> > +    };
> > +
> > +    if ( cos > info->features[feat_type]->cos_max )
> > +        return -EINVAL;
> > +
> > +    /* Skip to the feature's value head. */
> > +    for ( i = 0; i < feat_type; i++ )
> > +    {
> > +        if ( !info->features[i] )
> > +            continue;
> 
> This is inconsistent with checks done elsewhere, where you also
> check feat_props[feat_type] against NULL. I've made a comment
> regarding whether both checks are wanted in a uniform or non-
> uniform way pretty early in the series. Whatever is selected
> should then be used consistently.
> 
Have changed it. Thanks!

> Jan



Re: [Xen-devel] [PATCH v11 14/23] x86: refactor psr: CDP: implement get hw info flow.

2017-06-05 Thread Yi Sun
On 17-05-31 03:40:40, Jan Beulich wrote:
> >>> On 03.05.17 at 10:44,  wrote:
> > --- a/xen/arch/x86/psr.c
> > +++ b/xen/arch/x86/psr.c
> > @@ -207,7 +207,9 @@ static void free_socket_resources(unsigned int socket)
> >  memset(info->dom_ids, 0, ((DOMID_IDLE + 1) + 7) / 8);
> >  }
> >  
> > -static enum psr_feat_type psr_cbm_type_to_feat_type(enum cbm_type type)
> > +static enum psr_feat_type psr_cbm_type_to_feat_type(
> > +    const struct psr_socket_info *info,
> > +    enum cbm_type type)
> 
> Couldn't you avoid adding this new parameter by checking ...
> 
> > @@ -215,7 +217,18 @@ static enum psr_feat_type psr_cbm_type_to_feat_type(enum cbm_type type)
> >  {
> >      case PSR_CBM_TYPE_L3:
> >          feat_type = PSR_SOCKET_L3_CAT;
> > +
> > +        /* If type is L3 CAT but we cannot find it in feature array, try CDP. */
> > +        if ( !info->features[feat_type] )
> 
> ... the props array entry here?
> 
Sure, thanks!

> Jan



[Xen-devel] [linux-4.1 test] 110002: tolerable FAIL - PUSHED

2017-06-05 Thread osstest service owner
flight 110002 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110002/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm 17 guest-start/debian.repeat fail in 109834 pass 
in 110002
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail in 109834 
pass in 110002
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 leak-check/basis(8) fail 
in 109983 pass in 110002
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail in 
109983 pass in 110002
 test-amd64-i386-freebsd10-amd64 10 guest-start   fail in 109983 pass in 110002
 test-amd64-amd64-qemuu-nested-intel 9 debian-hvm-install fail in 109983 pass 
in 110002
 test-amd64-i386-libvirt-xsm  11 guest-start  fail in 109983 pass in 110002
 test-amd64-amd64-libvirt-xsm 13 saverestore-support-check fail in 109983 pass 
in 110002
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail in 
109983 pass in 110002
 test-amd64-i386-freebsd10-i386 10 guest-startfail in 109983 pass in 110002
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail in 109983 
pass in 110002
 test-amd64-i386-xl  19 guest-start/debian.repeat fail in 109983 pass in 110002
 test-amd64-i386-qemuu-rhel6hvm-intel 10 guest-stop fail in 109983 pass in 
110002
 test-amd64-i386-xl-xsm 19 guest-start/debian.repeat fail in 109983 pass in 
110002
 test-amd64-amd64-libvirt-vhd 10 guest-start  fail in 109983 pass in 110002
 test-amd64-i386-rumprun-i386 16 rumprun-demo-xenstorels/xenstorels.repeat fail 
in 109983 pass in 110002
 test-amd64-amd64-xl-pvh-amd   5 xen-install  fail in 109983 pass in 110002
 test-amd64-amd64-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail in 109983 
pass in 110002
 test-amd64-amd64-xl-rtds  9 debian-install fail pass in 109834
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 
109983
 test-armhf-armhf-xl-arndale  15 guest-start/debian.repeat  fail pass in 109983

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-qcow2  6 xen-boot   fail in 109834 never pass
 test-arm64-arm64-xl-multivcpu  6 xen-bootfail in 109834 never pass
 test-arm64-arm64-libvirt  6 xen-boot fail in 109834 never pass
 test-arm64-arm64-xl-rtds  6 xen-boot fail in 109834 never pass
 test-amd64-amd64-rumprun-amd64 16 rumprun-demo-xenstorels/xenstorels.repeat 
fail in 109983 like 106655
 test-armhf-armhf-xl-rtds 11 guest-start fail in 109983 like 106669
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop   fail in 109983 like 106776
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeatfail  like 106756
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 106776
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 106776
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 106776
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 106776
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 106776
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm  6 xen-boot fail   never pass
 test-arm64-arm64-xl   6 xen-boot fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-examine  6 reboot   fail   never pass
 test-arm64-arm64-xl-credit2   6 xen-boot fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-installfail never pass
 test-arm64-arm64-xl-xsm   6 xen-boot fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-installfail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check