Re: [Xen-devel] [SeaBIOS] Seabios build failure with gcc 6.2.0 in Debian Sid

2016-11-06 Thread Wei Liu
On Sat, Nov 05, 2016 at 02:09:22PM +0100, Dario Faggioli wrote:
> On Fri, 2016-11-04 at 14:36 -0400, Kevin O'Connor wrote:
> > On Fri, Nov 04, 2016 at 07:06:07PM +0100, Dario Faggioli wrote:
> > > And even more to this:
> > > https://lists.debian.org/debian-gcc/2016/10/msg00147.html
> > > 
> > > A colleague of mine (Cc-ed) said in chat that gcc 6.2.1 (don't know
> > > on
> > > what distro) seems to build all fine.
> > 
> > The SeaBIOS build was updated to work around this with commit
> > 99e3316d59.  What version of SeaBIOS are you attempting to build?
> > 
> I'm building what Xen's build system tries to build by default which
> appears to be e2fc41e24ee0ada aka rel-1.9.3.
> 
> Wei, maybe we need to update to another changeset/release?
> 

I'm inclined to believe this is a gcc issue. It was discovered because
Debian enables -fpie by default when building.

After going through the list of changes between 1.9.3 and 1.10.0, I think
I would be fine with updating our in-tree version to that.

Wei.

> Regards,
> Dario
> -- 
> <> (Raistlin Majere)
> -
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



> ___
> SeaBIOS mailing list
> seab...@seabios.org
> https://www.coreboot.org/mailman/listinfo/seabios


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xsm: add missing permissions discovered in testing

2016-11-06 Thread Wei Liu
On Fri, Nov 04, 2016 at 11:35:20AM -0400, Daniel De Graaf wrote:
> Add two missing allow rules:
> 
> 1. Device model domain construction uses getvcpucontext, discovered by
> Andrew Cooper in an (apparently) unrelated bisection.
> 
> 2. When a domain is destroyed with a device passthrough active, the
> calls to remove_{irq,ioport,iomem} can be made by the hypervisor itself
> (which results in an XSM check with the source xen_t).  It does not make
> sense to deny these permissions; no domain should be using xen_t, and
> forbidding the hypervisor from performing cleanup is not useful.
> 
> Signed-off-by: Daniel De Graaf 
> Cc: Andrew Cooper 

Acked-by: Wei Liu 

I will pick this up for 4.8.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 101965: regressions - FAIL

2016-11-06 Thread osstest service owner
flight 101965 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101965/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt 11 guest-start  fail REGR. vs. 101909
 test-amd64-i386-libvirt-xsm  11 guest-start  fail REGR. vs. 101909
 test-amd64-i386-libvirt  11 guest-start  fail REGR. vs. 101909
 test-amd64-i386-libvirt-pair 20 guest-start/debian   fail REGR. vs. 101909
 test-amd64-amd64-libvirt-xsm 11 guest-start  fail REGR. vs. 101909
 test-amd64-amd64-libvirt-vhd  9 debian-di-installfail REGR. vs. 101909
 test-amd64-amd64-xl-qcow2 9 debian-di-installfail REGR. vs. 101909
 test-amd64-amd64-libvirt-pair 20 guest-start/debian  fail REGR. vs. 101909
 test-armhf-armhf-libvirt-xsm 11 guest-start  fail REGR. vs. 101909
 test-armhf-armhf-xl-vhd   9 debian-di-installfail REGR. vs. 101909
 test-armhf-armhf-libvirt 11 guest-start  fail REGR. vs. 101909
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail REGR. vs. 101909
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install  fail REGR. vs. 101909

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm7 host-ping-check-xen fail in 101954 pass in 101965
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat  fail pass in 101954

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 101909
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 101909
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 101909

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass

version targeted for testing:
 qemuu 9226682a401f34b10fd79dfe17ba334da0800747
baseline version:
 qemuu 199a5bde46b0eab898ab1ec591f423000302569f

Last test of basis   101909  2016-11-03 23:21:40 Z    3 days
Testing same since   101943  2016-11-04 22:40:48 Z    2 days    3 attempts


People who touched revisions under test:
  Olaf Hering 
  Sander Eikelenboom 
  Stefan Hajnoczi 
  Stefano Stabellini 
  Thomas Huth 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass

[Xen-devel] [linux-3.4 test] 101964: regressions - FAIL

2016-11-06 Thread osstest service owner
flight 101964 linux-3.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101964/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl   6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 
92983
 test-amd64-amd64-libvirt-vhd  6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-xl-qcow2 6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-bootfail REGR. vs. 92983
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-xl-qemuu-winxpsp3  6 xen-boot fail REGR. vs. 92983
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot   fail REGR. vs. 92983
 test-amd64-i386-xl6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-xl-xsm   6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  6 xen-boot  fail REGR. vs. 92983

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail in 101695 pass 
in 101964
 test-amd64-amd64-i386-pvgrub  6 xen-boot   fail pass in 101695
 test-amd64-amd64-xl-rtds  6 xen-boot   fail pass in 101695
 test-amd64-i386-freebsd10-amd64  6 xen-bootfail pass in 101695
 test-amd64-i386-pair  9 xen-boot/src_host  fail pass in 101720
 test-amd64-i386-pair 10 xen-boot/dst_host  fail pass in 101720
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-boot   fail pass in 101822
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-boot   fail pass in 101840
 test-amd64-i386-xl-qemut-winxpsp3  6 xen-boot  fail pass in 101840
 test-amd64-amd64-amd64-pvgrub  6 xen-boot  fail pass in 101867
 test-amd64-i386-libvirt-pair  9 xen-boot/src_host  fail pass in 101867
 test-amd64-i386-libvirt-pair 10 xen-boot/dst_host  fail pass in 101867
 test-amd64-amd64-libvirt-pair  9 xen-boot/src_host fail pass in 101951
 test-amd64-amd64-libvirt-pair 10 xen-boot/dst_host fail pass in 101951
 test-amd64-amd64-pair 9 xen-boot/src_host  fail pass in 101951
 test-amd64-amd64-pair10 xen-boot/dst_host  fail pass in 101951

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 92983
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 92983
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 92983

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)   blocked  n/a
 build-amd64-rumprun   7 xen-buildfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 build-i386-rumprun7 xen-buildfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass

version targeted for testing:
 linux 8d1988f838a95e836342b505398d38b223181f17
baseline version:
 linux 343a5fbeef08baf2097b8cf4e26137cebe3cfef4

Last test of basis    92983  2016-04-27 16:21:44 Z  193 days
Testing same since   101695  2016-10-26 18:26:23 Z   11 days   17 attempts


People who touched revisions under test:
  "Suzuki K. Poulose" 
  Aaro Koskinen 
  Al Viro 
  Alan Stern 
  Aleksander Morgado 
  Alex Thorlton 
  Alexandru Cornea 
  Alexey Khoroshilov 
  Amitkumar Karwar 
  Andrew Banman 
  Andrew Morton 
  Andrey Ryabinin 
  Anson Huang 
  Arnaldo Carvalho de Melo 
  

[Xen-devel] [PATCH 03/10] pvh: Set online VCPU map to avail_vcpus

2016-11-06 Thread Boris Ostrovsky
The ACPI builder marks VCPUs set in the vcpu_online map as enabled in the MADT.
With ACPI-based CPU hotplug we only want VCPUs that are started by
the guest to be marked as such. The remaining VCPUs will be marked
"enabled" by ACPI code during hotplug.

Signed-off-by: Boris Ostrovsky 
---
 tools/libxl/libxl_x86_acpi.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
index ff0e2df..949f555 100644
--- a/tools/libxl/libxl_x86_acpi.c
+++ b/tools/libxl/libxl_x86_acpi.c
@@ -98,7 +98,7 @@ static int init_acpi_config(libxl__gc *gc,
 uint32_t domid = dom->guest_domid;
 xc_dominfo_t info;
 struct hvm_info_table *hvminfo;
-int i, rc = 0;
+int rc = 0;
 
 config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
 config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
@@ -144,8 +144,8 @@ static int init_acpi_config(libxl__gc *gc,
 hvminfo->nr_vcpus = info.max_vcpu_id + 1;
 }
 
-for (i = 0; i < hvminfo->nr_vcpus; i++)
-hvminfo->vcpu_online[i / 8] |= 1 << (i & 7);
+memcpy(hvminfo->vcpu_online, b_info->avail_vcpus.map,
+   b_info->avail_vcpus.size);
 
 config->hvminfo = hvminfo;
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 07/10] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses

2016-11-06 Thread Boris Ostrovsky
Having no IOREQ server installed for an HVM guest (as indicated
by HVM_PARAM_NR_IOREQ_SERVER_PAGES being set to zero) implies
a PVH guest. For these guests, ACPI-related IO accesses need to
be handled by the hypervisor.

Logic for the handler will be provided by a later patch.

Signed-off-by: Boris Ostrovsky 
---
CC: Paul Durrant 
---
 tools/libxc/xc_dom_x86.c|  3 +++
 xen/arch/x86/hvm/hvm.c  | 13 +
 xen/arch/x86/hvm/ioreq.c| 17 +
 xen/include/asm-x86/hvm/ioreq.h |  1 +
 4 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index 7fcdee1..0017694 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -649,6 +649,9 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
 /* Limited to one module. */
 if ( dom->ramdisk_blob )
 start_info_size += sizeof(struct hvm_modlist_entry);
+
+/* No IOREQ server for PVH guests. */
+xc_hvm_param_set(xch, domid, HVM_PARAM_NR_IOREQ_SERVER_PAGES, 0);
 }
 else
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 704fd64..6f8439d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5206,14 +5206,19 @@ static int hvmop_set_param(
 {
 unsigned int i;
 
-if ( a.value == 0 ||
- a.value > sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8 )
+if ( a.value > sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8 )
 {
 rc = -EINVAL;
 break;
 }
-for ( i = 0; i < a.value; i++ )
-set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
+
+if ( a.value == 0 ) /* PVH guest */
+acpi_ioreq_init(d);
+else
+{
+for ( i = 0; i < a.value; i++ )
+set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
+}
 
 break;
 }
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d2245e2..171ea82 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1389,6 +1389,23 @@ void hvm_ioreq_init(struct domain *d)
 register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
+static int acpi_ioaccess(
+int dir, unsigned int port, unsigned int bytes, uint32_t *val)
+{
+return X86EMUL_OKAY;
+}
+
+void acpi_ioreq_init(struct domain *d)
+{
+/* Online CPU map, see DSDT's PRST region. */
+register_portio_handler(d, 0xaf00, HVM_MAX_VCPUS/8, acpi_ioaccess);
+
+register_portio_handler(d, ACPI_GPE0_BLK_ADDRESS_V1,
+ACPI_GPE0_BLK_LEN_V1, acpi_ioaccess);
+register_portio_handler(d, ACPI_PM1A_EVT_BLK_ADDRESS_V1,
+ACPI_PM1A_EVT_BLK_LEN, acpi_ioaccess);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index fbf2c74..e7b7f52 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -53,6 +53,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t 
*proto_p,
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool_t buffered);
 
 void hvm_ioreq_init(struct domain *d);
+void acpi_ioreq_init(struct domain *d);
 
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 02/10] acpi: Define ACPI IO registers for PVH guests

2016-11-06 Thread Boris Ostrovsky
ACPI hotplug-related IO accesses (to the GPE0 block) are handled
by qemu for HVM guests. Since PVH guests don't have qemu, these
accesses will need to be processed by the hypervisor.

Because the ACPI event model expects the pm1a block to be present,
we need to have the hypervisor emulate it as well.

Signed-off-by: Boris Ostrovsky 
---
 tools/libacpi/static_tables.c| 28 +++-
 xen/include/asm-x86/hvm/domain.h |  6 ++
 xen/include/public/hvm/ioreq.h   |  3 +++
 3 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/tools/libacpi/static_tables.c b/tools/libacpi/static_tables.c
index 617bf68..413abcc 100644
--- a/tools/libacpi/static_tables.c
+++ b/tools/libacpi/static_tables.c
@@ -20,6 +20,8 @@
  * Firmware ACPI Control Structure (FACS).
  */
 
+#define ACPI_REG_BIT_OFFSET0
+
 struct acpi_20_facs Facs = {
 .signature = ACPI_2_0_FACS_SIGNATURE,
 .length= sizeof(struct acpi_20_facs),
@@ -30,14 +32,6 @@ struct acpi_20_facs Facs = {
 /*
  * Fixed ACPI Description Table (FADT).
  */
-
-#define ACPI_PM1A_EVT_BLK_BIT_WIDTH 0x20
-#define ACPI_PM1A_EVT_BLK_BIT_OFFSET0x00
-#define ACPI_PM1A_CNT_BLK_BIT_WIDTH 0x10
-#define ACPI_PM1A_CNT_BLK_BIT_OFFSET0x00
-#define ACPI_PM_TMR_BLK_BIT_WIDTH   0x20
-#define ACPI_PM_TMR_BLK_BIT_OFFSET  0x00
-
 struct acpi_20_fadt Fadt = {
 .header = {
 .signature= ACPI_2_0_FADT_SIGNATURE,
@@ -56,9 +50,9 @@ struct acpi_20_fadt Fadt = {
 .pm1a_cnt_blk = ACPI_PM1A_CNT_BLK_ADDRESS_V1,
 .pm_tmr_blk = ACPI_PM_TMR_BLK_ADDRESS_V1,
 .gpe0_blk = ACPI_GPE0_BLK_ADDRESS_V1,
-.pm1_evt_len = ACPI_PM1A_EVT_BLK_BIT_WIDTH / 8,
-.pm1_cnt_len = ACPI_PM1A_CNT_BLK_BIT_WIDTH / 8,
-.pm_tmr_len = ACPI_PM_TMR_BLK_BIT_WIDTH / 8,
+.pm1_evt_len = ACPI_PM1A_EVT_BLK_LEN,
+.pm1_cnt_len = ACPI_PM1A_CNT_BLK_LEN,
+.pm_tmr_len = ACPI_PM_TMR_BLK_LEN,
 .gpe0_blk_len = ACPI_GPE0_BLK_LEN_V1,
 
 .p_lvl2_lat = 0x0fff, /* >100,  means we do not support C2 state */
@@ -79,22 +73,22 @@ struct acpi_20_fadt Fadt = {
 
 .x_pm1a_evt_blk = {
 .address_space_id= ACPI_SYSTEM_IO,
-.register_bit_width  = ACPI_PM1A_EVT_BLK_BIT_WIDTH,
-.register_bit_offset = ACPI_PM1A_EVT_BLK_BIT_OFFSET,
+.register_bit_width  = ACPI_PM1A_EVT_BLK_LEN * 8,
+.register_bit_offset = ACPI_REG_BIT_OFFSET,
 .address = ACPI_PM1A_EVT_BLK_ADDRESS_V1,
 },
 
 .x_pm1a_cnt_blk = {
 .address_space_id= ACPI_SYSTEM_IO,
-.register_bit_width  = ACPI_PM1A_CNT_BLK_BIT_WIDTH,
-.register_bit_offset = ACPI_PM1A_CNT_BLK_BIT_OFFSET,
+.register_bit_width  = ACPI_PM1A_CNT_BLK_LEN * 8,
+.register_bit_offset = ACPI_REG_BIT_OFFSET,
 .address = ACPI_PM1A_CNT_BLK_ADDRESS_V1,
 },
 
 .x_pm_tmr_blk = {
 .address_space_id= ACPI_SYSTEM_IO,
-.register_bit_width  = ACPI_PM_TMR_BLK_BIT_WIDTH,
-.register_bit_offset = ACPI_PM_TMR_BLK_BIT_OFFSET,
+.register_bit_width  = ACPI_PM_TMR_BLK_LEN * 8,
+.register_bit_offset = ACPI_REG_BIT_OFFSET,
 .address = ACPI_PM_TMR_BLK_ADDRESS_V1,
 }
 };
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f34d784..f492a2b 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -87,6 +87,12 @@ struct hvm_domain {
 } ioreq_server;
 struct hvm_ioreq_server *default_ioreq_server;
 
+/* PVH guests */
+struct {
+uint8_t pm1a[ACPI_PM1A_EVT_BLK_LEN];
+uint8_t gpe[ACPI_GPE0_BLK_LEN_V1];
+} acpi_io;
+
 /* Cached CF8 for guest PCI config cycles */
 uint32_t pci_cf8;
 
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index 2e5809b..c36dd0f 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -124,6 +124,9 @@ typedef struct buffered_iopage buffered_iopage_t;
 #define ACPI_GPE0_BLK_ADDRESSACPI_GPE0_BLK_ADDRESS_V0
 #define ACPI_GPE0_BLK_LENACPI_GPE0_BLK_LEN_V0
 
+#define ACPI_PM1A_EVT_BLK_LEN0x04
+#define ACPI_PM1A_CNT_BLK_LEN0x02
+#define ACPI_PM_TMR_BLK_LEN  0x04
 
 #endif /* _IOREQ_H_ */
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 10/10] pvh: Send an SCI on VCPU hotplug event

2016-11-06 Thread Boris Ostrovsky
.. and update GPE0 registers.

Signed-off-by: Boris Ostrovsky 
---
 xen/arch/x86/domctl.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 78b7d4b..8151fd7 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1439,6 +1439,18 @@ long arch_do_domctl(
 break;
 
 d->arch.avail_vcpus = num;
+
+/*
+ * For PVH guests we need to send an SCI and set enable/status
+ * bits in GPE block (DSDT specifies _E02, so it's bit 2).
+ */
+if ( is_hvm_domain(d) && d->arch.hvm_domain.ioreq_gmfn.mask == 0 )
+{
+d->arch.hvm_domain.acpi_io.gpe[2] =
+d->arch.hvm_domain.acpi_io.gpe[0] = 4;
+send_guest_vcpu_virq(d->vcpu[0], VIRQ_SCI);
+}
+
 ret = 0;
 break;
 }
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 00/10] PVH VCPU hotplug support

2016-11-06 Thread Boris Ostrovsky
This series adds support for ACPI-based VCPU hotplug for unprivileged
PVH guests.

A new domctl, XEN_DOMCTL_set_avail_vcpus, is introduced; it is called during
guest creation and in response to the 'xl vcpu-set' command. This domctl
updates GPE0's status and enable registers and sends an SCI to the
guest using the (newly added) VIRQ_SCI.
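
For illustration (not part of this series): below is a minimal sketch of how a
toolstack helper might drive the new domctl through the libxc wrapper added in
patch 01, xc_domain_set_avail_vcpus(). The helper name and the error handling
are assumptions of this sketch, not code from the series:

#include <stdio.h>
#include <xenctrl.h>

/*
 * Hypothetical helper: tell Xen how many VCPUs are now available to the
 * guest, e.g. in response to 'xl vcpu-set'.  Only xc_domain_set_avail_vcpus()
 * comes from patch 01; everything else here is illustrative.
 */
static int set_avail_vcpus_example(uint32_t domid, unsigned int num_vcpus)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int rc;

    if ( !xch )
        return -1;

    rc = xc_domain_set_avail_vcpus(xch, domid, num_vcpus);
    if ( rc )
        fprintf(stderr, "set_avail_vcpus failed for domain %u: %d\n",
                domid, rc);

    xc_interface_close(xch);
    return rc;
}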


Boris Ostrovsky (10):
  x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  acpi: Define ACPI IO registers for PVH guests
  pvh: Set online VCPU map to avail_vcpus
  acpi: Power and Sleep ACPI buttons are not emulated
  acpi: Make pmtimer optional in FADT
  acpi: PVH guests need _E02 method
  pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  pvh/acpi: Handle ACPI accesses for PVH guests
  events/x86: Define SCI virtual interrupt
  pvh: Send an SCI on VCPU hotplug event

 tools/firmware/hvmloader/util.c   |  3 +-
 tools/flask/policy/modules/dom0.te|  2 +-
 tools/flask/policy/modules/xen.if |  4 +-
 tools/libacpi/build.c |  5 +++
 tools/libacpi/libacpi.h   |  1 +
 tools/libacpi/mk_dsdt.c   | 10 ++---
 tools/libacpi/static_tables.c | 31 ++---
 tools/libxc/include/xenctrl.h |  5 +++
 tools/libxc/xc_dom_x86.c  | 14 ++
 tools/libxl/libxl.c   | 10 -
 tools/libxl/libxl_arch.h  |  4 ++
 tools/libxl/libxl_arm.c   |  6 +++
 tools/libxl/libxl_dom.c   |  7 +++
 tools/libxl/libxl_x86.c   |  6 +++
 tools/libxl/libxl_x86_acpi.c  |  6 +--
 xen/arch/x86/domctl.c | 25 +++
 xen/arch/x86/hvm/hvm.c| 13 --
 xen/arch/x86/hvm/ioreq.c  | 83 +++
 xen/include/asm-x86/domain.h  |  6 +++
 xen/include/asm-x86/event.h   |  3 +-
 xen/include/asm-x86/hvm/domain.h  |  6 +++
 xen/include/asm-x86/hvm/ioreq.h   |  1 +
 xen/include/public/arch-x86/xen-mca.h |  2 -
 xen/include/public/arch-x86/xen.h |  3 ++
 xen/include/public/domctl.h   |  9 
 xen/include/public/hvm/ioreq.h|  3 ++
 xen/xsm/flask/hooks.c |  3 ++
 xen/xsm/flask/policy/access_vectors   |  2 +
 28 files changed, 235 insertions(+), 38 deletions(-)

-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 04/10] acpi: Power and Sleep ACPI buttons are not emulated

2016-11-06 Thread Boris Ostrovsky
.. for PVH guests. However, since emulating them for HVM guests
also doesn't seem useful, we can have the FADT disable those buttons
for both types of guests.

Signed-off-by: Boris Ostrovsky 
---
 tools/libacpi/static_tables.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/libacpi/static_tables.c b/tools/libacpi/static_tables.c
index 413abcc..ebe8ffe 100644
--- a/tools/libacpi/static_tables.c
+++ b/tools/libacpi/static_tables.c
@@ -61,7 +61,8 @@ struct acpi_20_fadt Fadt = {
 .flags = (ACPI_PROC_C1 |
   ACPI_WBINVD |
   ACPI_FIX_RTC | ACPI_TMR_VAL_EXT |
-  ACPI_USE_PLATFORM_CLOCK),
+  ACPI_USE_PLATFORM_CLOCK |
+  ACPI_PWR_BUTTON | ACPI_SLP_BUTTON),
 
 .reset_reg = {
 .address_space_id= ACPI_SYSTEM_IO,
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 01/10] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus

2016-11-06 Thread Boris Ostrovsky
This domctl is called when a VCPU is hot-(un)plugged into a guest (via
'xl vcpu-set'). While currently it is only expected to be needed by
PVH guests, we will call this domctl for all (x86) guests for consistency.

Signed-off-by: Boris Ostrovsky 
---
CC: Daniel De Graaf 
---
 tools/flask/policy/modules/dom0.te  |  2 +-
 tools/flask/policy/modules/xen.if   |  4 ++--
 tools/libxc/include/xenctrl.h   |  5 +
 tools/libxc/xc_dom_x86.c| 11 +++
 tools/libxl/libxl.c | 10 +-
 tools/libxl/libxl_arch.h|  4 
 tools/libxl/libxl_arm.c |  6 ++
 tools/libxl/libxl_dom.c |  7 +++
 tools/libxl/libxl_x86.c |  6 ++
 xen/arch/x86/domctl.c   | 13 +
 xen/include/asm-x86/domain.h|  6 ++
 xen/include/public/domctl.h |  9 +
 xen/xsm/flask/hooks.c   |  3 +++
 xen/xsm/flask/policy/access_vectors |  2 ++
 14 files changed, 84 insertions(+), 4 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te 
b/tools/flask/policy/modules/dom0.te
index 2d982d9..fd60c39 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -38,7 +38,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
set_cpuid gettsc settsc setscheduler set_max_evtchn set_vnumainfo
-   get_vnumainfo psr_cmt_op psr_cat_op
+   get_vnumainfo psr_cmt_op psr_cat_op set_avail_vcpus
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/tools/flask/policy/modules/xen.if 
b/tools/flask/policy/modules/xen.if
index d83f031..0ac4c5b 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,7 @@ define(`create_domain_common', `
settime setdomainhandle };
allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
-   psr_cmt_op psr_cat_op soft_reset };
+   psr_cmt_op psr_cat_op soft_reset set_avail_vcpus};
allow $1 $2:security check_context;
allow $1 $2:shadow enable;
allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage 
mmuext_op updatemp };
@@ -85,7 +85,7 @@ define(`manage_domain', `
getaddrsize pause unpause trigger shutdown destroy
setaffinity setdomainmaxmem getscheduler resume
setpodtarget getpodtarget };
-allow $1 $2:domain2 set_vnumainfo;
+allow $1 $2:domain2 { set_vnumainfo set_avail_vcpus };
 ')
 
 # migrate_domain_out(priv, target)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2c83544..49e9b9f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1256,6 +1256,11 @@ int xc_domain_getvnuma(xc_interface *xch,
 int xc_domain_soft_reset(xc_interface *xch,
  uint32_t domid);
 
+int xc_domain_set_avail_vcpus(xc_interface *xch,
+  uint32_t domid,
+  unsigned int num_vcpus);
+
+
 #if defined(__i386__) || defined(__x86_64__)
 /*
  * PC BIOS standard E820 types and structure.
diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index 0eab8a7..7fcdee1 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -125,6 +125,17 @@ const char *xc_domain_get_native_protocol(xc_interface 
*xch,
 return protocol;
 }
 
+int xc_domain_set_avail_vcpus(xc_interface *xch,
+  uint32_t domid,
+  unsigned int num_vcpus)
+{
+DECLARE_DOMCTL;
+domctl.cmd = XEN_DOMCTL_set_avail_vcpus;
+domctl.domain = (domid_t)domid;
+domctl.u.avail_vcpus.num = num_vcpus;
+return do_domctl(xch, &domctl);
+}
+
 static int count_pgtables(struct xc_dom_image *dom, xen_vaddr_t from,
   xen_vaddr_t to, xen_pfn_t pfn)
 {
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 33c5e4c..9b94413 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5148,11 +5148,12 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t 
domid, libxl_bitmap *cpumap)
 case LIBXL_DOMAIN_TYPE_HVM:
 switch (libxl__device_model_version_running(gc, domid)) {
 case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
-case LIBXL_DEVICE_MODEL_VERSION_NONE:
 rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
 break;
 case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
 rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap, &info);
+/* fallthrough */
+case LIBXL_DEVICE_MODEL_VERSION_NONE:
 break;
 default:
 rc = ERROR_INVAL;
@@ -5164,6 +5165,13 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, 
libxl_bitmap *cpumap)
 default:
 rc = ERROR_INVAL;

[Xen-devel] [PATCH 09/10] events/x86: Define SCI virtual interrupt

2016-11-06 Thread Boris Ostrovsky
PVH guests do not have an IOAPIC, which is what typically generates an SCI.
For those guests the SCI will be provided as a virtual interrupt.

We also move the VIRQ_MCA definition out of xen-mca.h to
keep all x86-specific VIRQ_ARCH_* definitions in one place.
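
For illustration (not part of this patch): a rough sketch of how a Linux PVH
guest might consume the new virtual interrupt, using the existing
bind_virq_to_irqhandler() event-channel API. VIRQ_SCI and the idea of binding
it on VCPU 0 (matching the send_guest_vcpu_virq() call in patch 10) come from
this series; the handler and everything else below are assumptions:

#include <linux/init.h>
#include <linux/interrupt.h>
#include <xen/events.h>
#include <xen/interface/xen.h>

static irqreturn_t xen_acpi_sci_interrupt(int irq, void *dev_id)
{
    /* A real handler would kick the guest's ACPI/GPE processing here. */
    return IRQ_HANDLED;
}

static int __init xen_bind_virq_sci(void)
{
    /* VIRQ_SCI is per-VCPU; this series delivers it to VCPU 0. */
    int irq = bind_virq_to_irqhandler(VIRQ_SCI, 0, xen_acpi_sci_interrupt,
                                      0, "xen-acpi-sci", NULL);

    return irq < 0 ? irq : 0;
}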

Signed-off-by: Boris Ostrovsky 
---
 xen/include/asm-x86/event.h   | 3 ++-
 xen/include/public/arch-x86/xen-mca.h | 2 --
 xen/include/public/arch-x86/xen.h | 3 +++
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-x86/event.h b/xen/include/asm-x86/event.h
index a82062e..9cad8e3 100644
--- a/xen/include/asm-x86/event.h
+++ b/xen/include/asm-x86/event.h
@@ -38,9 +38,10 @@ static inline void local_event_delivery_enable(void)
 vcpu_info(current, evtchn_upcall_mask) = 0;
 }
 
-/* No arch specific virq definition now. Default to global. */
 static inline int arch_virq_is_global(uint32_t virq)
 {
+if ( virq == VIRQ_SCI )
+   return 0;
 return 1;
 }
 
diff --git a/xen/include/public/arch-x86/xen-mca.h 
b/xen/include/public/arch-x86/xen-mca.h
index a97e821..b76c53c 100644
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -91,8 +91,6 @@
 
 #ifndef __ASSEMBLY__
 
-#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
-
 /*
  * Machine Check Architecure:
  * structs are read-only and used to report all kinds of
diff --git a/xen/include/public/arch-x86/xen.h 
b/xen/include/public/arch-x86/xen.h
index cdd93c1..bffa3e0 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -293,6 +293,9 @@ struct xen_arch_domainconfig {
 };
 #endif
 
+#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
+#define VIRQ_SCI VIRQ_ARCH_1 /* V. (PVH) ACPI interrupt */
+
 #endif /* !__ASSEMBLY__ */
 
 /*
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 08/10] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-06 Thread Boris Ostrovsky
Signed-off-by: Boris Ostrovsky 
---
CC: Paul Durrant 
---
 xen/arch/x86/hvm/ioreq.c | 66 
 1 file changed, 66 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 171ea82..ced7c92 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1392,6 +1392,72 @@ void hvm_ioreq_init(struct domain *d)
 static int acpi_ioaccess(
 int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
+unsigned int i;
+unsigned int bits = bytes * 8;
+uint8_t *reg = NULL;
+unsigned idx = port & 3;
+bool is_cpu_map = 0;
+struct domain *currd = current->domain;
+
+BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
+ (ACPI_GPE0_BLK_LEN_V1 != 4));
+
+switch (port)
+{
+case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
+(ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1):
+reg = currd->arch.hvm_domain.acpi_io.pm1a;
+break;
+case ACPI_GPE0_BLK_ADDRESS_V1 ...
+(ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1):
+reg = currd->arch.hvm_domain.acpi_io.gpe;
+break;
+case 0xaf00 ... (0xaf00 + HVM_MAX_VCPUS/8 - 1):
+is_cpu_map = 1;
+break;
+default:
+return X86EMUL_UNHANDLEABLE;
+}
+
+if ( bytes == 0 )
+return X86EMUL_OKAY;
+
+if ( dir == IOREQ_READ )
+{
+*val &= ~((1U << bits) - 1);
+
+if ( is_cpu_map )
+{
+unsigned first_bit, last_bit;
+
+first_bit = (port - 0xaf00) * 8;
+last_bit = min(currd->arch.avail_vcpus, first_bit + bits);
+for (i = first_bit; i < last_bit; i++)
+*val |= (1U << (i - first_bit));
+}
+else
+memcpy(val, &reg[idx], bytes);
+}
+else
+{
+if ( is_cpu_map )
+/* CPU map should not be written. */
+return X86EMUL_UNHANDLEABLE;
+
+/* Write either status or enable register. */
+if ( (bytes > 2) || ((bytes == 2) && (port & 1)) )
+return X86EMUL_UNHANDLEABLE;
+
+if ( idx < 2 ) /* status, write 1 to clear. */
+{
+reg[idx] &= ~(*val & 0xff);
+if ( bytes == 2 )
+reg[idx + 1] &= ~((*val >> 8) & 0xff);
+}
+else   /* enable */
+memcpy(&reg[idx], val, bytes);
+}
+
 return X86EMUL_OKAY;
 }
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 05/10] acpi: Make pmtimer optional in FADT

2016-11-06 Thread Boris Ostrovsky
PM timer is not supported by PVH guests.

Signed-off-by: Boris Ostrovsky 
---
 tools/firmware/hvmloader/util.c | 3 ++-
 tools/libacpi/build.c   | 5 +
 tools/libacpi/libacpi.h | 1 +
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 6e0cfe7..1d78973 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -948,7 +948,8 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
 if ( !strncmp(xenstore_read("platform/acpi_s4", "1"), "1", 1)  )
 config->table_flags |= ACPI_HAS_SSDT_S4;
 
-config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC | ACPI_HAS_WAET);
+config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
+ACPI_HAS_WAET | ACPI_HAS_PMTIMER);
 
 config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
 
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index 47dae01..58822d3 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -574,6 +574,11 @@ int acpi_build_tables(struct acpi_ctxt *ctxt, struct 
acpi_config *config)
 
 fadt = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_fadt), 16);
 if (!fadt) goto oom;
+if ( !(config->table_flags & ACPI_HAS_PMTIMER) )
+{
+Fadt.pm_tmr_blk = 0;
+memset(&Fadt.x_pm_tmr_blk, 0, sizeof(Fadt.x_pm_tmr_blk));
+}
 memcpy(fadt, , sizeof(struct acpi_20_fadt));
 fadt->dsdt   = ctxt->mem_ops.v2p(ctxt, dsdt);
 fadt->x_dsdt = ctxt->mem_ops.v2p(ctxt, dsdt);
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index 1d388f9..bda692e 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -30,6 +30,7 @@
 #define ACPI_HAS_TCPA(1<<7)
 #define ACPI_HAS_IOAPIC  (1<<8)
 #define ACPI_HAS_WAET(1<<9)
+#define ACPI_HAS_PMTIMER (1<<10)
 
 struct xen_vmemrange;
 struct acpi_numa {
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 06/10] acpi: PVH guests need _E02 method

2016-11-06 Thread Boris Ostrovsky
This is the method that will get invoked on an SCI.

Signed-off-by: Boris Ostrovsky 
---
 tools/libacpi/mk_dsdt.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index 4ae68bc..407386a 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -280,11 +280,6 @@ int main(int argc, char **argv)
 
 pop_block();
 
-if (dm_version == QEMU_NONE) {
-pop_block();
-return 0;
-}
-
 /* Define GPE control method. */
 push_block("Scope", "\\_GPE");
 push_block("Method",
@@ -292,6 +287,11 @@ int main(int argc, char **argv)
 stmt("\\_SB.PRSC ()", NULL);
 pop_block();
 pop_block();
+
+if (dm_version == QEMU_NONE) {
+pop_block();
+return 0;
+}
/**** Processor end ****/
 
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-3.10 test] 101958: regressions - FAIL

2016-11-06 Thread osstest service owner
flight 101958 linux-3.10 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101958/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-amd64-pvgrub  6 xen-bootfail REGR. vs. 100648
 test-amd64-amd64-xl   6 xen-boot fail REGR. vs. 100648
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 100648
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-boot fail REGR. vs. 100648
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 100648
 test-amd64-i386-xl-xsm6 xen-boot fail REGR. vs. 100648
 test-amd64-amd64-xl-credit2   6 xen-boot fail REGR. vs. 100648
 test-amd64-i386-xl6 xen-boot fail REGR. vs. 100648

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64 3 host-install(3) broken in 101947 pass in 
101958
 test-amd64-i386-libvirt-xsm   9 debian-install   fail in 101783 pass in 101958
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail pass in 101576
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail pass in 101594
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail pass in 
101663
 test-amd64-i386-libvirt   6 xen-boot   fail pass in 101680
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  6 xen-boot  fail pass in 101680
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail pass in 
101731
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-boot   fail pass in 101731
 test-amd64-amd64-libvirt-pair  9 xen-boot/src_host fail pass in 101731
 test-amd64-amd64-libvirt-pair 10 xen-boot/dst_host fail pass in 101731
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-boot fail pass in 101783
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-boot   fail pass in 101783
 test-amd64-i386-libvirt-pair  9 xen-boot/src_host  fail pass in 101800
 test-amd64-i386-libvirt-pair 10 xen-boot/dst_host  fail pass in 101800
 test-amd64-i386-pair  9 xen-boot/src_host  fail pass in 101800
 test-amd64-i386-pair 10 xen-boot/dst_host  fail pass in 101800
 test-amd64-amd64-qemuu-nested-intel  6 xen-bootfail pass in 101814
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail pass in 
101828
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-boot   fail pass in 101828
 test-amd64-amd64-pair 9 xen-boot/src_host  fail pass in 101828
 test-amd64-amd64-pair10 xen-boot/dst_host  fail pass in 101828
 test-amd64-i386-freebsd10-i386  6 xen-boot fail pass in 101837
 test-amd64-amd64-xl-multivcpu  6 xen-boot  fail pass in 101837
 test-amd64-i386-xl-qemuu-winxpsp3  6 xen-boot  fail pass in 101844
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-boot   fail pass in 101844
 test-amd64-amd64-xl-xsm   6 xen-boot   fail pass in 101856
 test-amd64-i386-freebsd10-amd64  6 xen-bootfail pass in 101947

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 15 
guest-localmigrate/x10 fail in 101594 like 100646
 build-i386-rumprun5 rumprun-build fail in 101663 baseline untested
 build-amd64-rumprun   5 rumprun-build fail in 101663 baseline untested
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 100648
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 100648

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt 12 migrate-support-check fail in 101680 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail in 101680 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail in 101800 never pass
 build-i386-rumprun7 xen-buildfail   never pass
 build-amd64-rumprun   7 xen-buildfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass

version targeted for testing:
 linux 7828a9658951301a3fd83daa4ed0a607d370399e
baseline version:
 linux

[Xen-devel] [xen-unstable test] 101961: tolerable FAIL

2016-11-06 Thread osstest service owner
flight 101961 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101961/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   5 xen-installfail pass in 101952
 test-armhf-armhf-libvirt-raw 14 guest-start/debian.repeat  fail pass in 101952
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat  fail pass in 101952

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 101952
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 101952
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 101952
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 101952
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 101952
 test-armhf-armhf-libvirt-qcow2 12 saverestore-support-check   fail like 101952
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 101952
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 101952
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 101952

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail in 101952 never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 101952 never 
pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 build-i386-rumprun7 xen-buildfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 build-amd64-rumprun   7 xen-buildfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  3ebe9a1a826e8d569bef6045777cc01a5699933d
baseline version:
 xen  3ebe9a1a826e8d569bef6045777cc01a5699933d

Last test of basis   101961  2016-11-06 05:39:47 Z    0 days
Testing same since        0  1970-01-01 00:00:00 Z 17111 days    0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt  

Re: [Xen-devel] [Intel-gfx] [Announcement] 2016-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2016-11-06 Thread Jike Song
Hi all,

We are pleased to announce another update of Intel GVT-g for Xen.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT).


Repositories

-Xen: https://github.com/01org/igvtg-xen (2016q3-4.6 branch)
-Kernel: https://github.com/01org/igvtg-kernel (2016q3-4.3.0 branch)
-Qemu: https://github.com/01org/igvtg-qemu (2016q3-2.3.0 branch)


This update consists of:

-Preliminary support for a new platform: 7th generation Intel® Core™ processors. 
For Windows guests, only Win10 RedStone 64-bit is supported.

-Windows 10 RedStone guest support

-Windows guest QoS preliminary support: administrators are now able to 
control the maximum amount of vGPU resources consumed by each VM, from 
1% to 99%.

-Display virtualization preliminary support: besides tracking display 
register accesses in the guest VM, irrelevant display pipeline information 
between host and guest VM is removed.

-Live migration and savevm/restorevm preliminary support on BDW with 2D/3D 
workloads running.



Known issues:

-   At least 2GB of memory is suggested for the guest virtual machine (win7-32/64, 
win8.1-64, win10-64) to run most 3D workloads

-   Windows 8 and later: fast boot is not supported; the workaround is to 
disable power S3/S4 in the HVM config file by adding “acpi_S3=0, acpi_S4=0”

-   Sometimes when dom0 and the guest both have heavy workloads, i915 in dom0 will 
trigger a spurious TDR. The workaround is to disable the dom0 hangcheck in the 
dom0 grub file by adding “i915.enable_hangcheck=0”

-   Stability: when the QoS feature is enabled, a Windows guest full GPU reset is 
often triggered during MTBF testing. This bug will be fixed in the next release

-   A Windows guest running OpenCL allocations can cause a host crash; the 
workaround is to disable logd in the dom0 grub file by adding “i915.logd_enable=0”


Next update will be around early Jan, 2017.


GVT-g project portal: https://01.org/igvt-g
Please subscribe mailing list: https://lists.01.org/mailman/listinfo/igvt-g


More information about background, architecture and others about Intel GVT-g, 
can be found at:

https://01.org/igvt-g
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-REWRITE%203RD%20v4.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note: The XenGT project should be considered a work in progress. As such it is 
not a complete product nor should it be considered one. Extra care should be 
taken when testing and configuring a system to use the XenGT project.

--
Thanks,
Jike

On 07/22/2016 01:42 PM, Jike Song wrote:
> Hi all,
> 
> We are pleased to announce another update of Intel GVT-g for Xen.
> 
> Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
> starting from 4th generation Intel Core(TM) processors with Intel Graphics 
> processors. A virtual GPU instance is maintained for each VM, with part of 
> performance critical resources directly assigned. The capability of running 
> native graphics driver inside a VM, without hypervisor intervention in 
> performance critical paths, achieves a good balance among performance, 
> feature, and sharing capability. Xen is currently supported on Intel 
> Processor Graphics (a.k.a. XenGT).
> 
> Repositories
> -Xen: https://github.com/01org/igvtg-xen (2016q2-4.6 branch)
> -Kernel: https://github.com/01org/igvtg-kernel (2016q2-4.3.0 branch)
> -Qemu: https://github.com/01org/igvtg-qemu (2016q2-2.3.0 branch)
> 
> This update consists of:
> -Support Windows 10 guest
> -Support Windows Graphics driver installation on both Windows Normal mode 
> and Safe mode
> 
> Known issues:
> -   At least 2GB memory is suggested for Guest Virtual Machine (VM) to run 
> most 3D workloads
> -   Dom0 S3 related feature is not supported
> -   Windows 8 and later versions: fast boot is not supported, the workaround 
> is to disable power S3/S4 in HVM file by adding "acpi_S3=0, acpi_S4=0"
> -   Using Windows Media Player play videos may cause host crash. Using VLC to 
> play .ogg file may cause mosaic or slow response.
> -   Sometimes when both dom0 and guest have heavy workloads, i915 in dom0 
> will trigger a false graphics reset,
> the workaround is to disable dom0 hangcheck in grub file by adding 
> "i915.enable_hangcheck=0".
> 
> Next update will be around early Oct, 

[Xen-devel] [xen-unstable-coverity test] 101966: all pass - PUSHED

2016-11-06 Thread osstest service owner
flight 101966 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101966/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen  3ebe9a1a826e8d569bef6045777cc01a5699933d
baseline version:
 xen  496673a2ada93c201fbe1cc83146c8bd8e79169d

Last test of basis   101857  2016-11-02 09:19:34 Z    4 days
Testing same since   101966  2016-11-06 09:19:06 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Daniel De Graaf 
  Dario Faggioli 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Konrad Rzeszutek Wilk 
  Luwei Kang 
  Roger Pau Monne 
  Roger Pau Monné 
  Wei Liu 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-coverity
+ revision=3ebe9a1a826e8d569bef6045777cc01a5699933d
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push 
xen-unstable-coverity 3ebe9a1a826e8d569bef6045777cc01a5699933d
+ branch=xen-unstable-coverity
+ revision=3ebe9a1a826e8d569bef6045777cc01a5699933d
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-coverity
+ qemuubranch=qemu-upstream-unstable-coverity
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-coverity
+ prevxenbranch=xen-4.7-testing
+ '[' x3ebe9a1a826e8d569bef6045777cc01a5699933d = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/xen/git/linux-pvops.git
++ : 

[Xen-devel] [qemu-mainline test] 101954: regressions - FAIL

2016-11-06 Thread osstest service owner
flight 101954 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101954/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt 11 guest-start  fail REGR. vs. 101909
 test-amd64-i386-libvirt-pair 20 guest-start/debian   fail REGR. vs. 101909
 test-amd64-i386-libvirt-xsm  11 guest-start  fail REGR. vs. 101909
 test-amd64-i386-libvirt  11 guest-start  fail REGR. vs. 101909
 test-amd64-amd64-libvirt-xsm 11 guest-start  fail REGR. vs. 101909
 test-armhf-armhf-xl-xsm   7 host-ping-check-xen  fail REGR. vs. 101909
 test-amd64-amd64-libvirt-vhd  9 debian-di-installfail REGR. vs. 101909
 test-amd64-amd64-xl-qcow2 9 debian-di-installfail REGR. vs. 101909
 test-amd64-amd64-libvirt-pair 20 guest-start/debian  fail REGR. vs. 101909
 test-armhf-armhf-libvirt-xsm 11 guest-start  fail REGR. vs. 101909
 test-armhf-armhf-xl-vhd   9 debian-di-installfail REGR. vs. 101909
 test-armhf-armhf-libvirt 11 guest-start  fail REGR. vs. 101909
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail REGR. vs. 101909
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install  fail REGR. vs. 101909

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 101909
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 101909
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 101909

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass

version targeted for testing:
 qemuu 9226682a401f34b10fd79dfe17ba334da0800747
baseline version:
 qemuu 199a5bde46b0eab898ab1ec591f423000302569f

Last test of basis   101909  2016-11-03 23:21:40 Z    2 days
Testing same since   101943  2016-11-04 22:40:48 Z    1 days    2 attempts


People who touched revisions under test:
  Olaf Hering 
  Sander Eikelenboom 
  Stefan Hajnoczi 
  Stefano Stabellini 
  Thomas Huth 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 

[Xen-devel] [linux-3.4 test] 101951: regressions - FAIL

2016-11-06 Thread osstest service owner
flight 101951 linux-3.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101951/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl   6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 
92983
 test-amd64-amd64-libvirt-vhd  6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-xl-qcow2 6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-bootfail REGR. vs. 92983
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-xl-qemuu-winxpsp3  6 xen-boot fail REGR. vs. 92983
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot   fail REGR. vs. 92983
 test-amd64-i386-xl6 xen-boot  fail REGR. vs. 92983
 test-amd64-amd64-xl-xsm   6 xen-boot  fail REGR. vs. 92983
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  6 xen-boot  fail REGR. vs. 92983

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail in 101695 pass 
in 101951
 test-amd64-amd64-i386-pvgrub  6 xen-boot   fail pass in 101695
 test-amd64-amd64-xl-rtds  6 xen-boot   fail pass in 101695
 test-amd64-i386-freebsd10-amd64  6 xen-bootfail pass in 101695
 test-amd64-i386-pair  9 xen-boot/src_host  fail pass in 101720
 test-amd64-i386-pair 10 xen-boot/dst_host  fail pass in 101720
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-boot   fail pass in 101822
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-boot   fail pass in 101840
 test-amd64-i386-xl-qemut-winxpsp3  6 xen-boot  fail pass in 101840
 test-amd64-amd64-amd64-pvgrub  6 xen-boot  fail pass in 101867
 test-amd64-i386-libvirt-pair  9 xen-boot/src_host  fail pass in 101867
 test-amd64-i386-libvirt-pair 10 xen-boot/dst_host  fail pass in 101867

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 92983
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 92983
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 92983

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)   blocked  n/a
 build-amd64-rumprun   7 xen-buildfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 build-i386-rumprun7 xen-buildfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass

version targeted for testing:
 linux 8d1988f838a95e836342b505398d38b223181f17
baseline version:
 linux 343a5fbeef08baf2097b8cf4e26137cebe3cfef4

Last test of basis    92983  2016-04-27 16:21:44 Z  192 days
Testing same since   101695  2016-10-26 18:26:23 Z   10 days   16 attempts


People who touched revisions under test:
  "Suzuki K. Poulose" 
  Aaro Koskinen 
  Al Viro 
  Alan Stern 
  Aleksander Morgado 
  Alex Thorlton 
  Alexandru Cornea 
  Alexey Khoroshilov 
  Amitkumar Karwar 
  Andrew Banman 
  Andrew Morton 
  Andrey Ryabinin 
  Anson Huang 
  Arnaldo Carvalho de Melo 
  Arnaldo Carvalho de Melo 
  Arnd Bergmann 
  Ben Hutchings 
  Bjørn Mork 
  Boris Brezillon 
  Borislav Petkov 
  Brian Norris 
  Charles Keepax