[Xen-devel] [linux-arm-xen test] 107488: regressions - trouble: broken/fail/pass

2017-04-17 Thread osstest service owner
flight 107488 linux-arm-xen real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   6 xen-boot   fail in 107371 REGR. vs. 107176

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   2 hosts-allocate broken in 107296 pass in 107488
 test-armhf-armhf-xl-arndale   3 host-install(3)  broken pass in 107483
 test-armhf-armhf-xl-credit2   6 xen-boot   fail pass in 107296
 test-armhf-armhf-libvirt-raw  6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-xsm   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-multivcpu  6 xen-boot  fail pass in 107296
 test-armhf-armhf-libvirt  6 xen-boot   fail pass in 107296
 test-armhf-armhf-libvirt-xsm  6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-vhd   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-rtds  6 xen-boot   fail pass in 107371

Regressions which are regarded as allowable (not blocking):
 test-arm64-arm64-libvirt-qcow2  9 debian-di-install fail blocked in 107176
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail in 107296 like 107176
 test-armhf-armhf-libvirt 13 saverestore-support-check fail in 107296 like 107176

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-check fail in 107296 never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail in 107296 never pass
 test-arm64-arm64-xl-credit2 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-multivcpu 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-rtds 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 saverestore-support-check fail never pass

version targeted for testing:
 linux                9ceff47026d8db55dc9f133a40ae4042c71fcb13
baseline version:
 linux                6878b2fa7229c9208a02d45f280c71389cba0617

Last test of basis   107176  2017-04-04 09:44:38 Z   13 days
Failing since        107256  2017-04-07 00:24:43 Z   11 days   15 attempts
Testing same since   107296  2017-04-08 07:12:44 Z    9 days   14 attempts


10162 people touched revisions under test,
not listing them all

jobs:
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 

Re: [Xen-devel] [Qemu-devel][PATCH] configure: introduce --enable-xen-fb-backend

2017-04-17 Thread Juergen Gross
On 14/04/17 19:52, Stefano Stabellini wrote:
> On Fri, 14 Apr 2017, Juergen Gross wrote:
>> On 14/04/17 08:06, Oleksandr Andrushchenko wrote:
>>> On 04/14/2017 03:12 AM, Stefano Stabellini wrote:
 On Tue, 11 Apr 2017, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
>
> For some use cases when Xen framebuffer/input backend
> is not a part of Qemu it is required to disable it,
> because of conflicting access to input/display devices.
> Introduce additional configuration option for explicit
> input/display control.
 In these cases when you don't want xenfb, why don't you just remove
 "vfb" from the xl config file? QEMU only starts the xenfb backend when
 requested by the toolstack.

 Is it because you have an alternative xenfb backend? If so, is it really
 fully xenfb compatible, or is it a different protocol? If it is a
 different protocol, I suggest you rename your frontend/backend PV device
 name to something different from "vfb".
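For reference, dropping the vfb stanza from the xl guest config is what keeps QEMU from starting the xenfb backend. A minimal illustrative fragment (the values are made up, not taken from this thread):

```
# illustrative xl guest config fragment (hypothetical values)
name   = "guest0"
memory = 1024
# With this line present, the toolstack asks QEMU to start the
# xenfb vfb/vkbd backend pair for the domain:
vfb = [ 'vnc=1' ]
# Removing the vfb line means no xenfb backend is requested.
```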

>>> Well, the offending part is actually vkbd (for multi-touch
>>> we run our own user-space backend, which supports
>>> kbd/ptr/mtouch), but vfb and vkbd are the same backend
>>> in QEMU. So I am OK with vfb, but just want vkbd off.
>>> There are 2 options:
>>> 1. At compile time remove vkbd and still allow vfb
>>> 2. Remove xenfb completely, if acceptable (this is my case)
>>
>> What about adding a Xenstore entry for backend type and let qemu test
>> for it being not present or containing "qemu"?
> 
> That is what we do for the console, using the xenstore node "type". QEMU
> is "ioemu" while xenconsoled is "xenconsoled". Weirdly, instead of a
> backend node, it is a read-only frontend node, see
> tools/libxl/libxl_console.c:libxl__device_console_add.
> 
> Oleksandr, I am sorry to feature-creep this simple patch, but I think
> Juergen is right. But we cannot do it just for one protocol. We need to
> introduce a generic way to enable/disable backends in QEMU. Using a
> xenstore node is OK.

An alternative solution would be similar to qdisk/tap or qusb/vusb
backends: Use different device types on backend side while keeping
frontend side of Xenstore the same as today.

So today the vkbd backend nodes are:

/local/domain/0/backend/vkbd/

You could use:

/local/domain/0/backend/mtouch

and keep the frontend nodes (/local/domain/<domid>/device/vkbd/), possibly
with additional feature node(s).

The qemu backend would have to check for the vkbd backend nodes to be
present before enabling the related backend.
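To make that concrete, the resulting xenstore layout might look like the sketch below; the paths are illustrative, and <domid>/<devid> are placeholders:

```
# today: QEMU's combined keyboard/pointer backend
/local/domain/0/backend/vkbd/<domid>/<devid>/...

# proposed: a distinct backend type for the user-space multi-touch backend
/local/domain/0/backend/mtouch/<domid>/<devid>/...

# frontend side stays as it is, possibly with extra feature node(s)
/local/domain/<domid>/device/vkbd/<devid>/...
```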


Juergen

> 
> We could do exactly the same as the PV console, thus "type" = "ioemu",
> read-only, under the frontend xenstore directory. Or we could introduce
> new nodes. I would probably go for "backend-type" = "qemu" under the
> backend xenstore directory. I don't have a strong opinion about this. In
> the example below I'll use the PV console convention.
> 
> For starters:
> 
> * libxl needs to write the "type" node to xenstore for *all* protocols.
>   The "type" is not yet configurable.
> * qemu reads them for all backends, proceeds if "type" = "ioemu"
> 
> These should be two simple patches. Stage 2:
> 
> * we add options in the xl config file to configure any backend, libxl
>   set "type" accordingly (Maybe not *any*, but vif, vkbd, vfb could all
>   have a "type". It is OK if you only add an option for vkbd.)
> * non-QEMU backends, in particular Linux backends, also read the "type"
>   node and proceed if it's "linux"
> 
> Does this sound OK to you?
> 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Tian, Kevin
> From: Roger Pau Monné [mailto:roger@citrix.com]
> Sent: Friday, April 14, 2017 11:35 PM
> 
> Hello,
> 
> Although PVHv2 Dom0 is not yet finished, I've been trying the current code
> on
> different hardware, and found that with pre-Haswell Intel hardware PVHv2
> Dom0
> completely freezes the box when calling iommu_hwdom_init in
> dom0_construct_pvh.
> OTOH the same doesn't happen when using a newer CPU (ie: haswell or
> newer).
> 
> I'm not able to debug that in any meaningful way because the box seems to
> lock
> up completely, even the watchdog NMI stops working. Here is the boot log,
> up to
> the point where it freezes:
> 

I don't have any ideas right now without seeing a more meaningful debug
message. Maybe you have to add more fine-grained prints to capture some
useful hints.

Thanks
Kevin



[Xen-devel] [linux-4.9 test] 107487: regressions - FAIL

2017-04-17 Thread osstest service owner
flight 107487 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107487/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   6 xen-boot fail REGR. vs. 107358

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt  9 debian-install   fail in 107477 pass in 107487
 test-amd64-amd64-i386-pvgrub 21 leak-check/check fail in 107477 pass in 107487
 test-armhf-armhf-xl-cubietruck 15 guest-start/debian.repeat fail in 107477 pass in 107487
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail pass in 107477

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-xsm   6 xen-boot fail  like 107358
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 107358
 test-armhf-armhf-xl-rtds  6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt-raw  6 xen-boot fail  like 107358
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail like 107358
 test-armhf-armhf-xl-vhd   6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt  6 xen-boot fail  like 107358

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-multivcpu 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-xl-rtds 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1 fail never pass
 test-amd64-amd64-qemuu-nested-amd 13 xen-boot/l1 fail never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-armhf-armhf-xl-arndale 6 xen-boot fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass

version targeted for testing:
 linux                cf2586e60ede2217d7f53a0585e27e1cca693600
baseline version:
 linux                37feaf8095d352014555b82adb4a04609ca17d3f

Last test of basis   107358  2017-04-10 19:42:52 Z    7 days
Testing same since   107396  2017-04-12 11:15:19 Z    5 days   11 attempts


People who touched revisions under test:
  Adrian Hunter 
  Alan Stern 
  Alberto Aguirre 
  Alex Deucher 
  Alex Williamson 
  Alex Wood 
  Alexander Polakov 
  Alexander Polyakov 
  Andrew Morton 
  Andrey Smetanin 
  Andy Gross 
  Andy Shevchenko 
  Arend van Spriel 
  Arnd Bergmann 
  Aurelien Aptel 
  Baoyou Xie 
  Bartosz Golaszewski 
  Bastien Nocera 

Re: [Xen-devel] [PATCH for-4.9] x86/vioapic: allow holes in the GSI range for PVH Dom0

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 05:09:22PM +0100, Roger Pau Monne wrote:
>The current vIO APIC for PVH Dom0 doesn't allow non-contiguous GSIs, which
>means that all GSIs must belong to an IO APIC. This doesn't match reality,
>where there are systems with non-contiguous GSIs.
>
>In order to fix this add a base_gsi field to each hvm_vioapic struct, in order
>to store the base GSI for each emulated IO APIC. For PVH Dom0 those values are
>populated based on the hardware ones.
>
>Signed-off-by: Roger Pau Monné 
>Reported-by: Chao Gao 

Tested-by: Chao Gao 

Thanks
Chao



[Xen-devel] [linux-linus test] 107486: regressions - FAIL

2017-04-17 Thread osstest service owner
flight 107486 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107486/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 11 guest-start  fail REGR. vs. 59254
 test-armhf-armhf-xl-cubietruck 11 guest-start fail REGR. vs. 59254
 test-armhf-armhf-xl  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-libvirt 11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-xl-xsm  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-xl-arndale  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-libvirt-xsm 11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-xl-credit2  11 guest-start fail in 107480 REGR. vs. 59254

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm  3 host-install(3) broken in 107480 pass in 107486
 test-armhf-armhf-xl-credit2   9 debian-install fail pass in 107480

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start   fail REGR. vs. 59254
 test-amd64-amd64-xl-rtds  9 debian-installfail REGR. vs. 59254
 test-arm64-arm64-libvirt-xsm  6 xen-bootfail baseline untested
 test-armhf-armhf-xl-vhd   9 debian-di-install   fail baseline untested
 test-armhf-armhf-libvirt-raw  9 debian-di-install   fail baseline untested
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 59254
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 59254
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 59254
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 59254

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 guest-start  fail in 107480 never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-multivcpu 11 guest-start fail never pass
 test-arm64-arm64-xl-xsm 11 guest-start fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 11 guest-start fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-xl 11 guest-start fail never pass
 test-arm64-arm64-xl-credit2 11 guest-start fail never pass
 test-arm64-arm64-xl-rtds 11 guest-start fail never pass
 test-arm64-arm64-libvirt-qcow2 9 debian-di-install fail never pass

version targeted for testing:
 linux                4f7d029b9bf009fbee76bb10c0c4351a1870d2f3
baseline version:
 linux                45820c294fe1b1a9df495d57f40585ef2d069a39

Last test of basis    59254  2015-07-09 04:20:48 Z  648 days
Failing since         59348  2015-07-10 04:24:05 Z  647 days  394 attempts
Testing same since   107480  2017-04-17 00:47:59 Z    0 days    2 attempts


8168 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumprun  pass
 build-i386-rumprun   pass
 

[Xen-devel] [linux-next test] 107485: regressions - FAIL

2017-04-17 Thread osstest service owner
flight 107485 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107485/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-boot   fail REGR. vs. 107406
 test-amd64-i386-xl-qemut-winxpsp3  6 xen-bootfail REGR. vs. 107406
 test-amd64-i386-freebsd10-amd64  6 xen-boot  fail REGR. vs. 107406
 test-amd64-amd64-rumprun-amd64  6 xen-boot   fail REGR. vs. 107406
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-boot  fail REGR. vs. 107406
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot  fail REGR. vs. 107406
 test-amd64-amd64-pair 9 xen-boot/src_hostfail REGR. vs. 107406
 test-amd64-amd64-pair10 xen-boot/dst_hostfail REGR. vs. 107406
 test-amd64-i386-libvirt-pair  9 xen-boot/src_hostfail REGR. vs. 107406
 test-amd64-amd64-libvirt-pair  9 xen-boot/src_host   fail REGR. vs. 107406
 test-amd64-amd64-libvirt-pair 10 xen-boot/dst_host   fail REGR. vs. 107406
 test-amd64-i386-libvirt-pair 10 xen-boot/dst_hostfail REGR. vs. 107406
 test-amd64-i386-libvirt-xsm   6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl   6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-libvirt   6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-libvirt  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemuu-winxpsp3  6 xen-bootfail REGR. vs. 107406
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-boot   fail REGR. vs. 107406
 test-amd64-amd64-i386-pvgrub  6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-pvh-intel  6 xen-bootfail REGR. vs. 107406
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-libvirt-xsm  6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-amd64-pvgrub  6 xen-bootfail REGR. vs. 107406
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-xsm6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-raw6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 107406
 test-amd64-i386-freebsd10-i386  6 xen-boot   fail REGR. vs. 107406
 test-amd64-amd64-pygrub   6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-xsm   6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-credit2   6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-libvirt-vhd  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-pair  9 xen-boot/src_hostfail REGR. vs. 107406
 test-amd64-i386-pair 10 xen-boot/dst_hostfail REGR. vs. 107406
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-rumprun-i386  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  6 xen-bootfail REGR. vs. 107406
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-boot   fail REGR. vs. 107406
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemut-win7-amd64  6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemut-debianhvm-amd64  6 xen-bootfail REGR. vs. 107406
 test-amd64-amd64-xl-multivcpu  6 xen-bootfail REGR. vs. 107406
 test-amd64-amd64-xl-pvh-amd   6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-xl-qemuu-win7-amd64  6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-boot   fail REGR. vs. 107406
 test-amd64-amd64-xl-qcow2 6 xen-boot fail REGR. vs. 107406
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 107406
 test-amd64-amd64-qemuu-nested-amd  6 xen-bootfail REGR. vs. 107406
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-boot 

Re: [Xen-devel] [PATCH V2] tests/xen-access: Added vm_event emulation tests

2017-04-17 Thread Tamas K Lengyel
On Sat, Apr 15, 2017 at 1:45 AM, Razvan Cojocaru wrote:

> This patch adds support for testing instruction emulation when
> required by the vm_event reply sent for MEM_ACCESS events. To this
> end, it adds the "emulate_write" and "emulate_exec" parameters
> that behave like the old "write" and "exec" parameters, except
> instead of allowing writes / executes for a hit page, they emulate
> the trigger instruction. The new parameters don't mark all of the
> guest's pages, instead they stop at the arbitrary low limit of
> the first 1000 pages - otherwise the guest would slow to a crawl.
> Since the emulator is still incomplete and has trouble with
> emulating competing writes in SMP scenarios, the new tests are
> only meant for debugging issues.
>
> Signed-off-by: Razvan Cojocaru 
>
>
Acked-by: Tamas K Lengyel 


Re: [Xen-devel] EFI + tboot + Xen

2017-04-17 Thread Rich Persaud
On Apr 14, 2017, at 16:43, Daniel Kiper  wrote:
> 
>> On Fri, Apr 14, 2017 at 04:17:54PM +0100, Andrew Cooper wrote:
>>> On 14/04/2017 15:54, Daniel Kiper wrote:
>>> Hey,
>>> 
>>> Has anybody tried to run EFI + tboot + Xen?
>>> I have a feeling that it does not work because
>>> tboot shuts down EFI boot services. However,
>>> even if it works then efibootmgr is unusable
>>> due to lack of EFI runtime services. Do we care?
>>> Is it possible to make it work with full blown
>>> EFI infrastructure available for Xen?
>> 
>> Judging by
>> http://hg.code.sf.net/p/tboot/code/file/9352e6391332/tboot/common/boot.S#l83
>> it will be grub exiting boot services.  tboot needs rather more
>> multiboot2 knowledge before it could participate in a hand-off to Xen
>> while keeping boot services active.
> 
> Sure, it is not a problem. However, I was told that it was (not) done
> deliberately because we cannot trust EFI due to the lack of its measurement.
> I am not sure whether that is true or not. I thought that somebody had played
> with tboot and Xen and has some knowledge in that area. Anyway, I will
> investigate this further. However, any knowledge sharing is greatly appreciated.

On the OpenXT project, Ross Philipson has an early PoC:
https://github.com/rossphilipson/efi-tboot

From the README:
---
EFI TBOOT is mostly a proof of concept at this point. It is not currently
functional. It can be built and installed as an EFI boot loader. It only works
in conjunction with Xen at the moment. The current development work is being
done on Fedora 25 x64. The status as of March 14, 2017 is:

- EFI TBOOT will boot, but it needs a few key strokes to get going (this is
for debugging purposes).
- EFI TBOOT will relocate itself to EFI runtime memory and set up a shared
runtime variable with Xen.
- EFI-related configuration setup is done, as well as standard TBOOT
pre-launch configuration.
- Xen is launched and has code to call EFI TBOOT back after EBS.
- EFI TBOOT then does the SENTER successfully in the callback.
- The post-launch entry point is reached, but the switch back to long mode is
not working.
---

Rich


[Xen-devel] [PATCH 1/2] tools: Use POSIX poll.h instead of sys/poll.h

2017-04-17 Thread Alistair Francis
The POSIX spec specifies to use:
#include <poll.h>
instead of:
#include <sys/poll.h>
as seen here:
http://pubs.opengroup.org/onlinepubs/009695399/functions/poll.html

This removes the warning:
#warning redirecting incorrect #include <sys/poll.h> to <poll.h>
when building with the musl C-library.

Signed-off-by: Alistair Francis 
---
 tools/libxl/libxl_internal.h   | 2 +-
 tools/tests/xen-access/xen-access.c| 2 +-
 tools/xenstat/libxenstat/src/xenstat_qmp.c | 2 +-
 tools/xentrace/xentrace.c  | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index be24b76dfa..5d082c5704 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -38,7 +38,7 @@
 #include 
 
 #include 
-#include <sys/poll.h>
+#include <poll.h>
 #include 
 #include 
 #include 
diff --git a/tools/tests/xen-access/xen-access.c 
b/tools/tests/xen-access/xen-access.c
index ff4d289b45..238011e010 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -36,7 +36,7 @@
 #include 
 #include 
 #include 
-#include <sys/poll.h>
+#include <poll.h>
 
 #include 
 #include 
diff --git a/tools/xenstat/libxenstat/src/xenstat_qmp.c 
b/tools/xenstat/libxenstat/src/xenstat_qmp.c
index a87c9373c2..3fda487d49 100644
--- a/tools/xenstat/libxenstat/src/xenstat_qmp.c
+++ b/tools/xenstat/libxenstat/src/xenstat_qmp.c
@@ -14,7 +14,7 @@
 #include 
 #include 
 #include 
-#include <sys/poll.h>
+#include <poll.h>
 #include 
 #include 
 #include 
diff --git a/tools/xentrace/xentrace.c b/tools/xentrace/xentrace.c
index f09fe6cf19..364a6fdad5 100644
--- a/tools/xentrace/xentrace.c
+++ b/tools/xentrace/xentrace.c
@@ -24,7 +24,7 @@
 #include 
 #include 
 #include 
-#include <sys/poll.h>
+#include <poll.h>
 #include 
 
 #include 
-- 
2.11.0




[Xen-devel] [PATCH 2/2] tools: Use POSIX signal.h instead of sys/signal.h

2017-04-17 Thread Alistair Francis
The POSIX spec specifies to use:
#include <signal.h>
instead of:
#include <sys/signal.h>
as seen here:
    http://pubs.opengroup.org/onlinepubs/009695399/functions/signal.html

This removes the warning:
#warning redirecting incorrect #include <sys/signal.h> to <signal.h>
when building with the musl C-library.

Signed-off-by: Alistair Francis 
---
 tools/blktap2/drivers/tapdisk-server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/blktap2/drivers/tapdisk-server.c 
b/tools/blktap2/drivers/tapdisk-server.c
index eecde3d23f..71315bb069 100644
--- a/tools/blktap2/drivers/tapdisk-server.c
+++ b/tools/blktap2/drivers/tapdisk-server.c
@@ -30,7 +30,7 @@
 #include 
 #include 
 #include 
-#include <sys/signal.h>
+#include <signal.h>
 
 #include "tapdisk-utils.h"
 #include "tapdisk-server.h"
-- 
2.11.0




Re: [Xen-devel] [PATCH v3] ns16550-Add-command-line-parsing-adjustments

2017-04-17 Thread Paratey, Swapnil

Hi Jan,

I have a question about __initconst that you mentioned.

On 4/3/2017 6:55 AM, Jan Beulich wrote:


On 31.03.17 at 17:42,  wrote:

The title needs improvement - it doesn't really reflect what the
patch does.


Add name=value parsing options for com1 and com2 to add flexibility
in setting register values for MMIO UART devices.

Maintain backward compatibility with previous positional parameter
specfications.

eg. com1=115200,8n1,0x3f8,4
eg. com1=baud=115200,parity=n,reg_width=4,reg_shift=2,irq=4
eg. com1=115200,8n1,0x3f8,4,reg_width=4,reg_shift=2

It would have been nice if you had split the new format handling from
the addition of the new sub-options.


--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -324,6 +324,43 @@ Both option `com1` and `com2` follow the same format.
  
  A typical setup for most situations might be `com1=115200,8n1`
  
+In addition to the above positional specification for UART parameters,

+name=value pair specifications are also supported. This is used to add
+flexibility for UART devices which require additional UART parameter
+configurations.
+
+The comma separation still delineates positional parameters. Hence,
+unless the parameter is explicitly specified with name=value option, it
+will be considered a positional parameter.
+
+The syntax consists of
+com1=(comma-separated positional parameters),(comma separated name-value pairs)
+
+The accepted name keywords for name=value pairs are
+ * `baud` - accepts integer baud rate (eg. 115200) or `auto`
+ * `bridge` - accepts xx:xx:xx. Similar to bridge-bdf in positional parameters.
+   notation is <bus>:<device>:<function>
+ * `clock_hz`- accepts large integers to setup UART clock frequencies.
+   Do note - these values are multiplied by 16.
+ * `data_bits` - integer between 5 and 8
+ * `dev` - accepted values are `pci` OR `amt`. This option
+   is used to specify if the serial device is pci-based. The io_base
+   cannot be specified when `dev=pci` or `dev=amt` is used.
+ * `io_base` - accepts integer which specified IO base port for UART registers
+ * `irq` - IRQ number to use
+ * `parity` - accepted values are same as positional parameters
+ * `port` - used to specify which port the PCI serial device is located on
+notation is xx:xx:xx ::

Everywhere above PCI device specifications wrongly use : instead
of . as separator between device and function.


+ * `reg_shift` - register shifts required to set UART registers
+ * `reg_width` - register width required to set UART registers
+ (only accepts 1 and 4)
+ * `stop_bits` - only accepts 1 or 2 for the number of stop bits

Since these are all new anyway, can we please use - instead of _
as separator characters inside sub-option names? Dashes are
slightly easier to type than underscores on most keyboards.


--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -48,7 +48,7 @@ static void __init assign_integer_param(
  
  void __init cmdline_parse(const char *cmdline)

  {
-char opt[100], *optval, *optkey, *q;
+char opt[512], *optval, *optkey, *q;

Why not MAX_CMDLINE_LENGTH? But anyway both this and ...


--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -38,11 +38,27 @@
   * can be specified in place of a numeric baud rate. Polled mode is specified
   * by requesting irq 0.
   */
-static char __initdata opt_com1[30] = "";
-static char __initdata opt_com2[30] = "";
+static char __initdata opt_com1[MAX_CMDLINE_LENGTH] = "";
+static char __initdata opt_com2[MAX_CMDLINE_LENGTH] = "";

... this seems to be excessive growth.


+typedef enum e_serial_param_type {
+BAUD=0,

Stray "=0". Also I don't think enumerator identifiers should be all
capitals.


+BRIDGEBDF,
+CLOCKHZ,
+DATABITS,
+DEVICE,
+IO_BASE,
+IRQ,
+PARITY,
+PORTBDF,
+REG_SHIFT,
+REG_WIDTH,
+STOPBITS,
+__MAX_SERIAL_PARAM /* introduce more parameters before this line */

Stray double underscores.


@@ -77,6 +93,29 @@ static struct ns16550 {
  #endif
  } ns16550_com[2] = { { 0 } };
  
+struct serial_param_var

+{
+char *sp_name;

const


+serial_param_type sp_type;
+};
+
+/* enum struct keeping a table of all accepted parameter names
+ * for parsing cmdline for serial port com1 and com2 */
+static struct serial_param_var sp_vars[] = {

const ... __initconst plus you should aim at arranging for the
string literals below to also get placed in .init.rodata (instead of
.rodata).


Adding an __initconst before the variable name (or after it) makes
sp_vars go into the .init.data section, as verified with
"objdump -t xen-syms | grep sp_vars".

I'm not able to see a .init.rodata section at all for any other
variable to emulate similar behavior, i.e.
"objdump -t xen-syms | grep .init.rodata"
doesn't show any results (whereas .init.data shows many).

The header file for __initconst defines it as .init.rodata but
sp_vars ends up in 
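One arrangement that follows the reviewer's suggestion can be sketched as below. This is only an illustrative standalone sketch, under the assumption that __initconst expands to a .init.rodata section attribute (the stub below mimics that): each string literal gets its own __initconst array, and the const table references those, so both land in init sections.

```c
#include <string.h>

/*
 * Stub mimicking Xen's __initconst for a standalone build; in Xen
 * proper this comes from xen/include/xen/init.h.
 */
#ifndef __initconst
#define __initconst __attribute__((__section__(".init.rodata")))
#endif

typedef enum { baud, reg_shift } serial_param_type;

/* Named string arrays so the literals themselves carry the attribute. */
static const char sp_name_baud[] __initconst = "baud";
static const char sp_name_reg_shift[] __initconst = "reg-shift";

/* The table is const as well, so the whole aggregate is read-only. */
static const struct serial_param_var {
    const char *sp_name;
    serial_param_type sp_type;
} sp_vars[] __initconst = {
    { sp_name_baud, baud },
    { sp_name_reg_shift, reg_shift },
};
```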

[Xen-devel] [PATCH v2 2/2] kexec: remove spinlock now that all KEXEC hypercall ops are protected at the top-level

2017-04-17 Thread Eric DeVolder
Remove the spinlock in kexec_swap_images(): this
function is only reachable via the kexec hypercall, which is
now protected at the top level in do_kexec_op_internal(),
so the local spinlock is no longer necessary.

Signed-off-by: Eric DeVolder 
Reviewed-by: Bhavesh Davda 
Reviewed-by: Konrad Rzeszutek Wilk 
---
 xen/common/kexec.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 3f96eb2..efecf60 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -820,7 +820,6 @@ static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 static int kexec_swap_images(int type, struct kexec_image *new,
  struct kexec_image **old)
 {
-static DEFINE_SPINLOCK(kexec_lock);
 int base, bit, pos;
 int new_slot, old_slot;
 
@@ -832,8 +831,6 @@ static int kexec_swap_images(int type, struct kexec_image *new,
     if ( kexec_load_get_bits(type, &base, &bit) )
 return -EINVAL;
 
-    spin_lock(&kexec_lock);
-
    pos = (test_bit(bit, &kexec_flags) != 0);
 old_slot = base + pos;
 new_slot = base + !pos;
@@ -846,8 +843,6 @@ static int kexec_swap_images(int type, struct kexec_image *new,
     clear_bit(old_slot, &kexec_flags);
 *old = kexec_image[old_slot];
 
-    spin_unlock(&kexec_lock);
-
 return 0;
 }
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 0/2] kexec: Use hypercall_create_continuation to protect KEXEC ops

2017-04-17 Thread Eric DeVolder
During testing (using the script below) we found that concurrent
invocations of kexec unload/load are not safe.

This problem does not exist with classic Xen kernels, in which kexec-tools
did the kexec via Linux kernel syscall (which in turn made the
hypercall), as the Linux code has a mutex_trylock which would
inhibit multiple concurrent calls.

But with the kexec-tools utilizing xc_kexec_* that is no longer
the case and we need to protect against multiple concurrent
invocations.

Please see the patches and review at your convenience!
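The serialization the series introduces can be illustrated with a simplified userspace analogue (a sketch, not Xen code): a single test-and-set flag guards the whole operation, and a concurrent caller is told to retry — which in the hypervisor is done via hypercall_create_continuation() rather than an error return.

```c
#include <stdatomic.h>

/* Lock-free "only one hypercall in flight" flag, as in the series. */
static atomic_flag op_in_progress = ATOMIC_FLAG_INIT;

static int do_op(int op, int (*body)(int))
{
    int ret;

    /* If the flag was already set, someone else is mid-operation. */
    if ( atomic_flag_test_and_set(&op_in_progress) )
        return -11;                 /* -EAGAIN: caller should retry */

    ret = body(op);

    atomic_flag_clear(&op_in_progress);
    return ret;
}
```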

 try-crash.pl from bhavesh.da...@oracle.com 
#!/usr/bin/perl -w

use strict;
use warnings;
use threads;

sub threaded_task {
threads->create(sub {
my $thr_id = threads->self->tid;
#print "Starting load thread $thr_id\n";
system("/sbin/kexec -p --command-line=\"placeholder root=/dev/mapper/nimbula-root ro rhbg console=tty0 console=hvc0 earlyprintk=xen nomodeset printk.time=1 irqpoll maxcpus=1 nr_cpus=1 reset_devices cgroup_disable=memory mce=off selinux=0 console=ttyS1,115200n8\" --initrd=/boot/initrd-4.1.12-61.1.9.el6uek.x86_64kdump.img /boot/vmlinuz-4.1.12-61.1.9.el6uek.x86_64");
#print "Ending load thread $thr_id\n";
threads->detach(); #End thread.
});
threads->create(sub {
my $thr_id = threads->self->tid;
#print "Starting unload thread $thr_id\n";
system("/sbin/kexec  -p -u");
#print "Ending unload thread $thr_id\n";
threads->detach(); #End thread.
});
}

for my $i (0..99)
{
threaded_task();
}


Eric DeVolder (2):
  kexec: use hypercall_create_continuation to protect KEXEC ops
  kexec: remove spinlock now that all KEXEC hypercall ops are protected
at the top-level

 xen/common/kexec.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

-- 
2.7.4




[Xen-devel] [PATCH v2 1/2] kexec: use hypercall_create_continuation to protect KEXEC ops

2017-04-17 Thread Eric DeVolder
When we concurrently try to unload and load crash
images we eventually get:

 Xen call trace:
[] machine_kexec_add_page+0x3a0/0x3fa
[] machine_kexec_load+0xdb/0x107
[] kexec.c#kexec_load_slot+0x11/0x42
[] kexec.c#kexec_load+0x119/0x150
[] kexec.c#do_kexec_op_internal+0xab/0xcf
[] do_kexec_op+0xe/0x1e
[] pv_hypercall+0x20a/0x44a
[] cpufreq.c#test_all_events+0/0x30

 Pagetable walk from 820040088320:
  L4[0x104] = 0002979d1063 
  L3[0x001] = 0002979d0063 
  L2[0x000] = 0002979c7063 
  L1[0x088] = 80037a91ede97063 

The interesting thing is that the page bits (063) look legit.

The operation on which we blow up is us trying to write
in the L1 and finding that the L2 entry points to some
bizarre MFN. It stinks of a race, and it looks like
the issue is due to no concurrency locks when dealing
with the crash kernel space.

Specifically we concurrently call kimage_alloc_crash_control_page
which iterates over the kexec_crash_area.start -> kexec_crash_area.size
and once found:

  if ( page )
  {
  image->next_crash_page = hole_end;
  clear_domain_page(_mfn(page_to_mfn(page)));
  }

clears it. Since the parameters of what MFN to use are provided
by the callers (and the area to search is bounded), the 'page'
is probably the same. So #1: we concurrently clear the
'control_code_page'.

The next step is us passing this 'control_code_page' to
machine_kexec_add_page. This function requires the MFNs:
page_to_maddr(image->control_code_page).

And this would always return the same virtual address, as
the MFN of the control_code_page is inside of the
kexec_crash_area.start -> kexec_crash_area.size area.

Then machine_kexec_add_page updates the L1 .. which can be done
concurrently and on subsequent calls we mangle it up.

This is all a theory at this time, but testing reveals
that adding the hypercall_create_continuation() at the
kexec hypercall fixes the crash.

NOTE: This patch follows 5c5216 (kexec: clear kexec_image slot
when unloading kexec image) to prevent crashes during
simultaneous load/unloads.

NOTE: Consideration was given to using the existing flag
KEXEC_FLAG_IN_PROGRESS to denote a kexec hypercall in
progress. This, however, overloads the original intent of
the flag which is to denote that we are about-to/have made
the jump to the crash path. The overloading would lead to
failures in existing checks on this flag as the flag would
always be set at the top level in do_kexec_op_internal().
For this reason, the new flag KEXEC_FLAG_HC_IN_PROGRESS
was introduced.

While at it, fix the mismatched #define spacing.

Signed-off-by: Eric DeVolder 
Reviewed-by: Bhavesh Davda 
Reviewed-by: Konrad Rzeszutek Wilk 
---
v2: 04/17/2017
 - Patch titled 'kexec: use hypercall_create_continuation to protect KEXEC ops'
 - Jan Beulich directed me to use a continuation method instead
   of spinlock.
 - Incorporated feedback from Daniel Kiper 
 - Incorporated feedback from Konrad Wilk 

v1: 04/10/2017
 - Patch titled 'kexec: Add spinlock for the whole hypercall'
 - Used spinlock in do_kexec_op_internal()
---
 xen/common/kexec.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 072cc8e..3f96eb2 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -50,9 +50,10 @@ static cpumask_t crash_saved_cpus;
 
 static struct kexec_image *kexec_image[KEXEC_IMAGE_NR];
 
-#define KEXEC_FLAG_DEFAULT_POS   (KEXEC_IMAGE_NR + 0)
-#define KEXEC_FLAG_CRASH_POS (KEXEC_IMAGE_NR + 1)
-#define KEXEC_FLAG_IN_PROGRESS   (KEXEC_IMAGE_NR + 2)
+#define KEXEC_FLAG_DEFAULT_POS(KEXEC_IMAGE_NR + 0)
+#define KEXEC_FLAG_CRASH_POS  (KEXEC_IMAGE_NR + 1)
+#define KEXEC_FLAG_IN_PROGRESS(KEXEC_IMAGE_NR + 2)
+#define KEXEC_FLAG_HC_IN_PROGRESS (KEXEC_IMAGE_NR + 3)
 
 static unsigned long kexec_flags = 0; /* the lowest bits are for KEXEC_IMAGE... */
 
@@ -1193,6 +1194,9 @@ static int do_kexec_op_internal(unsigned long op,
 if ( ret )
 return ret;
 
+    if ( test_and_set_bit(KEXEC_FLAG_HC_IN_PROGRESS, &kexec_flags) )
+        return hypercall_create_continuation(__HYPERVISOR_kexec_op, "lh", op, uarg);
+
 switch ( op )
 {
 case KEXEC_CMD_kexec_get_range:
@@ -1227,6 +1231,8 @@ static int do_kexec_op_internal(unsigned long op,
 break;
 }
 
+    clear_bit(KEXEC_FLAG_HC_IN_PROGRESS, &kexec_flags);
+
 return ret;
 }
 
-- 
2.7.4




[Xen-devel] [linux-arm-xen test] 107483: regressions - FAIL

2017-04-17 Thread osstest service owner
flight 107483 linux-arm-xen real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107483/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   6 xen-boot fail REGR. vs. 107176

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   2 hosts-allocate broken in 107296 pass in 107483
 test-armhf-armhf-xl-credit2   6 xen-boot   fail pass in 107296
 test-armhf-armhf-libvirt-raw  6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-xsm   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-multivcpu  6 xen-boot  fail pass in 107296
 test-armhf-armhf-libvirt  6 xen-boot   fail pass in 107296
 test-armhf-armhf-libvirt-xsm  6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-vhd   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-rtds  6 xen-boot   fail pass in 107371

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail in 107296 like 107176
 test-armhf-armhf-libvirt 13 saverestore-support-check fail in 107296 like 107176

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-rtds12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail in 107296 never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-arm64-arm64-xl  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-rtds 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-arm64-arm64-libvirt 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass

version targeted for testing:
 linux                9ceff47026d8db55dc9f133a40ae4042c71fcb13
baseline version:
 linux                6878b2fa7229c9208a02d45f280c71389cba0617

Last test of basis   107176  2017-04-04 09:44:38 Z   13 days
Failing since        107256  2017-04-07 00:24:43 Z   10 days   14 attempts
Testing same since   107296  2017-04-08 07:12:44 Z    9 days   13 attempts


10162 people touched revisions under test,
not listing them all

jobs:
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-arm64  pass
 build-armhf  pass
 build-arm64-libvirt   

[Xen-devel] [PATCH for-4.9] x86/vioapic: allow holes in the GSI range for PVH Dom0

2017-04-17 Thread Roger Pau Monne
The current vIO APIC for PVH Dom0 doesn't allow non-contiguous GSIs, which
means that all GSIs must belong to an IO APIC. This doesn't match reality,
where there are systems with non-contiguous GSIs.

In order to fix this add a base_gsi field to each hvm_vioapic struct, in order
to store the base GSI for each emulated IO APIC. For PVH Dom0 those values are
populated based on the hardware ones.

Signed-off-by: Roger Pau Monné 
Reported-by: Chao Gao 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Julien Grall 
Cc: Chao Gao 
---
I think this is a good candidate to be included in 4.9: it's a bugfix, and it's
only used by PVH Dom0, so the risk is low IMHO.
---
 xen/arch/x86/hvm/vioapic.c| 41 +++
 xen/include/asm-x86/hvm/vioapic.h |  1 +
 2 files changed, 17 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 5157db7a4e..ec87a97651 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -64,37 +64,23 @@ static struct hvm_vioapic *addr_vioapic(const struct domain *d,
 struct hvm_vioapic *gsi_vioapic(const struct domain *d, unsigned int gsi,
 unsigned int *pin)
 {
-unsigned int i, base_gsi = 0;
+unsigned int i;
 
 for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
 {
 struct hvm_vioapic *vioapic = domain_vioapic(d, i);
 
-if ( gsi >= base_gsi && gsi < base_gsi + vioapic->nr_pins )
+if ( gsi >= vioapic->base_gsi &&
+ gsi < vioapic->base_gsi + vioapic->nr_pins )
 {
-*pin = gsi - base_gsi;
+*pin = gsi - vioapic->base_gsi;
 return vioapic;
 }
-
-base_gsi += vioapic->nr_pins;
 }
 
 return NULL;
 }
 
-static unsigned int base_gsi(const struct domain *d,
- const struct hvm_vioapic *vioapic)
-{
-unsigned int nr_vioapics = d->arch.hvm_domain.nr_vioapics;
-unsigned int base_gsi = 0, i = 0;
-const struct hvm_vioapic *tmp;
-
-while ( i < nr_vioapics && (tmp = domain_vioapic(d, i++)) != vioapic )
-base_gsi += tmp->nr_pins;
-
-return base_gsi;
-}
-
 static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
 {
 uint32_t result = 0;
@@ -180,7 +166,7 @@ static void vioapic_write_redirent(
 struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 union vioapic_redir_entry *pent, ent;
 int unmasked = 0;
-unsigned int gsi = base_gsi(d, vioapic) + idx;
+unsigned int gsi = vioapic->base_gsi + idx;
 
     spin_lock(&d->arch.hvm_domain.irq_lock);
 
@@ -340,7 +326,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 struct domain *d = vioapic_domain(vioapic);
 struct vlapic *target;
 struct vcpu *v;
-unsigned int irq = base_gsi(d, vioapic) + pin;
+unsigned int irq = vioapic->base_gsi + pin;
 
     ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
 
@@ -451,7 +437,7 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
 {
 struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 union vioapic_redir_entry *ent;
-unsigned int i, base_gsi = 0;
+unsigned int i;
 
 ASSERT(has_vioapic(d));
 
@@ -473,19 +459,18 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
 if ( iommu_enabled )
 {
     spin_unlock(&d->arch.hvm_domain.irq_lock);
-hvm_dpci_eoi(d, base_gsi + pin, ent);
+hvm_dpci_eoi(d, vioapic->base_gsi + pin, ent);
     spin_lock(&d->arch.hvm_domain.irq_lock);
 }
 
 if ( (ent->fields.trig_mode == VIOAPIC_LEVEL_TRIG) &&
  !ent->fields.mask &&
- hvm_irq->gsi_assert_count[base_gsi + pin] )
+ hvm_irq->gsi_assert_count[vioapic->base_gsi + pin] )
 {
 ent->fields.remote_irr = 1;
 vioapic_deliver(vioapic, pin);
 }
 }
-base_gsi += vioapic->nr_pins;
 }
 
     spin_unlock(&d->arch.hvm_domain.irq_lock);
@@ -554,6 +539,7 @@ void vioapic_reset(struct domain *d)
 {
 vioapic->base_address = mp_ioapics[i].mpc_apicaddr;
 vioapic->id = mp_ioapics[i].mpc_apicid;
+vioapic->base_gsi = io_apic_gsi_base(i);
 }
 vioapic->nr_pins = nr_pins;
 vioapic->domain = d;
@@ -601,7 +587,12 @@ int vioapic_init(struct domain *d)
 nr_gsis += nr_pins;
 }
 
-ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
+/*
+ * NB: hvm_domain_irq(d)->nr_gsis is actually the highest GSI + 1, but
+ * there might be holes in this range (ie: GSIs that don't belong to any
+ * vIO APIC).
+ */
+ASSERT(hvm_domain_irq(d)->nr_gsis >= nr_gsis);
 
 d->arch.hvm_domain.nr_vioapics = nr_vioapics;
 vioapic_reset(d);
diff --git 

Re: [Xen-devel] [PATCH 4/6] xen/arm: platforms: Add Tegra platform to support basic IRQ routing

2017-04-17 Thread Chris Patterson
>> +static const char * const tegra_dt_compat[] __initconst =
>> +{
>> +"nvidia,tegra120",  /* Tegra K1 */
>
> This is still tegra120 (not tegra124), is that intended? If so, it is
> still missing from arch/arm*/boot/dts. Do you have a pointer?

It was not intended; thank you for catching it. I must have lost that
fixup somewhere along the way...

> Also, do we need both tegra_dt_compat and tegra_interrupt_compat? Can we
> keep only one?

The purpose of tegra_interrupt_compat is to maintain a tegra-specific
whitelist of interrupt controllers we know how to route.  Presumably,
there may be custom boards out there that may have additional
interrupt routing capabilities that this patch set would not support
as-is.  I'm not sure of an appropriate way to maintain that logic and
merge them.  However, I am certainly open to suggestion, if you have
any ideas.

Thanks for the review!
-Chris



Re: [Xen-devel] [RFC PATCH 15/23] X86/vioapic: Hook interrupt delivery of vIOAPIC

2017-04-17 Thread Konrad Rzeszutek Wilk
On Fri, Mar 17, 2017 at 07:27:15PM +0800, Lan Tianyu wrote:
> From: Chao Gao 
> 
> When irq remapping is enabled, the IOAPIC Redirection Entry may be in remapping
> format. If so, generate an irq_remapping_request and send it to the domain.
> 
> Signed-off-by: Chao Gao 
> Signed-off-by: Lan Tianyu 
> ---
>  xen/arch/x86/Makefile  |  1 +
>  xen/arch/x86/hvm/vioapic.c | 10 ++
>  xen/arch/x86/viommu.c  | 30 ++
>  xen/include/asm-x86/viommu.h   |  3 +++
>  xen/include/public/arch-x86/hvm/save.h |  1 +
>  5 files changed, 45 insertions(+)
>  create mode 100644 xen/arch/x86/viommu.c
> 
> diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
> index f75eca0..d49f8c8 100644
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -66,6 +66,7 @@ obj-y += usercopy.o
>  obj-y += x86_emulate.o
>  obj-$(CONFIG_TBOOT) += tboot.o
>  obj-y += hpet.o
> +obj-y += viommu.o
>  obj-y += vm_event.o
>  obj-y += xstate.o
>  
> diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> index fdbb21f..6a00644 100644
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -30,6 +30,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -285,9 +286,18 @@ static void vioapic_deliver(struct hvm_hw_vioapic *vioapic, int irq)
>  struct domain *d = vioapic_domain(vioapic);
>  struct vlapic *target;
>  struct vcpu *v;
> +struct irq_remapping_request request;
>  
>  ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
>  
> +if ( vioapic->redirtbl[irq].ir.format )
> +{
> +irq_request_ioapic_fill(&request, vioapic->id,
> +vioapic->redirtbl[irq].bits);
> +viommu_handle_irq_request(d, &request);
> +return;
> +}
> +
>  HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
>  "dest=%x dest_mode=%x delivery_mode=%x "
>  "vector=%x trig_mode=%x",
> diff --git a/xen/arch/x86/viommu.c b/xen/arch/x86/viommu.c
> new file mode 100644
> index 000..ef78d3b
> --- /dev/null
> +++ b/xen/arch/x86/viommu.c
> @@ -0,0 +1,30 @@
> +/*
> + * viommu.c
> + *
> + * virtualize IOMMU.
> + *
> + * Copyright (C) 2017 Chao Gao, Intel Corporation.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms and conditions of the GNU General Public
> + * License, version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include 
> +
> +void irq_request_ioapic_fill(struct irq_remapping_request *req,
> + uint32_t ioapic_id, uint64_t rte)
> +{
> +ASSERT(req);
> +req->type = VIOMMU_REQUEST_IRQ_APIC;
> +req->source_id = ioapic_id;
> +req->msg.rte = rte;

Considering we get 'req' from the stack and it may have garbage, would
it be good to fill out the rest of the entries with sensible values? Or
is there no need for that?
> +}

This being a new file, you should probably include the nice
editor configuration block.
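One way to address the stack-garbage concern raised above can be sketched as below. This is a standalone illustration, not the patch's actual implementation, and the struct is a hypothetical mirror of the one in the series: zero the whole object (padding and the unused union bytes included) before filling in the fields this request type uses.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical mirror of the structure from the patch, for illustration. */
struct irq_remapping_request {
    uint8_t type;
    uint16_t source_id;
    union {
        /* MSI */
        struct { uint64_t addr; uint32_t data; } msi;
        /* Redirection Entry in IOAPIC */
        uint64_t rte;
    } msg;
};

#define VIOMMU_REQUEST_IRQ_APIC 1

void irq_request_ioapic_fill(struct irq_remapping_request *req,
                             uint32_t ioapic_id, uint64_t rte)
{
    /* Clear padding and the unused union member before filling fields. */
    memset(req, 0, sizeof(*req));
    req->type = VIOMMU_REQUEST_IRQ_APIC;
    req->source_id = ioapic_id;
    req->msg.rte = rte;
}
```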

> diff --git a/xen/include/asm-x86/viommu.h b/xen/include/asm-x86/viommu.h
> index 0b25f34..fcf3c24 100644
> --- a/xen/include/asm-x86/viommu.h
> +++ b/xen/include/asm-x86/viommu.h
> @@ -49,6 +49,9 @@ struct irq_remapping_request
>  } msg;
>  };
>  
> +void irq_request_ioapic_fill(struct irq_remapping_request *req,
> + uint32_t ioapic_id, uint64_t rte);
> +
>  static inline const struct viommu_ops *viommu_get_ops(void)
>  {
>  /*
> diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
> index 6127f89..06be4a5 100644
> --- a/xen/include/public/arch-x86/hvm/save.h
> +++ b/xen/include/public/arch-x86/hvm/save.h
> @@ -401,6 +401,7 @@ struct hvm_hw_vioapic {
>  uint8_t reserved[4];
>  uint8_t dest_id;
>  } fields;
> +struct ir_ioapic_rte ir;
>  } redirtbl[VIOAPIC_NUM_PINS];
>  };
>  
> -- 
> 1.8.3.1
> 
> 



Re: [Xen-devel] [RFC PATCH 00/23] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion on Intel platform

2017-04-17 Thread Konrad Rzeszutek Wilk
On Mon, Mar 20, 2017 at 02:23:02PM +, Roger Pau Monné wrote:
> On Fri, Mar 17, 2017 at 07:27:00PM +0800, Lan Tianyu wrote:
> > This patchset is to introduce vIOMMU framework and add virtual VTD's
> > interrupt remapping support according "Xen virtual IOMMU high level
> > design doc V3"
> > (https://lists.xenproject.org/archives/html/xen-devel/2016-11/msg01391.html).

It would be awesome if that was as a patch in docs/misc/

Thanks.

> > 
> > - vIOMMU framework
> > New framework provides viommu_ops and help functions to abstract
> > vIOMMU operations(E,G create, destroy, handle irq remapping request
> > and so on). Vendors (Intel, ARM, AMD and so on) can implement their
> > vIOMMU callbacks.
> > 
> > - Xen vIOMMU device model in Qemu 
> > It's in charge of create/destroy vIOMMU in hypervisor via new vIOMMU
> > DMOP hypercalls. It will be required to pass virtual devices DMA
> > request to hypervisor when enable IOVA(DMA request without PASID)
> > function.
> > 
> > - Virtual VTD
> > In this patchset, we enable irq remapping function and covers both
> > MSI and IOAPIC interrupts. Don't support post interrupt mode emulation
> > and post interrupt mode enabled on host with virtual VTD. Will add
> > later.   
> > 
> > Chao Gao (19):
> >   Tools/libxc: Add viommu operations in libxc
> >   Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table
> > structures
> >   Tools/libacpi: Add new fields in acpi_config to build DMAR table
> >   Tools/libacpi: Add a user configurable parameter to control vIOMMU
> > attributes
> >   Tools/libxl: Inform device model to create a guest with a vIOMMU
> > device
> >   x86/hvm: Introduce a emulated VTD for HVM
> >   X86/vvtd: Add MMIO handler for VVTD
> >   X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD
> >   X86/vvtd: Process interrupt remapping request
> >   X86/vvtd: decode interrupt attribute from IRTE
> >   X86/vioapic: Hook interrupt delivery of vIOAPIC
> >   X86/vvtd: Enable Queued Invalidation through GCMD
> >   X86/vvtd: Enable Interrupt Remapping through GCMD
> >   x86/vpt: Get interrupt vector through a vioapic interface
> >   passthrough: move some fields of hvm_gmsi_info to a sub-structure
> >   Tools/libxc: Add a new interface to bind msi-ir with pirq
> >   X86/vmsi: Hook guest MSI injection
> >   X86/vvtd: Handle interrupt translation faults
> >   X86/vvtd: Add queued invalidation (QI) support
> > 
> > Lan Tianyu (4):
> >   VIOMMU: Add vIOMMU helper functions to create, destroy and query
> > capabilities
> >   DMOP: Introduce new DMOP commands for vIOMMU support
> >   VIOMMU: Add irq request callback to deal with irq remapping
> >   VIOMMU: Add get irq info callback to convert irq remapping request
> > 
> >  tools/libacpi/acpi2_0.h |   45 +
> >  tools/libacpi/build.c   |   58 ++
> >  tools/libacpi/libacpi.h |   12 +
> >  tools/libs/devicemodel/core.c   |   69 ++
> >  tools/libs/devicemodel/include/xendevicemodel.h |   35 +
> >  tools/libs/devicemodel/libxendevicemodel.map|3 +
> >  tools/libxc/include/xenctrl.h   |   17 +
> >  tools/libxc/include/xenctrl_compat.h|5 +
> >  tools/libxc/xc_devicemodel_compat.c |   18 +
> >  tools/libxc/xc_domain.c |   55 +
> >  tools/libxl/libxl_create.c  |   12 +-
> >  tools/libxl/libxl_dm.c  |9 +
> >  tools/libxl/libxl_dom.c |   85 ++
> >  tools/libxl/libxl_types.idl |8 +
> >  tools/xl/xl_parse.c |   54 +
> >  xen/arch/x86/Makefile   |1 +
> >  xen/arch/x86/hvm/Makefile   |1 +
> >  xen/arch/x86/hvm/dm.c   |   29 +
> >  xen/arch/x86/hvm/irq.c  |   10 +
> >  xen/arch/x86/hvm/vioapic.c  |   36 +
> >  xen/arch/x86/hvm/vmsi.c |   17 +-
> >  xen/arch/x86/hvm/vpt.c  |2 +-
> >  xen/arch/x86/hvm/vvtd.c | 1229 +++
> >  xen/arch/x86/viommu.c   |   40 +
> >  xen/common/Makefile |1 +
> >  xen/common/domain.c |3 +
> >  xen/common/viommu.c |  119 +++
> >  xen/drivers/passthrough/io.c|  183 +++-
> >  xen/drivers/passthrough/vtd/iommu.h |  213 +++-
> >  xen/include/asm-arm/viommu.h|   38 +
> >  xen/include/asm-x86/hvm/vioapic.h   |1 +
> >  xen/include/asm-x86/msi.h   |3 +
> > 

Re: [Xen-devel] [RFC PATCH 4/23] VIOMMU: Add get irq info callback to convert irq remapping request

2017-04-17 Thread Konrad Rzeszutek Wilk
On Fri, Mar 17, 2017 at 07:27:04PM +0800, Lan Tianyu wrote:
> This patch is to add get_irq_info callback for platform implementation
> to convert irq remapping request to irq info (E,G vector, dest, dest_mode
> and so on).
> 
> Signed-off-by: Lan Tianyu 
> ---
>  xen/common/viommu.c  | 11 +++
>  xen/include/asm-arm/viommu.h |  4 
>  xen/include/asm-x86/viommu.h |  8 
>  xen/include/xen/viommu.h |  4 
>  4 files changed, 27 insertions(+)
> 
> diff --git a/xen/common/viommu.c b/xen/common/viommu.c
> index 62c66db..dbec692 100644
> --- a/xen/common/viommu.c
> +++ b/xen/common/viommu.c
> @@ -98,6 +98,17 @@ int viommu_handle_irq_request(struct domain *d,
>  return info->ops->handle_irq_request(d, request);
>  }
>  
> +int viommu_get_irq_info(struct domain *d, struct irq_remapping_request *request,
> +struct irq_remapping_info *irq_info)
> +{
> +struct viommu_info *info = &d->viommu;
> +
> +if ( !info || !info->ops || !info->ops->get_irq_info)

Ahem.
> +return -EINVAL;
> +
> +return info->ops->get_irq_info(d, request, irq_info);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/viommu.h b/xen/include/asm-arm/viommu.h
> index 6a81ecb..6ce4e0a 100644
> --- a/xen/include/asm-arm/viommu.h
> +++ b/xen/include/asm-arm/viommu.h
> @@ -22,6 +22,10 @@
>  
>  #include 
>  
> +struct irq_remapping_info
> +{
> +};
> +
>  struct irq_remapping_request
>  {
>  };
> diff --git a/xen/include/asm-x86/viommu.h b/xen/include/asm-x86/viommu.h
> index b6e01a5..43e446e 100644
> --- a/xen/include/asm-x86/viommu.h
> +++ b/xen/include/asm-x86/viommu.h
> @@ -23,6 +23,14 @@
>  #include 
>  #include 
>  
> +struct irq_remapping_info
> +{
> +u8  vector;
> +u32 dest;
> +u32 dest_mode:1;
> +u32 delivery_mode:3;
> +};
> +
>  struct irq_remapping_request
>  {
>  u8 type;
> diff --git a/xen/include/xen/viommu.h b/xen/include/xen/viommu.h
> index 246b29d..d733012 100644
> --- a/xen/include/xen/viommu.h
> +++ b/xen/include/xen/viommu.h
> @@ -42,6 +42,8 @@ struct viommu_ops {
>  int (*destroy)(struct viommu *viommu);
>  int (*handle_irq_request)(struct domain *d,
>struct irq_remapping_request *request);
> +int (*get_irq_info)(struct domain *d, struct irq_remapping_request *request,
> +struct irq_remapping_info *info);
>  };
>  
>  struct viommu_info {
> @@ -56,6 +58,8 @@ int viommu_destroy(struct domain *d, u32 viommu_id);
>  u64 viommu_query_caps(struct domain *d);
>  int viommu_handle_irq_request(struct domain *d,
>struct irq_remapping_request *request);
> +int viommu_get_irq_info(struct domain *d, struct irq_remapping_request *request,
> +struct irq_remapping_info *irq_info);
>  
>  #endif /* __XEN_VIOMMU_H__ */
>  
> -- 
> 1.8.3.1
> 
> 



Re: [Xen-devel] [RFC PATCH 3/23] VIOMMU: Add irq request callback to deal with irq remapping

2017-04-17 Thread Konrad Rzeszutek Wilk
On Fri, Mar 17, 2017 at 07:27:03PM +0800, Lan Tianyu wrote:
> This patch is to add irq request callback for platform implementation
> to deal with irq remapping request.
> 
> Signed-off-by: Lan Tianyu 
> ---
>  xen/common/viommu.c  | 11 +++
>  xen/include/asm-arm/viommu.h |  4 
>  xen/include/asm-x86/viommu.h | 15 +++
>  xen/include/xen/viommu.h |  8 
>  4 files changed, 38 insertions(+)
> 
> diff --git a/xen/common/viommu.c b/xen/common/viommu.c
> index 4c1c788..62c66db 100644
> --- a/xen/common/viommu.c
> +++ b/xen/common/viommu.c
> @@ -87,6 +87,17 @@ u64 viommu_query_caps(struct domain *d)
>  return info->ops->query_caps(d);
>  }
>  
> +int viommu_handle_irq_request(struct domain *d,
> +struct irq_remapping_request *request)
> +{
> +struct viommu_info *info = &d->viommu;
> +
> +if ( !info || !info->ops || !info->ops->handle_irq_request)

You are missing an space at the end.
> +return -EINVAL;
> +
> +return info->ops->handle_irq_request(d, request);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/viommu.h b/xen/include/asm-arm/viommu.h
> index ef6a60b..6a81ecb 100644
> --- a/xen/include/asm-arm/viommu.h
> +++ b/xen/include/asm-arm/viommu.h
> @@ -22,6 +22,10 @@
>  
>  #include 
>  
> +struct irq_remapping_request
> +{
> +};
> +
>  static inline const struct viommu_ops *viommu_get_ops(void)
>  {
>  return NULL;
> diff --git a/xen/include/asm-x86/viommu.h b/xen/include/asm-x86/viommu.h
> index efb435f..b6e01a5 100644
> --- a/xen/include/asm-x86/viommu.h
> +++ b/xen/include/asm-x86/viommu.h
> @@ -23,6 +23,21 @@
>  #include 
>  #include 
>  
> +struct irq_remapping_request
> +{
> +u8 type;
> +u16 source_id;
> +union {
> +/* MSI */
> +struct {
> +u64 addr;
> +u32 data;
> +} msi;
> +/* Redirection Entry in IOAPIC */
> +u64 rte;
> +} msg;
> +};

Will this work right? As in with the default padding and such?
> +
>  static inline const struct viommu_ops *viommu_get_ops(void)
>  {
>  return NULL;
> diff --git a/xen/include/xen/viommu.h b/xen/include/xen/viommu.h
> index a0abbdf..246b29d 100644
> --- a/xen/include/xen/viommu.h
> +++ b/xen/include/xen/viommu.h
> @@ -24,6 +24,10 @@
>  
>  #define NR_VIOMMU_PER_DOMAIN 1
>  
> +/* IRQ request type */
> +#define VIOMMU_REQUEST_IRQ_MSI  0
> +#define VIOMMU_REQUEST_IRQ_APIC 1

What is this used for?
> +
>  struct viommu {
>  u64 base_address;
>  u64 length;
> @@ -36,6 +40,8 @@ struct viommu_ops {
>  u64 (*query_caps)(struct domain *d);
>  int (*create)(struct domain *d, struct viommu *viommu);
>  int (*destroy)(struct viommu *viommu);
> +int (*handle_irq_request)(struct domain *d,
> +  struct irq_remapping_request *request);
>  };
>  
>  struct viommu_info {
> @@ -48,6 +54,8 @@ int viommu_init_domain(struct domain *d);
>  int viommu_create(struct domain *d, u64 base_address, u64 length, u64 caps);
>  int viommu_destroy(struct domain *d, u32 viommu_id);
>  u64 viommu_query_caps(struct domain *d);
> +int viommu_handle_irq_request(struct domain *d,
> +  struct irq_remapping_request *request);
>  
>  #endif /* __XEN_VIOMMU_H__ */
>  
> -- 
> 1.8.3.1
> 
> 



Re: [Xen-devel] [RFC PATCH 2/23] DMOP: Introduce new DMOP commands for vIOMMU support

2017-04-17 Thread Konrad Rzeszutek Wilk
On Fri, Mar 17, 2017 at 07:27:02PM +0800, Lan Tianyu wrote:
> This patch introduces create, destroy and query-capabilities
> commands for vIOMMU. The vIOMMU layer will deal with requests and call
> the arch vIOMMU ops.
> 
> Signed-off-by: Lan Tianyu 
> ---
>  xen/arch/x86/hvm/dm.c  | 29 +
>  xen/include/public/hvm/dm_op.h | 39 +++
>  2 files changed, 68 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 2122c45..2b28f70 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -491,6 +491,35 @@ static int dm_op(domid_t domid,
>  break;
>  }
>  
> +case XEN_DMOP_create_viommu:
> +{
> +struct xen_dm_op_create_viommu *data =
> +_viommu;
> +
> +rc = viommu_create(d, data->base_address, data->length, 
> data->capabilities);
> +if (rc >= 0) {

The style guide is to have a space here and the { on a new line.

> +data->viommu_id = rc;
> +rc = 0;
> +}
> +break;
> +}

Newline here..


> +case XEN_DMOP_destroy_viommu:
> +{
> +const struct xen_dm_op_destroy_viommu *data =
> +_viommu;
> +
> +rc = viommu_destroy(d, data->viommu_id);
> +break;
> +}

Ahem?
> +case XEN_DMOP_query_viommu_caps:
> +{
> +struct xen_dm_op_query_viommu_caps *data =
> +_viommu_caps;
> +
> +data->caps = viommu_query_caps(d);
> +rc = 0;
> +break;
> +}

And here.
>  default:
>  rc = -EOPNOTSUPP;
>  break;
> diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
> index f54cece..b8c7359 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -318,6 +318,42 @@ struct xen_dm_op_inject_msi {
>  uint64_aligned_t addr;
>  };
>  
> +/*
> + * XEN_DMOP_create_viommu: Create vIOMMU device.
> + */
> +#define XEN_DMOP_create_viommu 15
> +
> +struct xen_dm_op_create_viommu {
> +/* IN - MMIO base address of vIOMMU */

Any limit? Can it be zero?

> +uint64_t base_address;
> +/* IN - Length of MMIO region */

Any restrictions? Can it be say 2 bytes? Or is this in page-size granularity?

> +uint64_t length;
> +/* IN - Capabilities with which we want to create */
> +uint64_t capabilities;

That sounds like some form of flags?

> +/* OUT - vIOMMU identity */
> +uint32_t viommu_id;
> +};
> +
> +/*
> + * XEN_DMOP_destroy_viommu: Destroy vIOMMU device.
> + */
> +#define XEN_DMOP_destroy_viommu 16
> +
> +struct xen_dm_op_destroy_viommu {
> +/* OUT - vIOMMU identity */

Out? Not in?

> +uint32_t viommu_id;
> +};
> +
> +/*
> + * XEN_DMOP_q_viommu: Query vIOMMU capabilities.
> + */
> +#define XEN_DMOP_query_viommu_caps 17
> +
> +struct xen_dm_op_query_viommu_caps {
> +/* OUT - vIOMMU Capabilities*/

Don't you need to also mention which vIOMMU? As you
could have potentially many of them?

> +uint64_t caps;
> +};
> +
>  struct xen_dm_op {
>  uint32_t op;
>  uint32_t pad;
> @@ -336,6 +372,9 @@ struct xen_dm_op {
>  struct xen_dm_op_set_mem_type set_mem_type;
>  struct xen_dm_op_inject_event inject_event;
>  struct xen_dm_op_inject_msi inject_msi;
> +struct xen_dm_op_create_viommu create_viommu;
> +struct xen_dm_op_destroy_viommu destroy_viommu;
> +struct xen_dm_op_query_viommu_caps query_viommu_caps;
>  } u;
>  };
>  
> -- 
> 1.8.3.1
> 
> 



Re: [Xen-devel] Updates on the project

2017-04-17 Thread Gayathri Menakath
Hello Jesus,

I would like to thank you for the comments. I will look into the part where
it uploads the data to the Elasticsearch index and the jwzthreading.py. I
believe that I had mentioned in one of the IRC chats that I would be
reusing the jwzthreading.py. I am sorry if I hadn't mentioned it. However,
should I be making any changes to it?

On Mon, Apr 17, 2017 at 4:48 AM, Jesus M. Gonzalez-Barahona <
j...@bitergia.com> wrote:

> On Sat, 2017-04-15 at 20:08 +0530, Gayathri Menakath wrote:
> > Hello Jesus,
> >
> > As my periodical exams were going on I could not spend much time on
> > writing the tests (2nd microtask). I will resume the work soon and
> > will send the updates. Along with my proposal, I have uploaded an
> > official letter from my university which states that I would not be
> > having any academic commitments for at least 8 weeks during the
> > coding period. I hope with that I would be able to meet the
> > eligibility criteria for Outreachy. I had sent a copy of the letter
> > to the Outreachy coordinators and Lars too.
>
> Thanks a lot for the update.
>
> > Meanwhile, may I know if you had reviewed the first microtask's code?
>
> Yes, I did. Some comments:
>
> * I've tested it with some mboxes, and it seems to work pretty well. A bit
> weird that you have to produce a JSON file, and then upload it to ES,
> instead of just uploading it to ES directly. But otherwise, it seems to
> work with the tests I did.
>
> * However, you had hardwired a path in jwzthreading.py, with (I
> presume) the directory where you store the mboxes. After changing it to
> mine, it worked like a charm.
>
> * BTW, I don't remember you mentioning that you were using
> jwzthreading.py. That's not bad (reusing code that works is always a
> good option to consider), but it makes the exercise different, since the
> implementation of the threading algorithm is in it.
>
> * The readme.md explains well how to run the scripts.
>
> Saludos,
>
> Jesus.
>
> > --
> > Yours Sincerely,
> > Gayathri.P.Menakath
> > B-Tech 3rd year,
> > Amrita University
> > blog | Github
> --
> Bitergia: http://bitergia.com
> /me at Twitter: https://twitter.com/jgbarah
>
>


-- 
Yours Sincerely,
Gayathri.P.Menakath
B-Tech 3rd year,
Amrita University 
blog  | Github



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 01:57:10PM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 01:47:47PM +0800, Chao Gao wrote:
>> On Mon, Apr 17, 2017 at 01:21:01PM +0100, Roger Pau Monné wrote:
>> >On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
>> >[...]
>> >> It works. I can test for you when you send out a formal patch.
>> >
>> >Thanks for the testing, will send formal patch shortly.
>> >
>> >Have you been able to reproduce the IOMMU issue with that, or you just hit 
>> >the
>> >panic at the end of PVH Dom0 build?
>> 
>> No, I haven't. The output is something like ELFxxx not found, I think, due to
>> the lack of a PVH dom0 kernel. As mentioned before, my platform is Skylake.
>
>Right, if you get to the ELF stuff it means the IOMMU has been initialized
>successfully. Skylake is post-haswell, so I don't think it's going to exhibit
>those issues. Is there any chance you can test on something older
>(pre-haswell?).

I am not sure that I can find a pre-haswell machine. Will try later.

Thanks
Chao



[Xen-devel] [ovmf baseline-only test] 71199: all pass

2017-04-17 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71199 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71199/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 51a1db9b24d850c785d240da599c4bf9ba1c0fd3
baseline version:
 ovmf 0c9fc4b1679946f59efa1aaf11e2e9e1acab303d

Last test of basis    71194  2017-04-14 16:48:53 Z    2 days
Testing same since    71199  2017-04-17 10:49:02 Z    0 days    1 attempts


People who touched revisions under test:
  Ruiyu Ni 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 51a1db9b24d850c785d240da599c4bf9ba1c0fd3
Author: Ruiyu Ni 
Date:   Tue Apr 11 10:07:43 2017 +0800

MdeModulePkg/BootManagerMenu: Add assertion to indicate no DIV by 0

BootMenuSelectItem() contains code to DIV BootMenuData->ItemCount.
When BootMenuData->ItemCount can be 0, the DIV operation may
trigger CPU exception.
But in logic, this case won't happen. So add assertion to indicate
it.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni 
Reviewed-by: Hao A Wu 



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 01:47:47PM +0800, Chao Gao wrote:
> On Mon, Apr 17, 2017 at 01:21:01PM +0100, Roger Pau Monné wrote:
> >On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
> >[...]
> >> It works. I can test for you when you send out a formal patch.
> >
> >Thanks for the testing, will send formal patch shortly.
> >
> >Have you been able to reproduce the IOMMU issue with that, or you just hit 
> >the
> >panic at the end of PVH Dom0 build?
> 
> No, I haven't. The output is something like ELFxxx not found, I think, due to
> the lack of a PVH dom0 kernel. As mentioned before, my platform is Skylake.

Right, if you get to the ELF stuff it means the IOMMU has been initialized
successfully. Skylake is post-haswell, so I don't think it's going to exhibit
those issues. Is there any chance you can test on something older
(pre-haswell?).

Thanks, Roger.



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 01:21:01PM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
>[...]
>> It works. I can test for you when you send out a formal patch.
>
>Thanks for the testing, will send formal patch shortly.
>
>Have you been able to reproduce the IOMMU issue with that, or you just hit the
>panic at the end of PVH Dom0 build?

No, I haven't. The output is something like ELFxxx not found, I think, due to
the lack of a PVH dom0 kernel. As mentioned before, my platform is Skylake.

Thanks
Chao



[Xen-devel] [TEST] 4.9-rc1 testing results

2017-04-17 Thread Konrad Rzeszutek Wilk
Hey,

Over the week I built from scratch and installed Xen 4.9-rc1 on
Fedora Core 20.

I had one issue that I haven't yet dug into completely: the
xenstored.service would not start. Invoking it manually made it
work.

But Fedora Core 20 is ancient, and I am planning to update that machine
to Fedora Core 25 so we can ignore that for now.

In terms of functionality - I am running PV and HVM guests.
The HVM and PV guests are doing PCI passthrough - and it all works!

(The PCI passthrough is via pci=XYZ parameter).

Thanks!

P.S.
SuperMicro X10SAE, E3-1245v3 CPU.



[Xen-devel] [linux-4.9 test] 107482: regressions - FAIL

2017-04-17 Thread osstest service owner
flight 107482 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107482/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   6 xen-boot fail REGR. vs. 107358

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt  9 debian-install   fail in 107477 pass in 107482
 test-amd64-amd64-i386-pvgrub 21 leak-check/check fail in 107477 pass in 107482
 test-armhf-armhf-xl-cubietruck 15 guest-start/debian.repeat fail in 107477 
pass in 107482
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail pass in 107477

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl   6 xen-boot fail  like 107358
 test-armhf-armhf-xl-xsm   6 xen-boot fail  like 107358
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 107358
 test-armhf-armhf-xl-rtds  6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt-raw  6 xen-boot fail  like 107358
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail like 107358
 test-armhf-armhf-xl-vhd   6 xen-boot fail  like 107358
 test-armhf-armhf-libvirt  6 xen-boot fail  like 107358

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-arm64-arm64-xl-rtds 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1 fail never pass
 test-amd64-amd64-qemuu-nested-amd 13 xen-boot/l1   fail never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-armhf-armhf-xl-arndale   6 xen-boot fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass

version targeted for testing:
 linux  cf2586e60ede2217d7f53a0585e27e1cca693600
baseline version:
 linux  37feaf8095d352014555b82adb4a04609ca17d3f

Last test of basis   107358  2017-04-10 19:42:52 Z6 days
Testing same since   107396  2017-04-12 11:15:19 Z5 days   10 attempts


People who touched revisions under test:
  Adrian Hunter 
  Alan Stern 
  Alberto Aguirre 
  Alex Deucher 
  Alex Williamson 
  Alex Wood 
  Alexander Polakov 
  Alexander Polyakov 
  Andrew Morton 
  Andrey Smetanin 
  Andy Gross 
  Andy Shevchenko 
  Arend van Spriel 
  Arnd Bergmann 
  Aurelien Aptel 
  Baoyou Xie 
  Bartosz Golaszewski 
  Bastien Nocera 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
[...]
> It works. I can test for you when you send out a formal patch.

Thanks for the testing, will send formal patch shortly.

Have you been able to reproduce the IOMMU issue with that, or you just hit the
panic at the end of PVH Dom0 build?

Roger.



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 11:38:33AM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 10:49:45AM +0800, Chao Gao wrote:
>> On Mon, Apr 17, 2017 at 09:38:54AM +0100, Roger Pau Monné wrote:
>> >On Mon, Apr 17, 2017 at 09:03:12AM +0800, Chao Gao wrote:
>> >> On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
>> >> >On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
>> >> >> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>> >> >> >Hello,
>> >> >> >
>> >> >> >Although PVHv2 Dom0 is not yet finished, I've been trying the current 
>> >> >> >code on
>> >> >> >different hardware, and found that with pre-Haswell Intel hardware 
>> >> >> >PVHv2 Dom0
>> >> >> >completely freezes the box when calling iommu_hwdom_init in 
>> >> >> >dom0_construct_pvh.
>> >> >> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or 
>> >> >> >newer).
>> >> >> >
>> >> >> >I'm not able to debug that in any meaningful way because the box 
>> >> >> >seems to lock
>> >> >> >up completely, even the watchdog NMI stops working. Here is the boot 
>> >> >> >log, up to
>> >> >> >the point where it freezes:
>> >> >> 
>> >> >> I tried "dom0=pvh" on my Skylake. An assertion failed. Is it a
>> >> >> software bug?
>> >> >> 
>> >
>> >It seems like we are not properly adding/accounting the vIO APICs, but I 
>> >cannot
>> >really see how. I have another patch for you to try below.
>> >
>> >Thanks, Roger.
>> >
>> >---8<---
>> >diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>> >index 527ac2aadd..40075e2756 100644
>> >--- a/xen/arch/x86/hvm/vioapic.c
>> >+++ b/xen/arch/x86/hvm/vioapic.c
>> >@@ -610,11 +610,15 @@ int vioapic_init(struct domain *d)
>> >xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
>> > return -ENOMEM;
>> > 
>> >+printk("Adding %u vIO APICs\n", nr_vioapics);
>> >+
>> > for ( i = 0; i < nr_vioapics; i++ )
>> > {
>> > unsigned int nr_pins = is_hardware_domain(d) ? 
>> > nr_ioapic_entries[i] :
>> > ARRAY_SIZE(domain_vioapic(d, 0)->domU.redirtbl);
>> > 
>> >+printk("vIO APIC %u has %u pins\n", i, nr_pins);
>> >+
>> > if ( (domain_vioapic(d, i) =
>> >   xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
>> > {
>> >@@ -623,8 +627,12 @@ int vioapic_init(struct domain *d)
>> > }
>> > domain_vioapic(d, i)->nr_pins = nr_pins;
>> > nr_gsis += nr_pins;
>> >+printk("nr_gsis: %u\n", nr_gsis);
>> > }
>> > 
>> >+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u 
>> >highest_gsi: %u\n",
>> >+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, 
>> >highest_gsi());
>> >+
>> > ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
>> > 
>> > d->arch.hvm_domain.nr_vioapics = nr_vioapics;
>> >
>> 
>> Please Cc or To me. Are there holes in the physical IOAPICs' GSI ranges?
>
>That's weird, my MUA (Mutt) seems to automatically remove your address from the
>"To:" field. I have no idea why it does that.
>
>So yes, your box has a GSI gap which is not handled by any IO APIC. TBH, I
>didn't even know that was possible. In any case, the patch below should solve it.
>
>---8<---
>commit f52d05fca03440d771eb56077c9d60bb630eb423
>diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>index 5157db7a4e..ec87a97651 100644
>--- a/xen/arch/x86/hvm/vioapic.c
>+++ b/xen/arch/x86/hvm/vioapic.c
>@@ -64,37 +64,23 @@ static struct hvm_vioapic *addr_vioapic(const struct 
>domain *d,
> struct hvm_vioapic *gsi_vioapic(const struct domain *d, unsigned int gsi,
> unsigned int *pin)
> {
>-unsigned int i, base_gsi = 0;
>+unsigned int i;
> 
> for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
> {
> struct hvm_vioapic *vioapic = domain_vioapic(d, i);
> 
>-if ( gsi >= base_gsi && gsi < base_gsi + vioapic->nr_pins )
>+if ( gsi >= vioapic->base_gsi &&
>+ gsi < vioapic->base_gsi + vioapic->nr_pins )
> {
>-*pin = gsi - base_gsi;
>+*pin = gsi - vioapic->base_gsi;
> return vioapic;
> }
>-
>-base_gsi += vioapic->nr_pins;
> }
> 
> return NULL;
> }
> 
>-static unsigned int base_gsi(const struct domain *d,
>- const struct hvm_vioapic *vioapic)
>-{
>-unsigned int nr_vioapics = d->arch.hvm_domain.nr_vioapics;
>-unsigned int base_gsi = 0, i = 0;
>-const struct hvm_vioapic *tmp;
>-
>-while ( i < nr_vioapics && (tmp = domain_vioapic(d, i++)) != vioapic )
>-base_gsi += tmp->nr_pins;
>-
>-return base_gsi;
>-}
>-
> static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
> {
> uint32_t result = 0;
>@@ -180,7 +166,7 @@ static void vioapic_write_redirent(
> struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> union vioapic_redir_entry *pent, ent;
> int unmasked = 0;
>-unsigned int gsi = base_gsi(d, vioapic) + idx;
>+

Re: [Xen-devel] [RFC PATCH 5/23] Tools/libxc: Add viommu operations in libxc

2017-04-17 Thread Lan Tianyu
On 2017-04-17 19:08, Wei Liu wrote:
> On Fri, Apr 14, 2017 at 11:38:15PM +0800, Lan, Tianyu wrote:
>> Hi Paul:
>>  Sorry for later response.
>>
>> On 3/31/2017 3:57 AM, Chao Gao wrote:
>>> On Wed, Mar 29, 2017 at 09:08:06AM +, Paul Durrant wrote:
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Chao Gao
> Sent: 29 March 2017 01:40
> To: Wei Liu 
> Cc: Lan Tianyu ; Kevin Tian ;
> Ian Jackson ; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 5/23] Tools/libxc: Add viommu
> operations in libxc
>
> Tianyu is on vacation this two weeks, so I will try to address
> some comments on this series.
>
> On Tue, Mar 28, 2017 at 05:24:03PM +0100, Wei Liu wrote:
>> On Fri, Mar 17, 2017 at 07:27:05PM +0800, Lan Tianyu wrote:
>>> From: Chao Gao 
>>>
>>> In previous patch, we introduce a common vIOMMU layer. In our design,
>>> we create/destroy vIOMMU through DMOP interface instead of creating
> it
>>> according to a config flag of the domain. It makes it possible
>>> to create vIOMMU in device model or in tool stack.
>>>

 I've not been following this closely so apologies if this has already been 
 asked...

 Why would you need to create a vIOMMU instance in an external device model.
 Since the toolstack should be in control of the device model configuration 
 why would it not know in advance that one was required?
>>>
>>> I assume your question is why we don't create a vIOMMU instance via 
>>> hypercall in toolstack.
>>> I think creating in toolstack is also ok and is easier to be reused by pvh.
>>>
>>> If Tianyu has no concern about this, will move this part to toolstack.
>>
>> We can move create/destroy vIOMMU in the tool stack but we still need to add
>> such dummy vIOMMU device model in Qemu to pass virtual device's DMA request
>> into the Xen hypervisor. Qemu is required to use the DMOP hypercall and the
>> tool stack may use the domctl hypercall. vIOMMU hypercalls will be divided into two parts.
>>
>> Domctl:
>>  create, destroy and query.
>> DMOP:
>>  vDev's DMA related operations.
>>
>> Is this OK?
>>
> 
> Why are they divided into two libraries? Can't they be in DMOP at the
> same time?

Yes, we can use DMOP for all vIOMMU hypercalls if it's necessary to keep
a unified vIOMMU hypercall type. In theory, DMOP is dedicated to Qemu,
but we can also use it in the tool stack. If we move the create, destroy
and query operations to the tool stack, it isn't necessary to use DMOP
for them since only the tool stack will call them. This is why I said we
could use domctl for these operations. Neither way affects the
implementation. Which one is better from your view? :)


> 
> Just asking questions, not suggesting it should be done one way or the
> other.  Sorry if there are some obvious reasons that I missed.
> 
> Wei.
> 


-- 
Best regards
Tianyu Lan



[Xen-devel] [linux-linus test] 107480: regressions - trouble: broken/fail/pass

2017-04-17 Thread osstest service owner
flight 107480 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107480/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm   3 host-install(3) broken REGR. vs. 59254
 test-armhf-armhf-xl-credit2  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-xl-multivcpu 11 guest-start  fail REGR. vs. 59254
 test-armhf-armhf-xl-cubietruck 11 guest-start fail REGR. vs. 59254
 test-armhf-armhf-xl  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-libvirt 11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-xl-arndale  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-libvirt-xsm 11 guest-start   fail REGR. vs. 59254

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start   fail REGR. vs. 59254
 test-amd64-amd64-xl-rtds  9 debian-installfail REGR. vs. 59254
 test-armhf-armhf-xl-vhd   9 debian-di-install   fail baseline untested
 test-armhf-armhf-libvirt-raw  9 debian-di-install   fail baseline untested
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 59254
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 59254
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 59254
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 59254

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-multivcpu 11 guest-start  fail  never pass
 test-arm64-arm64-xl-xsm  11 guest-start  fail   never pass
 test-arm64-arm64-libvirt-xsm 11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 11 guest-start  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-arm64-arm64-xl  11 guest-start  fail   never pass
 test-arm64-arm64-xl-credit2  11 guest-start  fail   never pass
 test-arm64-arm64-xl-rtds 11 guest-start  fail   never pass
 test-arm64-arm64-libvirt-qcow2  9 debian-di-installfail never pass

version targeted for testing:
 linux  4f7d029b9bf009fbee76bb10c0c4351a1870d2f3
baseline version:
 linux  45820c294fe1b1a9df495d57f40585ef2d069a39

Last test of basis       59254  2015-07-09 04:20:48 Z  648 days
Failing since 59348  2015-07-10 04:24:05 Z  647 days  393 attempts
Testing same since   107480  2017-04-17 00:47:59 Z0 days1 attempts


8168 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumprun  pass
 build-i386-rumprun   pass
 test-amd64-amd64-xl  pass
 test-arm64-arm64-xl  fail
 test-armhf-armhf-xl  fail
 test-amd64-i386-xl   pass
 

Re: [Xen-devel] [RFC PATCH 5/23] Tools/libxc: Add viommu operations in libxc

2017-04-17 Thread Wei Liu
On Fri, Apr 14, 2017 at 11:38:15PM +0800, Lan, Tianyu wrote:
> Hi Paul:
>   Sorry for later response.
> 
> On 3/31/2017 3:57 AM, Chao Gao wrote:
> > On Wed, Mar 29, 2017 at 09:08:06AM +, Paul Durrant wrote:
> > > > -Original Message-
> > > > From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> > > > Chao Gao
> > > > Sent: 29 March 2017 01:40
> > > > To: Wei Liu 
> > > > Cc: Lan Tianyu ; Kevin Tian 
> > > > ;
> > > > Ian Jackson ; xen-devel@lists.xen.org
> > > > Subject: Re: [Xen-devel] [RFC PATCH 5/23] Tools/libxc: Add viommu
> > > > operations in libxc
> > > > 
> > > > Tianyu is on vacation this two weeks, so I will try to address
> > > > some comments on this series.
> > > > 
> > > > On Tue, Mar 28, 2017 at 05:24:03PM +0100, Wei Liu wrote:
> > > > > On Fri, Mar 17, 2017 at 07:27:05PM +0800, Lan Tianyu wrote:
> > > > > > From: Chao Gao 
> > > > > > 
> > > > > > In previous patch, we introduce a common vIOMMU layer. In our 
> > > > > > design,
> > > > > > we create/destroy vIOMMU through DMOP interface instead of creating
> > > > it
> > > > > > according to a config flag of the domain. It makes it possible
> > > > > > to create vIOMMU in device model or in tool stack.
> > > > > > 
> > > 
> > > I've not been following this closely so apologies if this has already 
> > > been asked...
> > > 
> > > Why would you need to create a vIOMMU instance in an external device 
> > > model.
> > > Since the toolstack should be in control of the device model 
> > > configuration why would it not know in advance that one was required?
> > 
> > I assume your question is why we don't create a vIOMMU instance via 
> > hypercall in toolstack.
> > I think creating in toolstack is also ok and is easier to be reused by pvh.
> > 
> > If Tianyu has no concern about this, will move this part to toolstack.
> 
> We can move create/destroy vIOMMU in the tool stack but we still need to add
> such dummy vIOMMU device model in Qemu to pass virtual device's DMA request
> into the Xen hypervisor. Qemu is required to use the DMOP hypercall and the
> tool stack may use the domctl hypercall. vIOMMU hypercalls will be divided into two parts.
> 
> Domctl:
>   create, destroy and query.
> DMOP:
>   vDev's DMA related operations.
> 
> Is this OK?
> 

Why are they divided into two libraries? Can't they be in DMOP at the
same time?

Just asking questions, not suggesting it should be done one way or the
other.  Sorry if there are some obvious reasons that I missed.

Wei.



[Xen-devel] [xen-unstable test] 107481: tolerable FAIL

2017-04-17 Thread osstest service owner
flight 107481 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107481/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-localmigrate/x10 fail pass in 107468

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 107468
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 107468
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 107468
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 107468
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 107468
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 107468
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 107468
 test-amd64-amd64-xl-rtds 9 debian-install fail like 107468

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 build-arm64-pvops 5 kernel-build fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail never pass

version targeted for testing:
 xen  e412c03be25dee8202a440b973561afd8ab6d868
baseline version:
 xen  e412c03be25dee8202a440b973561afd8ab6d868

Last test of basis   107481  2017-04-17 01:57:14 Z    0 days
Testing same since        0  1970-01-01 00:00:00 Z17273 days    0 attempts

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 10:49:45AM +0800, Chao Gao wrote:
> On Mon, Apr 17, 2017 at 09:38:54AM +0100, Roger Pau Monné wrote:
> >On Mon, Apr 17, 2017 at 09:03:12AM +0800, Chao Gao wrote:
> >> On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
> >> >On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
> >> >> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
> >> >> >Hello,
> >> >> >
> >> >> >Although PVHv2 Dom0 is not yet finished, I've been trying the current 
> >> >> >code on
> >> >> >different hardware, and found that with pre-Haswell Intel hardware 
> >> >> >PVHv2 Dom0
> >> >> >completely freezes the box when calling iommu_hwdom_init in 
> >> >> >dom0_construct_pvh.
> >> >> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or 
> >> >> >newer).
> >> >> >
> >> >> >I'm not able to debug that in any meaningful way because the box seems 
> >> >> >to lock
> >> >> >up completely, even the watchdog NMI stops working. Here is the boot 
> >> >> >log, up to
> >> >> >the point where it freezes:
> >> >> 
> >> >> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software 
> >> >> bug?
> >> >> 
> >
> >It seems like we are not properly adding/accounting the vIO APICs, but I 
> >cannot
> >really see how. I have another patch for you to try below.
> >
> >Thanks, Roger.
> >
> >---8<---
> > diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> >index 527ac2aadd..40075e2756 100644
> >--- a/xen/arch/x86/hvm/vioapic.c
> >+++ b/xen/arch/x86/hvm/vioapic.c
> >@@ -610,11 +610,15 @@ int vioapic_init(struct domain *d)
> >xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
> > return -ENOMEM;
> > 
> >+printk("Adding %u vIO APICs\n", nr_vioapics);
> >+
> > for ( i = 0; i < nr_vioapics; i++ )
> > {
> > unsigned int nr_pins = is_hardware_domain(d) ? nr_ioapic_entries[i] 
> > :
> > ARRAY_SIZE(domain_vioapic(d, 0)->domU.redirtbl);
> > 
> >+printk("vIO APIC %u has %u pins\n", i, nr_pins);
> >+
> > if ( (domain_vioapic(d, i) =
> >   xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
> > {
> >@@ -623,8 +627,12 @@ int vioapic_init(struct domain *d)
> > }
> > domain_vioapic(d, i)->nr_pins = nr_pins;
> > nr_gsis += nr_pins;
> >+printk("nr_gsis: %u\n", nr_gsis);
> > }
> > 
> >+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u 
> >highest_gsi: %u\n",
> >+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, highest_gsi());
> >+
> > ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
> > 
> > d->arch.hvm_domain.nr_vioapics = nr_vioapics;
> >
> 
> Please Cc or To me.  Are there holes in the GSI ranges of the physical IOAPICs?

That's weird, my MUA (Mutt) seems to automatically remove your address from the
"To:" field. I have no idea why it does that.

So yes, your box has a GSI gap which is not handled by any IO APIC. TBH, I
didn't even know that was possible. In any case, the patch below should solve it.

---8<---
commit f52d05fca03440d771eb56077c9d60bb630eb423
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 5157db7a4e..ec87a97651 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -64,37 +64,23 @@ static struct hvm_vioapic *addr_vioapic(const struct domain *d,
 struct hvm_vioapic *gsi_vioapic(const struct domain *d, unsigned int gsi,
 unsigned int *pin)
 {
-unsigned int i, base_gsi = 0;
+unsigned int i;
 
 for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
 {
 struct hvm_vioapic *vioapic = domain_vioapic(d, i);
 
-if ( gsi >= base_gsi && gsi < base_gsi + vioapic->nr_pins )
+if ( gsi >= vioapic->base_gsi &&
+ gsi < vioapic->base_gsi + vioapic->nr_pins )
 {
-*pin = gsi - base_gsi;
+*pin = gsi - vioapic->base_gsi;
 return vioapic;
 }
-
-base_gsi += vioapic->nr_pins;
 }
 
 return NULL;
 }
 
-static unsigned int base_gsi(const struct domain *d,
- const struct hvm_vioapic *vioapic)
-{
-unsigned int nr_vioapics = d->arch.hvm_domain.nr_vioapics;
-unsigned int base_gsi = 0, i = 0;
-const struct hvm_vioapic *tmp;
-
-while ( i < nr_vioapics && (tmp = domain_vioapic(d, i++)) != vioapic )
-base_gsi += tmp->nr_pins;
-
-return base_gsi;
-}
-
 static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
 {
 uint32_t result = 0;
@@ -180,7 +166,7 @@ static void vioapic_write_redirent(
 struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 union vioapic_redir_entry *pent, ent;
 int unmasked = 0;
-unsigned int gsi = base_gsi(d, vioapic) + idx;
+unsigned int gsi = vioapic->base_gsi + idx;
 
 spin_lock(&d->arch.hvm_domain.irq_lock);
 
@@ -340,7 +326,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 struct domain 
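The structural change in the (truncated) patch above, storing a `base_gsi` field in each vIO APIC instead of accumulating pin counts at lookup time, can be modeled outside Xen. The data below is an illustrative Python stand-in (not Xen code), mirroring the IOAPIC GSI layout reported elsewhere in this thread; with an explicit base, a GSI that falls in a hole between IO APICs simply maps to nothing instead of being mis-attributed by a running accumulator:

```python
# Illustrative stand-in for struct hvm_vioapic after the patch: each entry is
# (base_gsi, nr_pins), taken from the boot log in this thread (note the
# unhandled 56-71 range).
vioapics = [(0, 24), (24, 8), (32, 8), (40, 8), (48, 8),
            (72, 8), (80, 8), (88, 8), (96, 8)]

def gsi_vioapic(gsi):
    """Return (ioapic_index, pin) owning gsi, or None if gsi is in a hole."""
    for i, (base, pins) in enumerate(vioapics):
        if base <= gsi < base + pins:
            return i, gsi - base
    return None

print(gsi_vioapic(25))   # (1, 1): second IO APIC, pin 1
print(gsi_vioapic(60))   # None: 60 lies in the 56-71 hole
```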

[Xen-devel] [ovmf test] 107484: all pass - PUSHED

2017-04-17 Thread osstest service owner
flight 107484 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107484/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 51a1db9b24d850c785d240da599c4bf9ba1c0fd3
baseline version:
 ovmf 0c9fc4b1679946f59efa1aaf11e2e9e1acab303d

Last test of basis   107447  2017-04-14 09:11:40 Z3 days
Testing same since   107484  2017-04-17 08:16:13 Z0 days1 attempts


People who touched revisions under test:
  Ruiyu Ni 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=51a1db9b24d850c785d240da599c4bf9ba1c0fd3
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 
51a1db9b24d850c785d240da599c4bf9ba1c0fd3
+ branch=ovmf
+ revision=51a1db9b24d850c785d240da599c4bf9ba1c0fd3
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.8-testing
+ '[' x51a1db9b24d850c785d240da599c4bf9ba1c0fd3 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 09:38:54AM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 09:03:12AM +0800, Chao Gao wrote:
>> On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
>> >On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
>> >> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>> >> >Hello,
>> >> >
>> >> >Although PVHv2 Dom0 is not yet finished, I've been trying the current 
>> >> >code on
>> >> >different hardware, and found that with pre-Haswell Intel hardware PVHv2 
>> >> >Dom0
>> >> >completely freezes the box when calling iommu_hwdom_init in 
>> >> >dom0_construct_pvh.
>> >> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or 
>> >> >newer).
>> >> >
>> >> >I'm not able to debug that in any meaningful way because the box seems 
>> >> >to lock
>> >> >up completely, even the watchdog NMI stops working. Here is the boot 
>> >> >log, up to
>> >> >the point where it freezes:
>> >> 
>> >> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software 
>> >> bug?
>> >> 
>
>It seems like we are not properly adding/accounting the vIO APICs, but I cannot
>really see how. I have another patch for you to try below.
>
>Thanks, Roger.
>
>---8<---
>   diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>index 527ac2aadd..40075e2756 100644
>--- a/xen/arch/x86/hvm/vioapic.c
>+++ b/xen/arch/x86/hvm/vioapic.c
>@@ -610,11 +610,15 @@ int vioapic_init(struct domain *d)
>xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
> return -ENOMEM;
> 
>+printk("Adding %u vIO APICs\n", nr_vioapics);
>+
> for ( i = 0; i < nr_vioapics; i++ )
> {
> unsigned int nr_pins = is_hardware_domain(d) ? nr_ioapic_entries[i] :
> ARRAY_SIZE(domain_vioapic(d, 0)->domU.redirtbl);
> 
>+printk("vIO APIC %u has %u pins\n", i, nr_pins);
>+
> if ( (domain_vioapic(d, i) =
>   xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
> {
>@@ -623,8 +627,12 @@ int vioapic_init(struct domain *d)
> }
> domain_vioapic(d, i)->nr_pins = nr_pins;
> nr_gsis += nr_pins;
>+printk("nr_gsis: %u\n", nr_gsis);
> }
> 
>+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u highest_gsi: 
>%u\n",
>+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, highest_gsi());
>+
> ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
> 
> d->arch.hvm_domain.nr_vioapics = nr_vioapics;
>

Please Cc or To me.  Are there holes in the GSI ranges of the physical IOAPICs?

with the above patch,

(XEN) [   14.262237] Dom0 has maximum 1448 PIRQs
(XEN) [   14.264413] Adding 9 vIO APICs
(XEN) [   14.265827] vIO APIC 0 has 24 pins
(XEN) [   14.267256] nr_gsis: 24
(XEN) [   14.268673] vIO APIC 1 has 8 pins
(XEN) [   14.270175] nr_gsis: 32
(XEN) [   14.271589] vIO APIC 2 has 8 pins
(XEN) [   14.273011] nr_gsis: 40
(XEN) [   14.274434] vIO APIC 3 has 8 pins
(XEN) [   14.275864] nr_gsis: 48
(XEN) [   14.277283] vIO APIC 4 has 8 pins
(XEN) [   14.278709] nr_gsis: 56
(XEN) [   14.280127] vIO APIC 5 has 8 pins
(XEN) [   14.281561] nr_gsis: 64
(XEN) [   14.282986] vIO APIC 6 has 8 pins
(XEN) [   14.284417] nr_gsis: 72
(XEN) [   14.285837] vIO APIC 7 has 8 pins
(XEN) [   14.287262] nr_gsis: 80
(XEN) [   14.288683] vIO APIC 8 has 8 pins
(XEN) [   14.290114] nr_gsis: 88
(XEN) [   14.291538] domain nr_gsis: 104 vioapic gsis: 88 nr_irqs_gsi: 104 highest_gsi: 103
(XEN) [   14.294417] Assertion 'hvm_domain_irq(d)->nr_gsis == nr_gsis' failed at vioapic.c:608
(XEN) [   14.297282] [ Xen-4.9-unstable  x86_64  debug=y   Not tainted ]
(XEN) [   14.298743] CPU:0
(XEN) [   14.300161] RIP:e008:[] vioapic_init+0x186/0x1dd
(XEN) [   14.301633] RFLAGS: 00010287   CONTEXT: hypervisor
(XEN) [   14.303094] rax: 830837c7ea00   rbx: 0009   rcx: 

(XEN) [   14.305976] rdx: 82d080457fff   rsi: 000a   rdi: 
82d08044d6b8
(XEN) [   14.308851] rbp: 82d080457d28   rsp: 82d080457ce8   r8:  
83083e00
(XEN) [   14.311781] r9:  0006   r10: 000472d2   r11: 
0006
(XEN) [   14.314654] r12: 0008   r13: 830837d2e000   r14: 
0058
(XEN) [   14.317528] r15: 830837c7eb20   cr0: 8005003b   cr4: 
003526e0
(XEN) [   14.320403] cr3: 6f84c000   cr2: 
(XEN) [   14.321855] ds:    es:    fs:    gs:    ss:    cs: 
e008
(XEN) [   14.324734] Xen code around  
(vioapic_init+0x186/0x1dd):
(XEN) [   14.327591]  00 00 44 3b 70 40 74 02 <0f> 0b 8b 45 cc 41 89 85 b0 02 
00 00 4c 89 ef e8
(XEN) [   14.330458] Xen stack trace from rsp=82d080457ce8:
(XEN) [   14.331908]82d08029e7de 000937c7e010 82d080457d08 
830837d2e000
(XEN) [   14.334790]0068 0001  

(XEN) [   14.337661]82d080457d48 82d0802de276 830837d2e000 

[Xen-devel] [distros-debian-sid test] 71198: tolerable trouble: blocked/broken/fail/pass

2017-04-17 Thread Platform Team regression test user
flight 71198 distros-debian-sid real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71198/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-i386-sid-netboot-pvgrub 10 guest-start fail like 71167
 test-amd64-amd64-i386-sid-netboot-pygrub 10 guest-start fail like 71167
 test-amd64-amd64-amd64-sid-netboot-pvgrub 10 guest-start fail like 71167
 test-armhf-armhf-armhf-sid-netboot-pygrub 9 debian-di-install fail like 71167

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-armhf-sid-netboot-pygrub 1 build-check(1) blocked n/a
 build-arm64-pvops 2 hosts-allocate   broken never pass
 build-arm64   2 hosts-allocate   broken never pass
 build-arm64-pvops 3 capture-logs broken never pass
 build-arm64   3 capture-logs broken never pass

baseline version:
 flight   71167

jobs:
 build-amd64  pass
 build-arm64  broken  
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-arm64-pvopsbroken  
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-sid-netboot-pvgrubfail
 test-amd64-i386-i386-sid-netboot-pvgrub  fail
 test-amd64-i386-amd64-sid-netboot-pygrub pass
 test-arm64-arm64-armhf-sid-netboot-pygrubblocked 
 test-armhf-armhf-armhf-sid-netboot-pygrubfail
 test-amd64-amd64-i386-sid-netboot-pygrub fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.




Re: [Xen-devel] Outreachy project - Xen Code Review Dashboard

2017-04-17 Thread Jesus M. Gonzalez-Barahona
On Sun, 2017-04-16 at 21:26 -0700, Heather Booker wrote:
> Hi Jesus!
> 
> I appreciate the info on the unicode error. I might have missed it,
> but I also asked about the general microtask specifications. Here
> was my original inquiry:
> > And to clarify, my understanding is that the final result of
> this task
> > is an index of Xen data, with two types: commits and messages.
> > Each commit document should contain its original information
> > from git, plus the name of the branch it was developed in. And
> > should only the mbox messages which appear to be associated
> > with a specific commit exist in the final index? Is there some
> > key information in messages that is supposed to indicate the
> > association of a given commit with a git branch? I would be
> > grateful if you could specify the end goal a little more. :D
> 
> Yeah, so overall I'm not sure I understand the relationship of
> branches to the mailing list messages. Is this to be a simple
> string parsing task wherein I should scan the message body
> for the word "branch"? (I am guessing not ;P)

I'm sorry, I understood that text was about the project, not about the
microtask. The microtask is about either:

* Producing an ES index with messages labeled by thread (by applying a
threading algorithm to messages retrieved from archives), or

* Producing an ES index with commits labeled by branch (by following
refs and parent information in the output produced by Perceval).

In the complete project, both will be used to produce the final indexes
that power the code review dashboard.
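A minimal sketch of the first microtask variant (labeling messages by thread) under the simplest possible threading rule, chasing In-Reply-To back to the root message. The messages and IDs below are hypothetical, and a real implementation would also need to handle References chains and missing parents:

```python
from email import message_from_string

# Three hypothetical messages: b replies to a; c starts a new thread.
raws = [
    "Message-ID: <a@x>\nSubject: [PATCH] foo\n\nbody\n",
    "Message-ID: <b@x>\nIn-Reply-To: <a@x>\nSubject: Re: [PATCH] foo\n\nbody\n",
    "Message-ID: <c@x>\nSubject: [PATCH] bar\n\nbody\n",
]
msgs = [message_from_string(r) for r in raws]

# Map each Message-ID to its parent (None for thread roots).
parent = {m["Message-ID"]: m.get("In-Reply-To") for m in msgs}

def thread_root(mid):
    """Follow In-Reply-To links until a message with no parent is reached."""
    while parent.get(mid):
        mid = parent[mid]
    return mid

# The thread label that would be attached to each ES document:
labels = {m["Message-ID"]: thread_root(m["Message-ID"]) for m in msgs}
print(labels)  # {'<a@x>': '<a@x>', '<b@x>': '<a@x>', '<c@x>': '<c@x>'}
```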

> I will be happy to get back on developing once I better grasp
> the goal! :)

More clear now?

If you want, let's schedule some IRC slot for clarifying whatever is
not clear.

Jesus.

> Thanks!
> 
> Heather
> 
> On Sun, Apr 16, 2017 at 4:23 PM, Jesus M. Gonzalez-Barahona  rgia.com> wrote:
> > On Thu, 2017-04-13 at 00:47 -0700, Heather Booker wrote:
> > > Hi,
> > >
> > > I submitted an application for this code review dashboard and
> > > would love to keep working on the microtask once I get some
> > > more info. :)
> > 
> > Great! I answered your message, could you progress with the task?
> > 
> > > I also came up with a general idea of how the project might be
> > > split up - any feedback on this would be welcome! I wrote:
> > >
> > > "As said by Jesus, the big picture of this project will be
> > porting
> > > everything behind the current code review dashboard to use
> > > Grimoire Lab tools, from the current state of using
> > > MetricsGrimoire and custom scripts. I expect this would involve
> > > Perceval for analyzing data, and Grimoire Elk may be useful in
> > > further stages, or may be too general - this is something I would
> > > wish to explore.
> > > This project will also involve a migration from SQL to
> > Elasticsearch
> > > - because I believe the relevant data is mostly / all available
> > in
> > > places online, I am unsure whether this would need to be a direct
> > > migration. However, looking at the current SQL setup would be
> > > beneficial to understanding the desired format of the
> > Elasticsearch
> > > indexes.
> > > I would love to dive into this project and have 3 main parts -
> > > getting
> > > data into ES, turning it into dashboard displays, and then fine
> > > tuning
> > > and perhaps augmenting the dashboard to improve its usefulness.
> > > Getting data into ES may seem simple but I believe that once it
> > > needs to be used for the dashboard, many realizations will pop up
> > > - thus I’d like to leave maybe 2-3 weeks for that first step, 6-7
> > > weeks
> > > for the visualizations (which will include querying the data),
> > and
> > > the
> > > final 3 weeks for touch ups and improvements."
> > 
> > The plan could be sound, but would need some tweaks, once your
> > skills
> > in Python are clear, which could be the main blocker for the first
> > stages.
> > 
> > > Does this sound like an accurate summary and reasonable
> > timeline? 
> > > And I am guessing that from Jesus's involvement with the threads
> > > that Jesus would be the mentor, is that correct? :)
> > 
> > Yes, I would be ;-)
> > 
> >         Jesus.
> > 
> > > Thanks!
> > >
> > > Heather
> > >
> > >
> > > On Sun, Apr 9, 2017 at 9:50 PM, Heather Booker  > gmai
> > > l.com> wrote:
> > > > Hi Jesus,
> > > >
> > > > While using the Elasticsearch python library
> > > > (https://elasticsearch-py.readthedocs.io/en/master/) to add
> > mbox
> > > > messages to an index, I would get a UnicodeEncodeError:
> > > > "'utf-8' codec can't encode character '\udca0' in position 767:
> > > > surrogates not allowed".
> > > >
> > > > Investigating in GrimoireELK
> > > > https://github.com/grimoirelab/GrimoireELK/blob/96b00bc682485976104a6825ca63ae08639deacc/grimoire_elk/elk/mbox.py#L200
> > > > seems to show that perhaps that tool instead uses Latin-1 encoding,
> > > > but I found that to then produce a serialization error 
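The UnicodeEncodeError quoted above is characteristic of bytes that were decoded with errors="surrogateescape": the undecodable byte survives as a lone surrogate such as '\udca0', which strict UTF-8 then refuses to re-encode. A minimal reproduction, where the byte value and the scrubbing strategy are illustrative assumptions, not the actual mbox content:

```python
raw = b"caf\xa0"  # stand-in for an mbox line that is not valid UTF-8

# Decoding with surrogateescape smuggles the bad byte through as '\udca0':
text = raw.decode("utf-8", errors="surrogateescape")

# Strict re-encoding (e.g. when serializing a bulk request for Elasticsearch)
# raises the error quoted in the message above:
try:
    text.encode("utf-8")
except UnicodeEncodeError as err:
    print(err)  # ...: surrogates not allowed

# One possible workaround: scrub the surrogates before indexing.
clean = text.encode("utf-8", errors="replace").decode("utf-8")
print(clean)  # caf?
```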

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
>> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>> >Hello,
>> >
>> >Although PVHv2 Dom0 is not yet finished, I've been trying the current code 
>> >on
>> >different hardware, and found that with pre-Haswell Intel hardware PVHv2 
>> >Dom0
>> >completely freezes the box when calling iommu_hwdom_init in 
>> >dom0_construct_pvh.
>> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or newer).
>> >
>> >I'm not able to debug that in any meaningful way because the box seems to 
>> >lock
>> >up completely, even the watchdog NMI stops working. Here is the boot log, 
>> >up to
>> >the point where it freezes:
>> 
>> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software bug?
>> 
>---8<---
>diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>index 527ac2aadd..1df7710041 100644
>--- a/xen/arch/x86/hvm/vioapic.c
>+++ b/xen/arch/x86/hvm/vioapic.c
>@@ -625,6 +625,9 @@ int vioapic_init(struct domain *d)
> nr_gsis += nr_pins;
> }
> 
>+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u highest_gsi: 
>%u\n",
>+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, highest_gsi());
>+
> ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
> 
> d->arch.hvm_domain.nr_vioapics = nr_vioapics;

With the above patch,
(XEN) [   10.420001] PCI: MCFG area at 8000 reserved in E820
(XEN) [   10.426854] PCI: Using MCFG for segment  bus 00-ff
(XEN) [   10.433952] Intel VT-d iommu 6 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.441856] Intel VT-d iommu 5 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.449759] Intel VT-d iommu 4 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.457671] Intel VT-d iommu 3 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.465585] Intel VT-d iommu 2 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.473485] Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.481394] Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.489299] Intel VT-d iommu 7 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.497196] Intel VT-d Snoop Control enabled.
(XEN) [   10.503196] Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) [   10.510145] Intel VT-d Queued Invalidation enabled.
(XEN) [   10.516646] Intel VT-d Interrupt Remapping enabled.
(XEN) [   10.523173] Intel VT-d Posted Interrupt not enabled.
(XEN) [   10.529775] Intel VT-d Shared EPT tables enabled.
(XEN) [   10.548815] I/O virtualisation enabled
(XEN) [   10.554186]  - Dom0 mode: Relaxed
(XEN) [   10.559264] Interrupt remapping enabled
(XEN) [   10.564854] nr_sockets: 5
(XEN) [   10.569231] Enabled directed EOI with ioapic_ack_old on!
(XEN) [   10.577294] ENABLING IO-APIC IRQs
(XEN) [   10.582245]  -> Using old ACK method
(XEN) [   10.587967] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) [   10.797645] TSC deadline timer enabled
(XEN) [   10.887286] Defaulting to alternative key handling; send 'A' to switch 
to normal mode.
(XEN) [   10.897864] mwait-idle: MWAIT substates: 0x2020
(XEN) [   10.899335] mwait-idle: v0.4.1 model 0x55
(XEN) [   10.900799] mwait-idle: lapic_timer_reliable_states 0x
(XEN) [   10.902304] VMX: Supported advanced features:
(XEN) [   10.903781]  - APIC MMIO access virtualisation
(XEN) [   10.905258]  - APIC TPR shadow
(XEN) [   10.907138]  - Extended Page Tables (EPT)
(XEN) [   10.908782]  - Virtual-Processor Identifiers (VPID)
(XEN) [   10.910262]  - Virtual NMI
(XEN) [   10.911719]  - MSR direct-access bitmap
(XEN) [   10.913188]  - Unrestricted Guest
(XEN) [   10.914650]  - APIC Register Virtualization
(XEN) [   10.916126]  - Virtual Interrupt Delivery
(XEN) [   10.917596]  - Posted Interrupt Processing
(XEN) [   10.919066]  - VMCS shadowing
(XEN) [   10.920519]  - VM Functions
(XEN) [   10.921976]  - Virtualisation Exceptions
(XEN) [   10.923448]  - Page Modification Logging
(XEN) [   10.924918]  - TSC Scaling
(XEN) [   10.926371] HVM: ASIDs enabled.
(XEN) [   10.927829] HVM: VMX enabled
(XEN) [   10.929278] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [   10.930762] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 6, using 0x1
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 9, using 0x1
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 10, using 0x1
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 11, using 0x1
(XEN) [   13.216648] Brought up 112 CPUs
(XEN) [   13.739330] build-id: dc4540250abe5d96614d340c67069e390c37c21c
(XEN) [   13.740816] Running stub recovery selftests...
(XEN) [   13.742258] traps.c:3466: GPF (): 82d0b041 
[82d0b041] -> 82d080359cf2
(XEN) [   13.745155] traps.c:813: Trap 12: 82d0b040 [82d0b040] 
-> 82d080359cf2
(XEN) [   13.748046] traps.c:1215: Trap 3: 82d0b041 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
> >Hello,
> >
> >Although PVHv2 Dom0 is not yet finished, I've been trying the current code on
> >different hardware, and found that with pre-Haswell Intel hardware PVHv2 Dom0
> >completely freezes the box when calling iommu_hwdom_init in 
> >dom0_construct_pvh.
> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or newer).
> >
> >I'm not able to debug that in any meaningful way because the box seems to 
> >lock
> >up completely, even the watchdog NMI stops working. Here is the boot log, up 
> >to
> >the point where it freezes:
> 
> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software bug?
> 
[...]
> (XEN) [0.00] ACPI: IOAPIC (id[0x08] address[0xfec0] gsi_base[0])
> (XEN) [0.00] IOAPIC[0]: apic_id 8, version 32, address 0xfec0, 
> GSI 0-23
> (XEN) [0.00] ACPI: IOAPIC (id[0x09] address[0xfec01000] gsi_base[24])
> (XEN) [0.00] IOAPIC[1]: apic_id 9, version 32, address 0xfec01000, 
> GSI 24-31
> (XEN) [0.00] ACPI: IOAPIC (id[0x0a] address[0xfec08000] gsi_base[32])
> (XEN) [0.00] IOAPIC[2]: apic_id 10, version 32, address 0xfec08000, 
> GSI 32-39
> (XEN) [0.00] ACPI: IOAPIC (id[0x0b] address[0xfec1] gsi_base[40])
> (XEN) [0.00] IOAPIC[3]: apic_id 11, version 32, address 0xfec1, 
> GSI 40-47
> (XEN) [0.00] ACPI: IOAPIC (id[0x0c] address[0xfec18000] gsi_base[48])
> (XEN) [0.00] IOAPIC[4]: apic_id 12, version 32, address 0xfec18000, 
> GSI 48-55
> (XEN) [0.00] ACPI: IOAPIC (id[0x0f] address[0xfec2] gsi_base[72])
> (XEN) [0.00] IOAPIC[5]: apic_id 15, version 32, address 0xfec2, 
> GSI 72-79
> (XEN) [0.00] ACPI: IOAPIC (id[0x10] address[0xfec28000] gsi_base[80])
> (XEN) [0.00] IOAPIC[6]: apic_id 16, version 32, address 0xfec28000, 
> GSI 80-87
> (XEN) [0.00] ACPI: IOAPIC (id[0x11] address[0xfec3] gsi_base[88])
> (XEN) [0.00] IOAPIC[7]: apic_id 17, version 32, address 0xfec3, 
> GSI 88-95
> (XEN) [0.00] ACPI: IOAPIC (id[0x12] address[0xfec38000] gsi_base[96])
> (XEN) [0.00] IOAPIC[8]: apic_id 18, version 32, address 0xfec38000, 
> GSI 96-103
[...]
> (XEN) [0.00] IRQ limits: 104 GSI, 21416 MSI/MSI-X
[...]
> (XEN) [   14.147217] Dom0 has maximum 1448 PIRQs
> (XEN) [   14.151527] Assertion 'hvm_domain_irq(d)->nr_gsis == nr_gsis' failed 
> at vioapic.c:600
> (XEN) [   14.154404] [ Xen-4.9-unstable  x86_64  debug=y   Not tainted 
> ]
> (XEN) [   14.155867] CPU:0
> (XEN) [   14.157286] RIP:e008:[] 
> vioapic_init+0x110/0x167
> (XEN) [   14.158750] RFLAGS: 00010287   CONTEXT: hypervisor
> (XEN) [   14.160203] rax: 830837c7fa00   rbx: 0009   rcx: 
> c8381c70
> (XEN) [   14.163073] rdx: 0071   rsi: 830837c7e400   rdi: 
> 83083fff7868
> (XEN) [   14.165937] rbp: 82d080457d28   rsp: 82d080457ce8   r8:  
> 82e0
> (XEN) [   14.168797] r9:  0381   r10: 82d08045f400   r11: 
> 
> (XEN) [   14.171657] r12: 0008   r13: 830837d29000   r14: 
> 0058
> (XEN) [   14.174568] r15: 830837c7fb20   cr0: 8005003b   cr4: 
> 003526e0
> (XEN) [   14.177437] cr3: 6f84c000   cr2: 
> (XEN) [   14.178887] ds:    es:    fs:    gs:    ss:    
> cs: e008
> (XEN) [   14.181753] Xen code around  
> (vioapic_init+0x110/0x167):
> (XEN) [   14.184609]  00 00 44 3b 70 40 74 02 <0f> 0b 8b 45 cc 41 89 85 b0 02 
> 00 00 4c 89 ef e8
> (XEN) [   14.187473] Xen stack trace from rsp=82d080457ce8:
> (XEN) [   14.188916]82d08029e7de 000937c7f010 82d080457d08 
> 830837d29000
> (XEN) [   14.191784]0068 0001  
> 
> (XEN) [   14.194645]82d080457d48 82d0802de276 830837d29000 
> 
> (XEN) [   14.197507]82d080457d78 82d08026d593 82d080457d78 
> 830837d29000
> (XEN) [   14.200371]001f 0007 82d080457de8 
> 82d080205226
> (XEN) [   14.203234]82d0804380e0 0004 82d080457eb4 
> 
> (XEN) [   14.206097]82d080457dc8 f7fa32231fcbfbff 01212c100800 
> 00e0
> (XEN) [   14.208956]830838543850 00e0 82d08043b780 
> 006f
> (XEN) [   14.211817]82d080457f08 82d0803ee1be 0028fe80 
> 015c
> (XEN) [   14.214739]01df 0002 0002 
> 0002
> (XEN) [   14.217598]0002 0001 0001 
> 0001
> (XEN) [   14.220459]0001  82d080429a90 
> 0017
> (XEN) [   14.223317]001075ec7000 013b7000 0108 
> 

Re: [Xen-devel] [PATCH for-4.9 v3 3/3] x86/atomic: fix cmpxchg16b inline assembly to work with clang

2017-04-17 Thread Roger Pau Monne
On Mon, Apr 10, 2017 at 02:34:35PM +0100, Roger Pau Monne wrote:
> clang doesn't understand the "=A" register constraint when used with 64-bit
> assembly and spits out an internal error:
> 
> fatal error: error in backend: Cannot select: 0x7f9fb89c9390: i64 = 
> build_pair 0x7f9fb89c92b0,
>   0x7f9fb89c9320
>   0x7f9fb89c92b0: i32,ch,glue = CopyFromReg 0x7f9fb89c9240, Register:i32 
> %EAX, 0x7f9fb89c9240:1
> 0x7f9fb89c8c20: i32 = Register %EAX
> 0x7f9fb89c9240: ch,glue = inlineasm 0x7f9fb89c90f0,
> TargetExternalSymbol:i64'lock; cmpxchg16b $1', MDNode:ch<0x7f9fb8476c38>,
> TargetConstant:i64<25>, TargetConstant:i32<18>, Register:i32 %EAX, 
> Register:i32
> %EDX, TargetConstant:i32<196622>, 0x7f9fb89c87c0, TargetConstant:i32<9>,
> Register:i64 %RCX, TargetConstant:i32<9>, Register:i64 %RBX,
> TargetConstant:i32<9>, Register:i64 %RDX, TargetConstant:i32<9>, Register:i64
> %RAX, TargetConstant:i32<196622>, 0x7f9fb89c87c0, TargetConstant:i32<12>,
> Register:i32 %EFLAGS, 0x7f9fb89c90f0:1
>   0x7f9fb89c8a60: i64 = TargetExternalSymbol'lock; cmpxchg16b $1'
>   0x7f9fb89c8b40: i64 = TargetConstant<25>
>   0x7f9fb89c8bb0: i32 = TargetConstant<18>
>   0x7f9fb89c8c20: i32 = Register %EAX
>   0x7f9fb89c8c90: i32 = Register %EDX
>   0x7f9fb89c8d00: i32 = TargetConstant<196622>
>   0x7f9fb89c87c0: i64,ch = load 0x7f9fb9053da0, 
> FrameIndex:i64<1>, undef:i64
> 0x7f9fb9053a90: i64 = FrameIndex<1>
> 0x7f9fb9053e80: i64 = undef
>   0x7f9fb89c8e50: i32 = TargetConstant<9>
>   0x7f9fb89c8d70: i64 = Register %RCX
>   0x7f9fb89c8e50: i32 = TargetConstant<9>
>   0x7f9fb89c8ec0: i64 = Register %RBX
>   0x7f9fb89c8e50: i32 = TargetConstant<9>
>   0x7f9fb89c8fa0: i64 = Register %RDX
>   0x7f9fb89c8e50: i32 = TargetConstant<9>
>   0x7f9fb89c9080: i64 = Register %RAX
> [...]
> 
> Fix this by specifying "rdx:rax" manually using the "d" and "a" constraints.
> 
> Signed-off-by: Roger Pau Monné 
> ---
> Cc: Jan Beulich 
> Cc: Andrew Cooper 
> ---
> Changes since v2:
>  - New in this version.
> 
> ---
> NB: this is the only usage of "=A" in 64-bit assembly in Xen. I will send a bug
> report upstream to get this fixed, so that clang properly understands "=A" for
> 64-bit assembly as "RDX:RAX", but in the meantime I would like to get this
> patch accepted so the clang build can be functional again.
> 
> Upstream bug report can be found at: 
> http://bugs.llvm.org/show_bug.cgi?id=32594

And this has now been fixed upstream, for the record:

http://llvm.org/viewvc/llvm-project?view=revision&revision=300404

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>Hello,
>
>Although PVHv2 Dom0 is not yet finished, I've been trying the current code on
>different hardware, and found that with pre-Haswell Intel hardware PVHv2 Dom0
>completely freezes the box when calling iommu_hwdom_init in dom0_construct_pvh.
>OTOH the same doesn't happen when using a newer CPU (ie: haswell or newer).
>
>I'm not able to debug that in any meaningful way because the box seems to lock
>up completely, even the watchdog NMI stops working. Here is the boot log, up to
>the point where it freezes:

I tried "dom0=pvh" on my Skylake box. An assertion failed. Is it a software bug?

 Xen 4.9-unstable
(XEN) [0.00] Xen version 4.9-unstable (r...@sh.intel.com) (gcc (GCC)
4.8.5 20150623 (Red Hat 4.8.5-11)) debug=y  Mon Apr 17 04:41:08 CST 2017
(XEN) [0.00] Latest ChangeSet: Mon Apr 10 17:32:01 2017 +0200
git:17cd662
(XEN) [0.00] Bootloader: GRUB 2.02~beta2
(XEN) [0.00] Command line: conring_size=16m iommu=verbose,debug
loglvl=all guest_loglvl=all com1=115200,8n1,0x3f8,4 console=com1,vga
console_timestamps=boot vvtd_debug=0x3a dom0_mem=10G dom0=pvh
(XEN) [0.00] Xen image load base address: 0
(XEN) [0.00] Video information:
(XEN) [0.00]  VGA is text mode 80x25, font 8x16
(XEN) [0.00]  VBE/DDC methods: none; EDID transfer time: 1 seconds
(XEN) [0.00]  EDID info not retrieved because no DDC retrieval method
detected
(XEN) [0.00] Disc information:  
(XEN) [0.00]  Found 1 MBR signatures
(XEN) [0.00]  Found 1 EDD information structures
(XEN) [0.00] Xen-e820 RAM map:  
(XEN) [0.00]   - 00099800 (usable)  
(XEN) [0.00]  00099800 - 000a (reserved)
(XEN) [0.00]  000e - 0010 (reserved)
(XEN) [0.00]  0010 - 67b3b000 (usable)
(XEN) [0.00]  67b3b000 - 67d62000 (reserved)
(XEN) [0.00]  67d62000 - 681fc000 (usable)
(XEN) [0.00]  681fc000 - 6829f000 (ACPI data)
(XEN) [0.00]  6829f000 - 6908a000 (usable)
(XEN) [0.00]  6908a000 - 6a08a000 (reserved)
(XEN) [0.00]  6a08a000 - 6b6e6000 (usable)
(XEN) [0.00]  6b6e6000 - 6b9e6000 (reserved)
(XEN) [0.00]  6b9e6000 - 6c416000 (ACPI NVS)
(XEN) [0.00]  6c416000 - 6c516000 (ACPI data)
(XEN) [0.00]  6c516000 - 6fb0 (usable)
(XEN) [0.00]  6fb0 - 9000 (reserved)
(XEN) [0.00]  fd00 - fe80 (reserved)
(XEN) [0.00]  fec0 - fec01000 (reserved)
(XEN) [0.00]  fec8 - fed01000 (reserved)
(XEN) [0.00]  ff80 - 000100c0 (reserved)
(XEN) [0.00]  000100c0 - 00108000 (usable)
(XEN) [0.00] New Xen image base address: 0x6f40
(XEN) [0.00] ACPI: RSDP 000F0510, 0024 (r2 INTEL )
(XEN) [0.00] ACPI: XSDT 6C42C188, 0104 (r1 INTEL  S2600WF 0 
INTL 20091013)
(XEN) [0.00] ACPI: FACP 6C512000, 010C (r5 INTEL  S2600WF 0 
INTL 20091013)
(XEN) [0.00] ACPI: DSDT 6C4B4000, 36756 (r2 INTEL  S2600WF 3 
INTL 20091013)
(XEN) [0.00] ACPI: FACS 6C38E000, 0040
(XEN) [0.00] ACPI: SSDT 6C513000, 04B0 (r2 INTEL  S2600WF 0 
MSFT  10D)
(XEN) [0.00] ACPI: UEFI 6C405000, 0042 (r1 INTEL  S2600WF 2 
INTL 20091013)
(XEN) [0.00] ACPI: UEFI 6C39, 005C (r1  INTEL RstUefiV0 
0)
(XEN) [0.00] ACPI: HPET 6C511000, 0038 (r1 INTEL  S2600WF 1 
INTL 20091013)
(XEN) [0.00] ACPI: APIC 6C50F000, 16DE (r3 INTEL  S2600WF 0 
INTL 20091013)
(XEN) [0.00] ACPI: MCFG 6C50E000, 003C (r1 INTEL  S2600WF 1 
INTL 20091013)
(XEN) [0.00] ACPI: MSCT 6C50D000, 0090 (r1 INTEL  S2600WF 1 
INTL 20091013)
(XEN) [0.00] ACPI: NFIT 6C4F4000, 18028 (r10
 0)
(XEN) [0.00] ACPI: PCAT 6C4F3000, 0048 (r1 INTEL  S2600WF 2 
INTL 20091013)
(XEN) [0.00] ACPI: PCCT 6C4F2000, 00AC (r1 INTEL  S2600WF 2 
INTL 20091013)
(XEN) [0.00] ACPI: RASF 6C4F1000, 0030 (r1 INTEL  S2600WF 1 
INTL 20091013)
(XEN) [0.00] ACPI: SLIT 6C4F, 006C (r1 INTEL  S2600WF  

[Xen-devel] [linux-arm-xen test] 107479: regressions - FAIL

2017-04-17 Thread osstest service owner
flight 107479 linux-arm-xen real [real]
http://logs.test-lab.xenproject.org/osstest/logs/107479/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   6 xen-boot fail REGR. vs. 107176

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   2 hosts-allocate broken in 107296 pass in 107479
 test-armhf-armhf-xl-credit2   6 xen-boot   fail pass in 107296
 test-armhf-armhf-libvirt-raw  6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-xsm   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-multivcpu  6 xen-boot  fail pass in 107296
 test-armhf-armhf-libvirt  6 xen-boot   fail pass in 107296
 test-armhf-armhf-libvirt-xsm  6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-vhd   6 xen-boot   fail pass in 107296
 test-armhf-armhf-xl-rtds  6 xen-boot   fail pass in 107371

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail in 107296 like 
107176
 test-armhf-armhf-libvirt 13 saverestore-support-check fail in 107296 like 
107176

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 107296 never 
pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail in 107296 never 
pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail in 107296 
never pass
 test-armhf-armhf-xl 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail in 107296 never 
pass
 test-armhf-armhf-xl-rtds12 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 107296 never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail in 107296 never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail in 107296 never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-arm64-arm64-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-arm64-arm64-xl  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-rtds 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-rtds 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-arm64-arm64-libvirt 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass

version targeted for testing:
 linux                9ceff47026d8db55dc9f133a40ae4042c71fcb13
baseline version:
 linux                6878b2fa7229c9208a02d45f280c71389cba0617

Last test of basis   107176  2017-04-04 09:44:38 Z   12 days
Failing since107256  2017-04-07 00:24:43 Z   10 days   13 attempts
Testing same since   107296  2017-04-08 07:12:44 Z8 days   12 attempts


10162 people touched revisions under test,
not listing them all

jobs:
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-arm64  pass
 build-armhf  pass
 build-arm64-libvirt