Re: [Xen-devel] [RFC] [Draft Design v2] ACPI/IORT Support in Xen.

2017-11-15 Thread Manish Jaggi



On 11/14/2017 6:53 PM, Julien Grall wrote:

Hi Manish,

Hey Julien,


On 08/11/17 14:38, Manish Jaggi wrote:

ACPI/IORT Support in Xen.
--
 Draft 2

Revision History:

Changes since v1-
- Modified IORT Parsing data structures.
- Added RID->StreamID and RID->DeviceID map as per Andre's suggestion.
- Added reference code which can be read along with this document.
- Removed domctl for DomU, it would be covered in PCI-PT design.

Introduction:
-

I had sent out patch series [0] to hide smmu from Dom0 IORT.
This document is a rework of the series as it:
(a) extends scope by adding parsing of IORT table once
and storing it in in-memory data structures, which can then be used
for querying. This would eliminate the need to parse complete iort
table multiple times.

(b) Generation of IORT for domains be independent using a set of
helper routines.

Index


1. What is IORT. What are its components ?
2. Current Support in Xen
3. IORT for Dom0
4. IORT for DomU
5. Parsing of IORT in Xen
6. Generation of IORT
7. Implementation Phases
8. References

1. IORT Structure ?

IORT refers to Input Output remapping table. It is essentially used 
to find
information about the IO topology (PCIRC-SMMU-ITS) and relationships 
between

devices.

A general structure of IORT [1]:
It has nodes for PCI RC, SMMU, ITS and Platform devices. Using an 
IORT table

relationship between RID -> StreamID -> DeviceId can be obtained.
Which device is behind which SMMU and which interrupt controller, 
topology

is described in IORT Table.

Some PCI RC may be not behind an SMMU, and directly map RID->DeviceID.

RID is a requester ID in PCI context,
StreamID is the ID of the device in SMMU context,
DeviceID is the ID programmed in ITS.

Each iort_node contains an ID map array to translate one ID into 
another.

IDmap Entry {input_range, output_range, output_node_ref, id_count}
This array is associated with PCI RC node, SMMU node, Named component 
node.

and can reference to a SMMU or ITS node.

2. Current Support of IORT
---
IORT is proposed to be used by Xen to setup SMMU's and platform devices
and for translating RID->StreamID and RID->DeviceID.


I am not sure to understand "to setup SMMU's and platform devices...". 
With IORT, a software can discover list of SMMUs and the IDs to 
configure the ITS and SMMUs for each device (e.g PCI, integrated...) 
on the platform. You will not be able to discover the list of platform 
devices through it.


Also, it is not really "proposed". It is the only way to get those 
information from ACPI.

ok, I will rephrase it.




It is proposed in this document to parse iort once and use the 
information

to translate RID without traversing IORT again and again.

Also Xen prepares an IORT table for dom0 based on host IORT.
For DomU IORT table proposed only in case of device passthrough.

3. IORT for Dom0
-
IORT for Dom0 is based on host iort. Few nodes could be removed or 
modified.

  For instance
- Host SMMU nodes should not be present as Xen should only touch it.
- platform nodes (named components) may be controlled by xen command 
line.


I am not sure where does this example come from? As I said, there are 
no plan to support Platform Device passthrough with ACPI. A better 
example here would removing PMCG.


It came from review comments on my previous IORT SMMU hiding patch. 
Andre suggested that Platform Nodes are needed.


After some brainstorming with Julien we found two problems:
1) This only covers RC nodes, but not "named components" (platform
devices), which we will need. ...

From: https://www.mail-archive.com/xen-devel@lists.xen.org/msg123434.html



4. IORT for DomU
-
IORT for DomU should be generated by toolstack. IORT table is only 
present

in case of device passthrough.

At a minimum domU IORT should include a single PCIRC and ITS Group.
Similar PCIRC can be added in DSDT.
The exact structure of DomU IORT would be covered along with PCI PT 
design.


5. Parsing of IORT in Xen
--
IORT nodes can be saved in structures so that IORT table parsing can 
be done
once and is reused by all xen subsystems like ITS / SMMU etc, domain 
creation.

Proposed are the structures to hold IORT information. [4]

struct rid_map_struct {
 void *pcirc_node;
 u16 ib; /* Input base */
 u32 ob; /* Output base */
 u16 idc; /* Id Count */
  struct list_head entry;
};

struct iort_ref
{
 struct list_head rid_streamId_map;
 struct list_head rid_deviceId_map;
}iortref;

5.1 Functions to query StreamID and DeviceID from RID.

void query_streamId(void *pcirc_node, u16 rid, u32 *streamId);
void query_deviceId(void *pcirc_node, u16 rid, u32 *deviceId);

Adding a mapping is done via helper functions

intadd_rid_streamId_map(void*pcirc_node, u32 ib, u32 ob, u32 idc) 
intadd_rid_deviceId_map(void*pcirc_node, u32 ib, u32 ob, u32 idc) - 
r

[Xen-devel] [linux-4.9 test] 116207: tolerable FAIL - PUSHED

2017-11-15 Thread osstest service owner
flight 116207 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116207/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115686
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 115686
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115686
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 115686
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115686
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux9b609ba2c2df8290054e5c62be69101b43e2a976
baseline version:
 linux5caae9d1419914177994363218616b869659e871

Last test of basis   115686  2017-11-08 21:47:43 Z7 days
Testing same since   116207  2017-11-15 15:27:53 Z0 days1 attempts


People who touched revisions under test:
  Akinobu Mita 
  Alexander Stein 
  Alison Schofield 
  Amit Pundir 
  Andrey Ryabinin 
  Archit Taneja 
  Arend van Spriel 
  Bart Van Assche 
  Bartlomiej Zolnierkiewicz 
  Bhaskar Upadhaya 
  Bjorn Andersson 
  Bjorn Helgaas 
  Boris Ostrovsky 
  Borislav Petkov 
  Carlo Caione 
  Chanwoo Choi 
  Daniel Vetter 
  Darren Hart (VMware) 
  David Howells 
  David Lechner 
  David S. Miller 
  Dmitry Torokhov 
  Doug Ledford 
  Enrico Mioso 
  Erez Shitrit 
  Eric Biggers 
  Fengguang Wu 
  Feras Daoud 
  Frederic Barrat 
  Frederic Weisbecker 
  Gabriel Fernandez 
  Gerhard Bertelsmann 
  Gilad Ben-Yossef 
  Greg Kroah-Hartman 
  Gustavo A. R. Silva 
  Hans Verkuil 
  Harninder Rai 
  Heiko Carstens 
  Herbert Xu 
  Ilya Dryomov 
  Ingo Molnar 
  Jaedon Shin 
  James 

[Xen-devel] Need help know is there any mechanism available for to send data between user application running in 2 different domains.

2017-11-15 Thread sai krishna vp
  Hi,
  I am using Xen project 4.8.0. I need to communicate and send
data between User application running in 2 different domains. What are the
mechanism available for inter domain communication?. Can the Ring Buffer
mechanism for the communication between the Split Drivers be used in the
User Application?.Please help.

Regards
Sai Krishna vp
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Juergen Gross
On 15/11/17 22:20, Stefano Stabellini wrote:
> On Wed, 15 Nov 2017, Boris Ostrovsky wrote:
>> On 11/15/2017 02:09 PM, Stefano Stabellini wrote:
>>> On Wed, 15 Nov 2017, Juergen Gross wrote:
>>> while(mutex_is_locked(&map->active.in_mutex.owner) ||
>>>   mutex_is_locked(&map->active.out_mutex.owner))
>>> cpu_relax();
>>>
>>> ?
>> I'm not convinced there isn't a race.
>>
>> In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and only
>> then in_mutex is taken. What happens if pvcalls_front_release() resets
>> sk_send_head and manages to test the mutex before the mutex is locked?
>>
>> Even in case this is impossible: the whole construct seems to be rather
>> fragile.
>>> I agree it looks fragile, and I agree that it might be best to avoid the
>>> usage of in_mutex and out_mutex as refcounts. More comments on this
>>> below.
>>>
>>>  
> I think we can wait until pvcalls_refcount is 1 (i.e. it's only us) and
> not rely on mutex state.
 Yes, this would work.
>>> Yes, I agree it would work and for the sake of getting something in
>>> shape for the merge window I am attaching a patch for it. Please go
>>> ahead with it. Let me know if you need anything else immediately, and
>>> I'll work on it ASAP.
>>>
>>>
>>>
>>> However, I should note that this is a pretty big hammer we are using:
>>> the refcount is global, while we only need to wait until it's only us
>>> _on this specific socket_.
>>
>> Can you explain why socket is important?
> 
> Yes, of course: there are going to be many open sockets on a given
> pvcalls connection. pvcalls_refcount is global: waiting on
> pvcalls_refcount means waiting until any operations on any unrelated
> sockets stop. While we only need to wait until the operations on the one
> socket we want to close stop.
> 
> 
>>>
>>> We really need a per socket refcount. If we don't want to use the mutex
>>> internal counters, then we need another one.
>>>
>>> See the appended patch that introduces a per socket refcount. However,
>>> for the merge window, also using pvcalls_refcount is fine.
>>>
>>> The race Juergen is concerned about is only theoretically possible:
>>>
>>> recvmsg: release:
>>>   
>>>   test sk_send_head  clear sk_send_head
>>>   
>>>   grab in_mutex  
>>>  
>>>  test in_mutex
>>>
>>> Without kernel preemption is not possible for release to clear
>>> sk_send_head and test in_mutex after recvmsg tests sk_send_head and
>>> before recvmsg grabs in_mutex.
>>
>> Sorry, I don't follow --- what does preemption have to do with this? If
>> recvmsg and release happen on different processors the order of
>> operations can be
>>
>> CPU0   CPU1
>>
>> test sk_send_head
>> 
>> clear sk_send_head
>> 
>> 
>> test in_mutex
>> free everything
>> grab in_mutex
>>
>> I actually think RCU should take care of all of this.
> 
> Preemption could cause something very similar to happen, but your
> example is very good too, even better, because it could trigger the
> issue even with preemption disabled. I'll think more about this and
> submit a separate patch on top of the simple pvcalls_refcount patch
> below.

We are running as a guest. Even with interrupts off the vcpu could be
off the pcpu for several milliseconds!

Don't count on code length to avoid races!


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable-smoke test] 116213: tolerable all pass - PUSHED

2017-11-15 Thread osstest service owner
flight 116213 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116213/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  ca4b2e52a894845f26fc5b784f465e31c4cef90b
baseline version:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f

Last test of basis   116162  2017-11-14 17:01:41 Z1 days
Testing same since   116213  2017-11-16 02:14:31 Z0 days1 attempts


People who touched revisions under test:
  Julien Grall 
  Stefano Stabellini 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   b9ee1fd..ca4b2e5  ca4b2e52a894845f26fc5b784f465e31c4cef90b -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-linus test] 116202: regressions - trouble: blocked/broken/fail/pass

2017-11-15 Thread osstest service owner
flight 116202 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116202/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm  broken
 build-amd64-pvops 6 kernel-build fail REGR. vs. 115643

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm   4 host-install(4)  broken pass in 116182
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-saverestore fail in 116182 pass 
in 116202
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-saverestore fail in 116182 pass 
in 116202

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail in 116182 never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail in 116182 never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saver

[Xen-devel] [xen-unstable test] 116199: tolerable FAIL - PUSHED

2017-11-15 Thread osstest service owner
flight 116199 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116199/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat fail in 116178 pass 
in 116199
 test-amd64-amd64-xl-pvhv2-amd  7 xen-boot  fail pass in 116178
 test-armhf-armhf-xl-arndale   6 xen-installfail pass in 116178
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat 
fail pass in 116178
 test-armhf-armhf-xl-cubietruck 16 guest-start/debian.repeat fail pass in 116178

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop  fail blocked in 116161
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 116178 
like 116161
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail in 116178 never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail in 116178 never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail in 116178 never 
pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116161
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116161
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116161
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116161
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116161
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116161
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116161
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116161
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116161
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
baseline version:
 xen  36c80e29e36eee02f20f18e7f32267442b18c8bd

Last test of basis   116161  2017-11-14 16:48:27 Z1 days
Testing same since   116178  2017-11-15 00:51:31 Z1 days2 attempts


People who touched revisions under test:
  Eric Chanudet 
  Min He 
  Yi Zhang 
  Yu Zhang 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  p

Re: [Xen-devel] [PATCH for-4.10 2/2] xen/arm: p2m: Add more debug in get_page_from_gva

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Julien Grall wrote:
> The function get_page_from_gva is used by copy_*_guest helpers to
> translate a guest virtual address to a machine physical address and take
> reference on the page.
> 
> There are a couple of errors path that will return the same value making
   ^ paths

> difficult to know the exact error. Add more debug in each error patch
^ it difficult


> only for debug-build.
> 
> This should help narrowing down the intermittent failure with the
> hypercall GNTTABOP_copy (see [1]).
> 
> [1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html
> 
> Signed-off-by: Julien Grall 

Acked-by: Stefano Stabellini 

fixed on commit


> ---
>  xen/arch/arm/p2m.c | 13 +
>  1 file changed, 13 insertions(+)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index f6b3d8e421..417609ede2 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1428,16 +1428,29 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
> vaddr_t va,
>  par = gvirt_to_maddr(va, &maddr, flags);
>  
>  if ( par )
> +{
> +dprintk(XENLOG_G_DEBUG,
> +"%pv: gvirt_to_maddr failed va=%#"PRIvaddr" flags=0x%lx 
> par=%#"PRIx64"\n",
> +v, va, flags, par);
>  goto err;
> +}
>  
>  if ( !mfn_valid(maddr_to_mfn(maddr)) )
> +{
> +dprintk(XENLOG_G_DEBUG, "%pv: Invalid MFN %#"PRI_mfn"\n",
> +v, mfn_x(maddr_to_mfn(maddr)));
>  goto err;
> +}
>  
>  page = mfn_to_page(maddr_to_mfn(maddr));
>  ASSERT(page);
>  
>  if ( unlikely(!get_page(page, d)) )
> +{
> +dprintk(XENLOG_G_DEBUG, "%pv: Failing to acquire the MFN 
> %#"PRI_mfn"\n",
> +v, mfn_x(maddr_to_mfn(maddr)));
>  page = NULL;
> +}
>  
>  err:
>  if ( !page && p2m->mem_access_enabled )

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10 1/2] xen/arm: mm: Change the return value of gvirt_to_maddr

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Julien Grall wrote:
> Currently, gvirt_to_maddr return -EFAULT when the translation failed.
> It might be useful to return the PAR_EL1 (Physical Address Register)
> in such a case to get a better idea of the reason.
> 
> So modify the return value to use 0 on sucess or return the PAR on
  ^ success


> failure.
> 
> The callers are modified to reflect the change of the return value.
> 
> Note that with the change in gvirt_to_maddr, ma needs to be initialized
> to avoid GCC been confused (i.e value may be unitialized) with the new
 ^ uninitialized


> construction.
> 
> Signed-off-by: Julien Grall 

Acked-by: Stefano Stabellini 

I fixed on commit


> ---
>  xen/arch/arm/domain_build.c | 8 
>  xen/arch/arm/kernel.c   | 8 
>  xen/arch/arm/p2m.c  | 6 +++---
>  xen/include/asm-arm/mm.h| 9 +++--
>  4 files changed, 18 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index bf29299707..c74f4dd69d 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2002,15 +2002,15 @@ static void initrd_load(struct kernel_info *kinfo)
>  
>  for ( offs = 0; offs < len; )
>  {
> -int rc;
> -paddr_t s, l, ma;
> +uint64_t par;
> +paddr_t s, l, ma = 0;
>  void *dst;
>  
>  s = offs & ~PAGE_MASK;
>  l = min(PAGE_SIZE - s, len);
>  
> -rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> -if ( rc )
> +par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> +if ( par )
>  {
>  panic("Unable to translate guest address");
>  return;
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index c2755a9ab9..a6c6413712 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -167,15 +167,15 @@ static void kernel_zimage_load(struct kernel_info *info)
> paddr, load_addr, load_addr + len);
>  for ( offs = 0; offs < len; )
>  {
> -int rc;
> -paddr_t s, l, ma;
> +uint64_t par;
> +paddr_t s, l, ma = 0;
>  void *dst;
>  
>  s = offs & ~PAGE_MASK;
>  l = min(PAGE_SIZE - s, len);
>  
> -rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> -if ( rc )
> +par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> +if ( par )
>  {
>  panic("Unable to map translate guest address");
>  return;
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 68b488997d..f6b3d8e421 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1414,7 +1414,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
> vaddr_t va,
>  struct p2m_domain *p2m = p2m_get_hostp2m(d);
>  struct page_info *page = NULL;
>  paddr_t maddr = 0;
> -int rc;
> +uint64_t par;
>  
>  /*
>   * XXX: To support a different vCPU, we would need to load the
> @@ -1425,9 +1425,9 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
> vaddr_t va,
>  
>  p2m_read_lock(p2m);
>  
> -rc = gvirt_to_maddr(va, &maddr, flags);
> +par = gvirt_to_maddr(va, &maddr, flags);
>  
> -if ( rc )
> +if ( par )
>  goto err;
>  
>  if ( !mfn_valid(maddr_to_mfn(maddr)) )
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index cd6dfb54b9..ad2f2a43dc 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -266,11 +266,16 @@ static inline void *maddr_to_virt(paddr_t ma)
>  }
>  #endif
>  
> -static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
> +/*
> + * Translate a guest virtual address to a machine address.
> + * Return the fault information if the translation has failed else 0.
> + */
> +static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
> +  unsigned int flags)
>  {
>  uint64_t par = gva_to_ma_par(va, flags);
>  if ( par & PAR_F )
> -return -EFAULT;
> +return par;
>  *pa = (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK);
>  return 0;
>  }
> -- 
> 2.11.0
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 03/12] ARM: VGIC: remove gic_clear_pending_irqs()

2017-11-15 Thread Stefano Stabellini
On Fri, 10 Nov 2017, Andre Przywara wrote:
> Hi,
> 
> On 26/10/17 01:14, Stefano Stabellini wrote:
> > On Thu, 19 Oct 2017, Andre Przywara wrote:
> >> gic_clear_pending_irqs() was not only misnamed, but also misplaced, as
> >> a function solely dealing with the GIC emulation should not live in gic.c.
> >> Move the functionality of this function into its only caller in vgic.c
> >>
> >> Signed-off-by: Andre Przywara 
> > 
> > The reason why gic_clear_pending_irqs is in gic.c is that lr_mask and
> > lr_pending are considered part of the gic driver (gic.c). On the other
> > end, inflight is part of the vgic.
> > 
> > As an example, the idea is that the code outside of gic.c (for example
> > vgic.c) shouldn't have to know, or have to care, whether a given IRQ is
> > in the lr_pending queue or actually in a LR register.
> 
> I can understand that the lr_pending queue *should* be logical
> continuation of the LR registers, something like spill-over LRs.
> Though I wasn't aware of this before ;-)
> So I can see that from a *logical* point of view it looks like it
> belongs to the hardware part of the GIC (more specifically gic-vgic.c),
> which deals with the actual LRs. But I guess this is somewhat of a grey
> area.
> 
> BUT:
> This is a design choice of the VGIC, and one which the KVM VGIC design
> for instance does *not* share. Also my earlier Xen VGIC rework patches
> got rid of this as well (because dealing with two lists is too complicated).
> Also, the name is misleading: gic_clear_pending_irqs() does not hint at
> all that this is dealing with the GIC emulation, I think it should read
> vgic_vcpu_clear_pending_irqs().
> And as it accesses VGIC specific data structures only, I don't think it
> belongs to gic.c, really.
> So I could live with moving it into the new gic-vgic.c, let me see if
> that works.
> 
> The need for this patch didn't come out of the blue, I actually need it
> to be able to reuse gic.c with *any* other VGIC implementation. And this
> applies to both a VGIC rework and the KVM VGIC port.
> These lr_queue and lr_pending queues are really an implementation detail
> of the existing *VGIC*, and, more importantly: they refer to the struct
> pending_irq, which is definitely a VGIC detail.
> 
> The rabbit to follow in this series is to strictly split the usage of
> struct pending_irq from the hardware GIC driver. The KVM VGIC does not
> have a "struct pending_irq", so we can't have anything mentioning that
> in code that should survive a KVM VGIC port.
> So short of replacing gic.c at all, moving everything mentioning
> pending_irq out of gic.c is the only option.

Could you at least retain gic_clear_pending_irqs as a separate function?

pending_irq is clearly separate from anything vgic and doesn't belong
there. Nonetheless, I can live with moving gic_clear_pending_irqs to
vgic.c to make future development easier, but at least let's keep
gic_clear_pending_irqs as is.


> > lr_mask and lr_pending are only accessed from gic.c. The only exception
> > is the initialization (INIT_LIST_HEAD(&v->arch.vgic.lr_pending)).
> > 
> > 
> >> ---
> >>  xen/arch/arm/gic.c| 11 ---
> >>  xen/arch/arm/vgic.c   |  4 +++-
> >>  xen/include/asm-arm/gic.h |  1 -
> >>  3 files changed, 3 insertions(+), 13 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> >> index ed363f6c37..75b2e0e0ca 100644
> >> --- a/xen/arch/arm/gic.c
> >> +++ b/xen/arch/arm/gic.c
> >> @@ -675,17 +675,6 @@ out:
> >>  spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> >>  }
> >>  
> >> -void gic_clear_pending_irqs(struct vcpu *v)
> >> -{
> >> -struct pending_irq *p, *t;
> >> -
> >> -ASSERT(spin_is_locked(&v->arch.vgic.lock));
> >> -
> >> -v->arch.lr_mask = 0;
> >> -list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
> >> -gic_remove_from_lr_pending(v, p);
> >> -}
> >> -
> >>  int gic_events_need_delivery(void)
> >>  {
> >>  struct vcpu *v = current;
> >> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> >> index d8acbbeaaa..451a306a98 100644
> >> --- a/xen/arch/arm/vgic.c
> >> +++ b/xen/arch/arm/vgic.c
> >> @@ -504,7 +504,9 @@ void vgic_clear_pending_irqs(struct vcpu *v)
> >>  spin_lock_irqsave(&v->arch.vgic.lock, flags);
> >>  list_for_each_entry_safe ( p, t, &v->arch.vgic.inflight_irqs, 
> >> inflight )
> >>  list_del_init(&p->inflight);
> >> -gic_clear_pending_irqs(v);
> >> +list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
> >> +gic_remove_from_lr_pending(v, p);
> >> +v->arch.lr_mask = 0;
> >>  spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> >>  }
> >>  
> >> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> >> index d3d7bda50d..2f248301ce 100644
> >> --- a/xen/include/asm-arm/gic.h
> >> +++ b/xen/include/asm-arm/gic.h
> >> @@ -236,7 +236,6 @@ int gic_remove_irq_from_guest(struct domain *d, 
> >> unsigned int virq,
> >>struct i

Re: [Xen-devel] [PATCH v3] xen-disk: use an IOThread per instance

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Paul Durrant wrote:
> Anthony, Stefano,
> 
>   Ping?

Acked-by: Stefano Stabellini 

Unless Anthony or somebody else object, I'll queue it up in my "next"
branch (which I'll send upstream after 2.11 is out).

Cheers,

Stefano


> > -Original Message-
> > From: Paul Durrant [mailto:paul.durr...@citrix.com]
> > Sent: 07 November 2017 10:47
> > To: qemu-de...@nongnu.org; xen-de...@lists.xenproject.org
> > Cc: Paul Durrant ; Stefano Stabellini
> > ; Anthony Perard ;
> > Kevin Wolf ; Max Reitz 
> > Subject: [PATCH v3] xen-disk: use an IOThread per instance
> > 
> > This patch allocates an IOThread object for each xen_disk instance and
> > sets the AIO context appropriately on connect. This allows processing
> > of I/O to proceed in parallel.
> > 
> > The patch also adds tracepoints into xen_disk to make it possible to
> > follow the state transtions of an instance in the log.
> > 
> > Signed-off-by: Paul Durrant 
> > ---
> > Cc: Stefano Stabellini 
> > Cc: Anthony Perard 
> > Cc: Kevin Wolf 
> > Cc: Max Reitz 
> > 
> > v3:
> >  - Use new iothread_create/destroy() functions
> > 
> > v2:
> >  - explicitly acquire and release AIO context in qemu_aio_complete() and
> >blk_bh()
> > ---
> >  hw/block/trace-events |  7 +++
> >  hw/block/xen_disk.c   | 53
> > ---
> >  2 files changed, 53 insertions(+), 7 deletions(-)
> > 
> > diff --git a/hw/block/trace-events b/hw/block/trace-events
> > index cb6767b3ee..962a3bfa24 100644
> > --- a/hw/block/trace-events
> > +++ b/hw/block/trace-events
> > @@ -10,3 +10,10 @@ virtio_blk_submit_multireq(void *vdev, void *mrb, int
> > start, int num_reqs, uint6
> >  # hw/block/hd-geometry.c
> >  hd_geometry_lchs_guess(void *blk, int cyls, int heads, int secs) "blk %p
> > LCHS %d %d %d"
> >  hd_geometry_guess(void *blk, uint32_t cyls, uint32_t heads, uint32_t secs,
> > int trans) "blk %p CHS %u %u %u trans %d"
> > +
> > +# hw/block/xen_disk.c
> > +xen_disk_alloc(char *name) "%s"
> > +xen_disk_init(char *name) "%s"
> > +xen_disk_connect(char *name) "%s"
> > +xen_disk_disconnect(char *name) "%s"
> > +xen_disk_free(char *name) "%s"
> > diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> > index e431bd89e8..f74fcd42d1 100644
> > --- a/hw/block/xen_disk.c
> > +++ b/hw/block/xen_disk.c
> > @@ -27,10 +27,12 @@
> >  #include "hw/xen/xen_backend.h"
> >  #include "xen_blkif.h"
> >  #include "sysemu/blockdev.h"
> > +#include "sysemu/iothread.h"
> >  #include "sysemu/block-backend.h"
> >  #include "qapi/error.h"
> >  #include "qapi/qmp/qdict.h"
> >  #include "qapi/qmp/qstring.h"
> > +#include "trace.h"
> > 
> >  /* - */
> > 
> > @@ -125,6 +127,9 @@ struct XenBlkDev {
> >  DriveInfo   *dinfo;
> >  BlockBackend*blk;
> >  QEMUBH  *bh;
> > +
> > +IOThread*iothread;
> > +AioContext  *ctx;
> >  };
> > 
> >  /* - */
> > @@ -596,9 +601,12 @@ static int ioreq_runio_qemu_aio(struct ioreq
> > *ioreq);
> >  static void qemu_aio_complete(void *opaque, int ret)
> >  {
> >  struct ioreq *ioreq = opaque;
> > +struct XenBlkDev *blkdev = ioreq->blkdev;
> > +
> > +aio_context_acquire(blkdev->ctx);
> > 
> >  if (ret != 0) {
> > -xen_pv_printf(&ioreq->blkdev->xendev, 0, "%s I/O error\n",
> > +xen_pv_printf(&blkdev->xendev, 0, "%s I/O error\n",
> >ioreq->req.operation == BLKIF_OP_READ ? "read" : 
> > "write");
> >  ioreq->aio_errors++;
> >  }
> > @@ -607,10 +615,10 @@ static void qemu_aio_complete(void *opaque, int
> > ret)
> >  if (ioreq->presync) {
> >  ioreq->presync = 0;
> >  ioreq_runio_qemu_aio(ioreq);
> > -return;
> > +goto done;
> >  }
> >  if (ioreq->aio_inflight > 0) {
> > -return;
> > +goto done;
> >  }
> > 
> >  if (xen_feature_grant_copy) {
> > @@ -647,16 +655,19 @@ static void qemu_aio_complete(void *opaque, int
> > ret)
> >  }
> >  case BLKIF_OP_READ:
> >  if (ioreq->status == BLKIF_RSP_OKAY) {
> > -block_acct_done(blk_get_stats(ioreq->blkdev->blk), 
> > &ioreq->acct);
> > +block_acct_done(blk_get_stats(blkdev->blk), &ioreq->acct);
> >  } else {
> > -block_acct_failed(blk_get_stats(ioreq->blkdev->blk), 
> > &ioreq->acct);
> > +block_acct_failed(blk_get_stats(blkdev->blk), &ioreq->acct);
> >  }
> >  break;
> >  case BLKIF_OP_DISCARD:
> >  default:
> >  break;
> >  }
> > -qemu_bh_schedule(ioreq->blkdev->bh);
> > +qemu_bh_schedule(blkdev->bh);
> > +
> > +done:
> > +aio_context_release(blkdev->ctx);
> >  }
> > 
> >  static bool blk_split_discard(struct ioreq *ioreq, blkif_sector_t
> > sector_number,
> > @@ -913,17 +924,29 @@ static void blk_handle_requests(struct XenBlkDev
> > *blkdev

[Xen-devel] [seabios test] 116204: regressions - FAIL

2017-11-15 Thread osstest service owner
flight 116204 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116204/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 
116187

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 116187 like 115539
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  63451fca13c75870e1703eb3e20584d91179aebc
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   12 days
Testing same since   115733  2017-11-10 17:19:59 Z5 days   10 attempts


People who touched revisions under test:
  Kevin O'Connor 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 63451fca13c75870e1703eb3e20584d91179aebc
Author: Kevin O'Connor 
Date:   Fri Nov 10 11:49:19 2017 -0500

docs: Note v1.11.0 release

Signed-off-by: Kevin O'Connor 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [BUG] Error applying XSA240 update 5 on 4.8 and 4.9 (patch 3 references CONFIG_PV_LINEAR_PT, 3285e75dea89, x86/mm: Make PV linear pagetables optional)

2017-11-15 Thread John Thomson
Hi,

I am having trouble applying the patch 3 from XSA240 update 5 for xen
stable 4.8 and 4.9 
xsa240 0003 contains:

CONFIG_PV_LINEAR_PT

from:

x86/mm: Make PV linear pagetables optional
https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=3285e75dea89afb0ef5b3ee39bd15194bd7cc110

I cannot find this string in an XSA, nor is an XSA referenced in the
commit.
Am I missing a patch, or doing something wrong?

xsa240-4.9 0002 and  3285e75dea "x86/mm: Make PV linear pagetables
optional" conflict as is.
xsa240-4.9 0003 applies after 3285e75dea "x86/mm: Make PV linear
pagetables optional"

Could we also refer to the third patch in the XSA resolution section
please?

Thank you,
--
John Thomson

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path

2017-11-15 Thread Thomas Gleixner
On Wed, 15 Nov 2017, Peter Zijlstra wrote:

> On Mon, Nov 13, 2017 at 06:06:02PM +0800, Quan Xu wrote:
> > From: Yang Zhang 
> > 
> > Implement a generic idle poll which resembles the functionality
> > found in arch/. Provide weak arch_cpu_idle_poll function which
> > can be overridden by the architecture code if needed.
> 
> No, we want less of those magic hooks, not more.
> 
> > Interrupts arrive which may not cause a reschedule in idle loops.
> > In KVM guest, this costs several VM-exit/VM-entry cycles, VM-entry
> > for interrupts and VM-exit immediately. Also this becomes more
> > expensive than bare metal. Add a generic idle poll before enter
> > real idle path. When a reschedule event is pending, we can bypass
> > the real idle path.
> 
> Why not do a HV specific idle driver?

If I understand the problem correctly then he wants to avoid the heavy
lifting in tick_nohz_idle_enter() in the first place, but there is already
an interesting quirk there which makes it exit early.  See commit
3c5d92a0cfb5 ("nohz: Introduce arch_needs_cpu"). The reason for this commit
looks similar. But lets not proliferate that. I'd rather see that go away.

But the irq_timings stuff is heading into the same direction, with a more
complex prediction logic which should tell you pretty good how long that
idle period is going to be and in case of an interrupt heavy workload this
would skip the extra work of stopping and restarting the tick and provide a
very good input into a polling decision.

This can be handled either in a HV specific idle driver or even in the
generic core code. If the interrupt does not arrive then you can assume
within the predicted time then you can assume that the flood stopped and
invoke halt or whatever.

That avoids all of that 'tunable and tweakable' x86 specific hackery and
utilizes common functionality which is mostly there already.

Thanks,

tglx






___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Boris Ostrovsky wrote:
> On 11/15/2017 04:50 PM, Stefano Stabellini wrote:
> >
> > Sorry, code style issue: one missing space in the comment. I'll send it
> > again separately
> 
> 
> I've already fixed this, no worries.

Thank you!!

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Stefano Stabellini wrote:
> On Wed, 15 Nov 2017, Boris Ostrovsky wrote:
> > On 11/15/2017 02:09 PM, Stefano Stabellini wrote:
> > > On Wed, 15 Nov 2017, Juergen Gross wrote:
> > > while(mutex_is_locked(&map->active.in_mutex.owner) ||
> > >   mutex_is_locked(&map->active.out_mutex.owner))
> > > cpu_relax();
> > >
> > > ?
> >  I'm not convinced there isn't a race.
> > 
> >  In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and 
> >  only
> >  then in_mutex is taken. What happens if pvcalls_front_release() resets
> >  sk_send_head and manages to test the mutex before the mutex is locked?
> > 
> >  Even in case this is impossible: the whole construct seems to be rather
> >  fragile.
> > > I agree it looks fragile, and I agree that it might be best to avoid the
> > > usage of in_mutex and out_mutex as refcounts. More comments on this
> > > below.
> > >
> > >  
> > >>> I think we can wait until pvcalls_refcount is 1 (i.e. it's only us) and
> > >>> not rely on mutex state.
> > >> Yes, this would work.
> > > Yes, I agree it would work and for the sake of getting something in
> > > shape for the merge window I am attaching a patch for it. Please go
> > > ahead with it. Let me know if you need anything else immediately, and
> > > I'll work on it ASAP.
> > >
> > >
> > >
> > > However, I should note that this is a pretty big hammer we are using:
> > > the refcount is global, while we only need to wait until it's only us
> > > _on this specific socket_.
> > 
> > Can you explain why socket is important?
> 
> Yes, of course: there are going to be many open sockets on a given
> pvcalls connection. pvcalls_refcount is global: waiting on
> pvcalls_refcount means waiting until any operations on any unrelated
> sockets stop. While we only need to wait until the operations on the one
> socket we want to close stop.
> 
> 
> > >
> > > We really need a per socket refcount. If we don't want to use the mutex
> > > internal counters, then we need another one.
> > >
> > > See the appended patch that introduces a per socket refcount. However,
> > > for the merge window, also using pvcalls_refcount is fine.
> > >
> > > The race Juergen is concerned about is only theoretically possible:
> > >
> > > recvmsg: release:
> > >   
> > >   test sk_send_head  clear sk_send_head
> > >   
> > >   grab in_mutex  
> > >  
> > >  test in_mutex
> > >
> > > Without kernel preemption is not possible for release to clear
> > > sk_send_head and test in_mutex after recvmsg tests sk_send_head and
> > > before recvmsg grabs in_mutex.
> > 
> > Sorry, I don't follow --- what does preemption have to do with this? If
> > recvmsg and release happen on different processors the order of
> > operations can be
> > 
> > CPU0   CPU1
> > 
> > test sk_send_head
> > 
> > clear sk_send_head
> > 
> > 
> > test in_mutex
> > free everything
> > grab in_mutex
> > 
> > I actually think RCU should take care of all of this.
> 
> Preemption could cause something very similar to happen, but your
> example is very good too, even better, because it could trigger the
> issue even with preemption disabled. I'll think more about this and
> submit a separate patch on top of the simple pvcalls_refcount patch
> below.
> 
> 
> 
> > But for now I will take your refcount-based patch. However, it also
> > needs comment update.
> > 
> > How about
> > 
> > /*
> >  * We need to make sure that send/rcvmsg on this socket has not started
> >  * before we've cleared sk_send_head here. The easiest (though not optimal)
> >  * way to guarantee this is to see that no pvcall (other than us) is in
> > progress.
> >  */
> 
> Yes, this is the patch:
> 
> ---
> 
> 
> xen/pvcalls: fix potential endless loop in pvcalls-front.c
> 
> mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
> take in_mutex on the first try, but you can't take out_mutex. Next times
> you call mutex_trylock() in_mutex is going to fail. It's an endless
> loop.
> 
> Solve the problem by waiting until the global refcount is 1 instead (the
> refcount is 1 when the only active pvcalls frontend function is
> pvcalls_front_release).
> 
> Reported-by: Dan Carpenter 
> Signed-off-by: Stefano Stabellini 
> CC: boris.ostrov...@oracle.com
> CC: jgr...@suse.com
> 
> 
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index 0c1ec68..54c0fda 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -1043,13 +1043,12 @@ int pvcalls_front_release(struct socket *so

Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Boris Ostrovsky
On 11/15/2017 04:50 PM, Stefano Stabellini wrote:
>
> Sorry, code style issue: one missing space in the comment. I'll send it
> again separately


I've already fixed this, no worries.

-boris


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10 2/2] xen/arm: p2m: Add more debug in get_page_from_gva

2017-11-15 Thread Julien Grall

Hi Andrew,

On 11/15/2017 07:43 PM, Andrew Cooper wrote:

On 15/11/17 19:34, Julien Grall wrote:

The function get_page_from_gva is used by copy_*_guest helpers to
translate a guest virtual address to a machine physical address and take
reference on the page.

There are a couple of errors path that will return the same value making
difficult to know the exact error. Add more debug in each error patch
only for debug-build.

This should help narrowing down the intermittent failure with the
hypercall GNTTABOP_copy (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html

Signed-off-by: Julien Grall 
---
  xen/arch/arm/p2m.c | 13 +
  1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f6b3d8e421..417609ede2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1428,16 +1428,29 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
vaddr_t va,
  par = gvirt_to_maddr(va, &maddr, flags);
  
  if ( par )

+{
+dprintk(XENLOG_G_DEBUG,
+"%pv: gvirt_to_maddr failed va=%#"PRIvaddr" flags=0x%lx 
par=%#"PRIx64"\n",
+v, va, flags, par);


Given the long round-trip time on debugging output, how about trying to
dump the guest and/or second stage table walk?


I thought about it, however at the moment dump_s1_guest_walk() is very 
minimal and would be add much value here. Thought, Now that we have code 
to do first-stage walk (see guest_walk_tables), we might be able to get 
a better dump here. Thought I am not sure it would be 4.10 material.


However, I think we could try to translate the guest VA to a guest PA 
using hardware instruction and then do the second-stage walk using 
dump_p2m_lookup.


Let me have a look.

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 0/6] x86/idle: add halt poll support

2017-11-15 Thread Konrad Rzeszutek Wilk
On Mon, Nov 13, 2017 at 06:05:59PM +0800, Quan Xu wrote:
> From: Yang Zhang 
> 
> Some latency-intensive workload have seen obviously performance
> drop when running inside VM. The main reason is that the overhead
> is amplified when running inside VM. The most cost I have seen is
> inside idle path.

Meaning an VMEXIT b/c it is an 'halt' operation ? And then going
back in guest (VMRESUME) takes time. And hence your latency gets
all whacked b/c of this?

So if I understand - you want to use your _full_ timeslice (of the guest)
without ever (or as much as possible) to go in the hypervisor?

Which means in effect you don't care about power-saving or CPUfreq
savings, you just want to eat the full CPU for snack?

> 
> This patch introduces a new mechanism to poll for a while before
> entering idle state. If schedule is needed during poll, then we
> don't need to goes through the heavy overhead path.

Schedule of what? The guest or the host?


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 2/2 v2] xen: Fix 16550 UART console for HP Moonshot (Aarch64) platform

2017-11-15 Thread Konrad Rzeszutek Wilk
On Thu, Nov 09, 2017 at 03:49:24PM +0530, Bhupinder Thakur wrote:
> The console was not working on HP Moonshot (HPE Proliant Aarch64) because
> the UART registers were accessed as 8-bit aligned addresses. However,
> registers are 32-bit aligned for HP Moonshot.
> 
> Since ACPI/SPCR table does not specify the register shift to be applied 
> to the
> register offset, this patch implements an erratum to correctly set the 
> register
> shift for HP Moonshot.
> 
> Similar erratum was implemented in linux:
> 
> commit 79a648328d2a604524a30523ca763fbeca0f70e3
> Author: Loc Ho 
> Date:   Mon Jul 3 14:33:09 2017 -0700
> 
> ACPI: SPCR: Workaround for APM X-Gene 8250 UART 32-alignment errata
> 
> APM X-Gene verion 1 and 2 have an 8250 UART with its register
> aligned to 32-bit. In addition, the latest released BIOS
> encodes the access field as 8-bit access instead 32-bit access.
> This causes no console with ACPI boot as the console
> will not match X-Gene UART port due to the lack of mmio32
> option.
> 
> Signed-off-by: Loc Ho 
> Acked-by: Greg Kroah-Hartman 
> Signed-off-by: Rafael J. Wysocki 

Any particular reason you offset this whole commit description by four spaces?

> 
> Signed-off-by: Bhupinder Thakur 
> ---
> CC: Andrew Cooper 
> CC: George Dunlap 
> CC: Ian Jackson 
> CC: Jan Beulich 
> CC: Konrad Rzeszutek Wilk 
> CC: Stefano Stabellini 
> CC: Tim Deegan 
> CC: Wei Liu 
> CC: Julien Grall 
> 
>  xen/drivers/char/ns16550.c | 42 --
>  1 file changed, 40 insertions(+), 2 deletions(-)

This is v2 posting, but I don't see what changed.

Usually you do something like this:

v1: New posting
v2: Nothing changed from v1.

or

v1: New posting
v2: Added more folks on CC
Added consts in XYZ..

> 
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index cf42fce..bb01c46 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -1517,6 +1517,33 @@ static int ns16550_init_dt(struct ns16550 *uart,
>  
>  #ifdef CONFIG_ACPI
>  #include 
> +/*
> + * APM X-Gene v1 and v2 UART hardware is an 16550 like device but has its
> + * register aligned to 32-bit. In addition, the BIOS also encoded the
> + * access width to be 8 bits. This function detects this errata condition.
> + */
> +static bool xgene_8250_erratum_present(struct acpi_table_spcr *tb)
> +{
> +bool xgene_8250 = false;
> +
> +if ( tb->interface_type != ACPI_DBG2_16550_COMPATIBLE )
> +return false;
> +
> +if ( memcmp(tb->header.oem_id, "APMC0D", ACPI_OEM_ID_SIZE) &&
> + memcmp(tb->header.oem_id, "HPE   ", ACPI_OEM_ID_SIZE) )
> +return false;
> +
> +if ( !memcmp(tb->header.oem_table_id, "XGENESPC",
> + ACPI_OEM_TABLE_ID_SIZE) && tb->header.oem_revision == 0 )
> +xgene_8250 = true;

Why not just 'return true' ?

> +
> +if ( !memcmp(tb->header.oem_table_id, "ProLiant",
> + ACPI_OEM_TABLE_ID_SIZE) && tb->header.oem_revision == 1 )
> +xgene_8250 = true;

And return true here too?
> +
> +return xgene_8250;

And then this is just 'return false' and you don't have xgen_8250 on the stack?

> +}
> +
>  static int ns16550_init_acpi(struct ns16550 *uart,
>   const void *data)
>  {
> @@ -1539,9 +1566,20 @@ static int ns16550_init_acpi(struct ns16550 *uart,
>  uart->io_base = spcr->serial_port.address;
>  uart->irq = spcr->interrupt;
>  uart->reg_width = spcr->serial_port.bit_width / 8;
> -uart->reg_shift = 0;
> -uart->io_size = UART_MAX_REG << uart->reg_shift;
>  
> +if ( xgene_8250_erratum_present(spcr) )
> +{
> +/*
> + * for xgene v1 and v2 the registers are 32-bit and so a

s/for/For/
> + * register shift of 2 has to be applied to get the
> + * correct register offset.
> + */
> +uart->reg_shift = 2;
> +}
> +else
> +uart->reg_shift = 0;
> +
> +uart->io_size = UART_MAX_REG << uart->reg_shift;
>  irq_set_type(spcr->interrupt, spcr->interrupt_type);
>  
>  return 0;
> -- 
> 2.7.4
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Boris Ostrovsky wrote:
> On 11/15/2017 02:09 PM, Stefano Stabellini wrote:
> > On Wed, 15 Nov 2017, Juergen Gross wrote:
> > while(mutex_is_locked(&map->active.in_mutex.owner) ||
> >   mutex_is_locked(&map->active.out_mutex.owner))
> > cpu_relax();
> >
> > ?
>  I'm not convinced there isn't a race.
> 
>  In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and only
>  then in_mutex is taken. What happens if pvcalls_front_release() resets
>  sk_send_head and manages to test the mutex before the mutex is locked?
> 
>  Even in case this is impossible: the whole construct seems to be rather
>  fragile.
> > I agree it looks fragile, and I agree that it might be best to avoid the
> > usage of in_mutex and out_mutex as refcounts. More comments on this
> > below.
> >
> >  
> >>> I think we can wait until pvcalls_refcount is 1 (i.e. it's only us) and
> >>> not rely on mutex state.
> >> Yes, this would work.
> > Yes, I agree it would work and for the sake of getting something in
> > shape for the merge window I am attaching a patch for it. Please go
> > ahead with it. Let me know if you need anything else immediately, and
> > I'll work on it ASAP.
> >
> >
> >
> > However, I should note that this is a pretty big hammer we are using:
> > the refcount is global, while we only need to wait until it's only us
> > _on this specific socket_.
> 
> Can you explain why socket is important?

Yes, of course: there are going to be many open sockets on a given
pvcalls connection. pvcalls_refcount is global: waiting on
pvcalls_refcount means waiting until any operations on any unrelated
sockets stop. While we only need to wait until the operations on the one
socket we want to close stop.


> >
> > We really need a per socket refcount. If we don't want to use the mutex
> > internal counters, then we need another one.
> >
> > See the appended patch that introduces a per socket refcount. However,
> > for the merge window, also using pvcalls_refcount is fine.
> >
> > The race Juergen is concerned about is only theoretically possible:
> >
> > recvmsg: release:
> >   
> >   test sk_send_head  clear sk_send_head
> >   
> >   grab in_mutex  
> >  
> >  test in_mutex
> >
> > Without kernel preemption is not possible for release to clear
> > sk_send_head and test in_mutex after recvmsg tests sk_send_head and
> > before recvmsg grabs in_mutex.
> 
> Sorry, I don't follow --- what does preemption have to do with this? If
> recvmsg and release happen on different processors the order of
> operations can be
> 
> CPU0   CPU1
> 
> test sk_send_head
> 
> clear sk_send_head
> 
> 
> test in_mutex
> free everything
> grab in_mutex
> 
> I actually think RCU should take care of all of this.

Preemption could cause something very similar to happen, but your
example is very good too, even better, because it could trigger the
issue even with preemption disabled. I'll think more about this and
submit a separate patch on top of the simple pvcalls_refcount patch
below.



> But for now I will take your refcount-based patch. However, it also
> needs comment update.
> 
> How about
> 
> /*
>  * We need to make sure that send/rcvmsg on this socket has not started
>  * before we've cleared sk_send_head here. The easiest (though not optimal)
>  * way to guarantee this is to see that no pvcall (other than us) is in
> progress.
>  */

Yes, this is the patch:

---


xen/pvcalls: fix potential endless loop in pvcalls-front.c

mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
take in_mutex on the first try, but you can't take out_mutex. Next times
you call mutex_trylock() in_mutex is going to fail. It's an endless
loop.

Solve the problem by waiting until the global refcount is 1 instead (the
refcount is 1 when the only active pvcalls frontend function is
pvcalls_front_release).

Reported-by: Dan Carpenter 
Signed-off-by: Stefano Stabellini 
CC: boris.ostrov...@oracle.com
CC: jgr...@suse.com


diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index 0c1ec68..54c0fda 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -1043,13 +1043,12 @@ int pvcalls_front_release(struct socket *sock)
wake_up_interruptible(&map->active.inflight_conn_req);
 
/*
-* Wait until there are no more waiters on the mutexes.
-* We know that no new waiters can be added because sk_send_head
-* is set to NULL -- we only need to wait for the 

[Xen-devel] [linux-3.18 test] 116193: tolerable FAIL - PUSHED

2017-11-15 Thread osstest service owner
flight 116193 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116193/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rumprun-i386 17 rumprun-demo-xenstorels/xenstorels.repeat fail 
REGR. vs. 116140

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116140
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116140
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116140
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116140
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116140
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116140
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116140
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux37cdf969145fd1590b1590df5b3695245e17f01b
baseline version:
 linux943dc0b3ef9f0168494d6dca305cd0cf53a0b3d4

Last test of basis   116140  2017-11-13 13:24:22 Z2 days
Testing same since   116193  2017-11-15 09:23:03 Z0 days1 attempts


People who touched revisions under test:
  Akinobu Mita 
  Alison Schofield 
  Andrey Ryabinin 
  Bartlomiej Zolnierkiewicz 
  Boris Ostrovsky 
  Borislav Petkov 
  Daniel Vetter 
  David Howells 
  David Lechner 
  David S. Miller 
  Dmitry Torokhov 
  Doug Ledford 
  Erez Shitrit 
  Eric Biggers 
  Fengguang Wu 
  Feras Daoud 
  Gilad Ben-Yossef 
  Greg Kroah-Hartman 
  Gustavo A. R. Silva 
  Herbert Xu 
  Ilya Dryomov 
  James Hogan 
  James Morris 
  Jonas Gorski 
  Jonathan Cameron 
  Juergen Gross 
  Laurent Pinchart 
  Leon Romanovsky 
  Maciej W. Rozycki 
  Magnus Öberg 
  Marc Kleine-Budde 
  Mark Rutland 
  Noralf Trønnes 
  Oswald Buddenh

Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Boris Ostrovsky
On 11/15/2017 02:50 PM, Boris Ostrovsky wrote:
> On 11/15/2017 02:09 PM, Stefano Stabellini wrote:
>>
>> However, I should note that this is a pretty big hammer we are using:
>> the refcount is global, while we only need to wait until it's only us
>> _on this specific socket_.
> Can you explain why socket is important?

Nevermind. I was thinking about *processor* socket (as in cores,
threads, packages etc. I am right now looking at a bug that deals with
core behavior ;-))

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Boris Ostrovsky
On 11/15/2017 02:09 PM, Stefano Stabellini wrote:
> On Wed, 15 Nov 2017, Juergen Gross wrote:
> while(mutex_is_locked(&map->active.in_mutex.owner) ||
>   mutex_is_locked(&map->active.out_mutex.owner))
> cpu_relax();
>
> ?
 I'm not convinced there isn't a race.

 In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and only
 then in_mutex is taken. What happens if pvcalls_front_release() resets
 sk_send_head and manages to test the mutex before the mutex is locked?

 Even in case this is impossible: the whole construct seems to be rather
 fragile.
> I agree it looks fragile, and I agree that it might be best to avoid the
> usage of in_mutex and out_mutex as refcounts. More comments on this
> below.
>
>  
>>> I think we can wait until pvcalls_refcount is 1 (i.e. it's only us) and
>>> not rely on mutex state.
>> Yes, this would work.
> Yes, I agree it would work and for the sake of getting something in
> shape for the merge window I am attaching a patch for it. Please go
> ahead with it. Let me know if you need anything else immediately, and
> I'll work on it ASAP.
>
>
>
> However, I should note that this is a pretty big hammer we are using:
> the refcount is global, while we only need to wait until it's only us
> _on this specific socket_.

Can you explain why socket is important?

>
> We really need a per socket refcount. If we don't want to use the mutex
> internal counters, then we need another one.
>
> See the appended patch that introduces a per socket refcount. However,
> for the merge window, also using pvcalls_refcount is fine.
>
> The race Juergen is concerned about is only theoretically possible:
>
> recvmsg: release:
>   
>   test sk_send_head  clear sk_send_head
>   
>   grab in_mutex  
>  
>  test in_mutex
>
> Without kernel preemption is not possible for release to clear
> sk_send_head and test in_mutex after recvmsg tests sk_send_head and
> before recvmsg grabs in_mutex.

Sorry, I don't follow --- what does preemption have to do with this? If
recvmsg and release happen on different processors the order of
operations can be

CPU0   CPU1

test sk_send_head

clear sk_send_head


test in_mutex
free everything
grab in_mutex

I actually think RCU should take care of all of this.

But for now I will take your refcount-based patch. However, it also
needs comment update.

How about

/*
 * We need to make sure that send/rcvmsg on this socket has not started
 * before we've cleared sk_send_head here. The easiest (though not optimal)
 * way to guarantee this is to see that no pvcall (other than us) is in
progress.
 */

-boris


>
> But maybe we need to disable kernel preemption in recvmsg and sendmsg to
> stay on the safe side?
>
> The patch below introduces a per active socket refcount, so that we
> don't have to rely on in_mutex and out_mutex for refcounting. It also
> disables preemption in sendmsg and recvmsg in the region described
> above.
>
> I don't think this patch should go in immediately. We can take our time
> to figure out the best fix.
>
>
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index 0c1ec68..8c1030b 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -68,6 +68,7 @@ struct sock_mapping {
>   struct pvcalls_data data;
>   struct mutex in_mutex;
>   struct mutex out_mutex;
> + atomic_t sock_refcount;
>  
>   wait_queue_head_t inflight_conn_req;
>   } active;
> @@ -497,15 +498,20 @@ int pvcalls_front_sendmsg(struct socket *sock, struct 
> msghdr *msg,
>   }
>   bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
>  
> + preempt_disable();
>   map = (struct sock_mapping *) sock->sk->sk_send_head;
>   if (!map) {
> + preempt_enable();
>   pvcalls_exit();
>   return -ENOTSOCK;
>   }
>  
> + atomic_inc(&map->active.sock_refcount);
>   mutex_lock(&map->active.out_mutex);
> + preempt_enable();
>   if ((flags & MSG_DONTWAIT) && !pvcalls_front_write_todo(map)) {
>   mutex_unlock(&map->active.out_mutex);
> + atomic_dec(&map->active.sock_refcount);
>   pvcalls_exit();
>   return -EAGAIN;
>   }
> @@ -528,6 +534,7 @@ int pvcalls_front_sendmsg(struct socket *sock, struct 
> msghdr *msg,
>   tot_sent = sent;
>  
>   mutex_unlock(&map->active.out_mutex);
> + atomic_dec(&map->active.sock_refcount

Re: [Xen-devel] [PATCH for-4.10 2/2] xen/arm: p2m: Add more debug in get_page_from_gva

2017-11-15 Thread Andrew Cooper
On 15/11/17 19:34, Julien Grall wrote:
> The function get_page_from_gva is used by copy_*_guest helpers to
> translate a guest virtual address to a machine physical address and take
> reference on the page.
>
> There are a couple of errors path that will return the same value making
> difficult to know the exact error. Add more debug in each error patch
> only for debug-build.
>
> This should help narrowing down the intermittent failure with the
> hypercall GNTTABOP_copy (see [1]).
>
> [1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html
>
> Signed-off-by: Julien Grall 
> ---
>  xen/arch/arm/p2m.c | 13 +
>  1 file changed, 13 insertions(+)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index f6b3d8e421..417609ede2 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1428,16 +1428,29 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
> vaddr_t va,
>  par = gvirt_to_maddr(va, &maddr, flags);
>  
>  if ( par )
> +{
> +dprintk(XENLOG_G_DEBUG,
> +"%pv: gvirt_to_maddr failed va=%#"PRIvaddr" flags=0x%lx 
> par=%#"PRIx64"\n",
> +v, va, flags, par);

Given the long round-trip time on debugging output, how about trying to
dump the guest and/or second stage table walk?

~Andrew

>  goto err;
> +}
>  
>  if ( !mfn_valid(maddr_to_mfn(maddr)) )
> +{
> +dprintk(XENLOG_G_DEBUG, "%pv: Invalid MFN %#"PRI_mfn"\n",
> +v, mfn_x(maddr_to_mfn(maddr)));
>  goto err;
> +}
>  
>  page = mfn_to_page(maddr_to_mfn(maddr));
>  ASSERT(page);
>  
>  if ( unlikely(!get_page(page, d)) )
> +{
> +dprintk(XENLOG_G_DEBUG, "%pv: Failing to acquire the MFN 
> %#"PRI_mfn"\n",
> +v, mfn_x(maddr_to_mfn(maddr)));
>  page = NULL;
> +}
>  
>  err:
>  if ( !page && p2m->mem_access_enabled )


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH for-4.10 0/2] xen/arm: Add more debug in get_page_from_gva

2017-11-15 Thread Julien Grall
Hi all,

It looks like get_page_from_gva intermittenly fails on the Arndale (see [1])
leading to Dom0 crashing.

At the moment it is a bit hard to know why given the hypervisor does not
provide much information.

This series add more debug in get_page_from_gva to hopefully narrow down
the issue.

I think the 2 patches are good candidate for Xen 4.10 as this may help fixing
a dom0 crash on the Arndale.

Cheers,

[1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html

Julien Grall (2):
  xen/arm: mm: Change the return value of gvirt_to_maddr
  xen/arm: p2m: Add more debug in get_page_from_gva

 xen/arch/arm/domain_build.c |  8 
 xen/arch/arm/kernel.c   |  8 
 xen/arch/arm/p2m.c  | 19 ---
 xen/include/asm-arm/mm.h|  9 +++--
 4 files changed, 31 insertions(+), 13 deletions(-)

-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH for-4.10 1/2] xen/arm: mm: Change the return value of gvirt_to_maddr

2017-11-15 Thread Julien Grall
Currently, gvirt_to_maddr return -EFAULT when the translation failed.
It might be useful to return the PAR_EL1 (Physical Address Register)
in such a case to get a better idea of the reason.

So modify the return value to use 0 on sucess or return the PAR on
failure.

The callers are modified to reflect the change of the return value.

Note that with the change in gvirt_to_maddr, ma needs to be initialized
to avoid GCC been confused (i.e value may be unitialized) with the new
construction.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/domain_build.c | 8 
 xen/arch/arm/kernel.c   | 8 
 xen/arch/arm/p2m.c  | 6 +++---
 xen/include/asm-arm/mm.h| 9 +++--
 4 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index bf29299707..c74f4dd69d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2002,15 +2002,15 @@ static void initrd_load(struct kernel_info *kinfo)
 
 for ( offs = 0; offs < len; )
 {
-int rc;
-paddr_t s, l, ma;
+uint64_t par;
+paddr_t s, l, ma = 0;
 void *dst;
 
 s = offs & ~PAGE_MASK;
 l = min(PAGE_SIZE - s, len);
 
-rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-if ( rc )
+par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
+if ( par )
 {
 panic("Unable to translate guest address");
 return;
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index c2755a9ab9..a6c6413712 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -167,15 +167,15 @@ static void kernel_zimage_load(struct kernel_info *info)
paddr, load_addr, load_addr + len);
 for ( offs = 0; offs < len; )
 {
-int rc;
-paddr_t s, l, ma;
+uint64_t par;
+paddr_t s, l, ma = 0;
 void *dst;
 
 s = offs & ~PAGE_MASK;
 l = min(PAGE_SIZE - s, len);
 
-rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-if ( rc )
+par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
+if ( par )
 {
 panic("Unable to map translate guest address");
 return;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 68b488997d..f6b3d8e421 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1414,7 +1414,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
vaddr_t va,
 struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *page = NULL;
 paddr_t maddr = 0;
-int rc;
+uint64_t par;
 
 /*
  * XXX: To support a different vCPU, we would need to load the
@@ -1425,9 +1425,9 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
vaddr_t va,
 
 p2m_read_lock(p2m);
 
-rc = gvirt_to_maddr(va, &maddr, flags);
+par = gvirt_to_maddr(va, &maddr, flags);
 
-if ( rc )
+if ( par )
 goto err;
 
 if ( !mfn_valid(maddr_to_mfn(maddr)) )
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index cd6dfb54b9..ad2f2a43dc 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -266,11 +266,16 @@ static inline void *maddr_to_virt(paddr_t ma)
 }
 #endif
 
-static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
+/*
+ * Translate a guest virtual address to a machine address.
+ * Return the fault information if the translation has failed else 0.
+ */
+static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
+  unsigned int flags)
 {
 uint64_t par = gva_to_ma_par(va, flags);
 if ( par & PAR_F )
-return -EFAULT;
+return par;
 *pa = (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK);
 return 0;
 }
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH for-4.10 2/2] xen/arm: p2m: Add more debug in get_page_from_gva

2017-11-15 Thread Julien Grall
The function get_page_from_gva is used by copy_*_guest helpers to
translate a guest virtual address to a machine physical address and take
reference on the page.

There are a couple of errors path that will return the same value making
difficult to know the exact error. Add more debug in each error patch
only for debug-build.

This should help narrowing down the intermittent failure with the
hypercall GNTTABOP_copy (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f6b3d8e421..417609ede2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1428,16 +1428,29 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
vaddr_t va,
 par = gvirt_to_maddr(va, &maddr, flags);
 
 if ( par )
+{
+dprintk(XENLOG_G_DEBUG,
+"%pv: gvirt_to_maddr failed va=%#"PRIvaddr" flags=0x%lx 
par=%#"PRIx64"\n",
+v, va, flags, par);
 goto err;
+}
 
 if ( !mfn_valid(maddr_to_mfn(maddr)) )
+{
+dprintk(XENLOG_G_DEBUG, "%pv: Invalid MFN %#"PRI_mfn"\n",
+v, mfn_x(maddr_to_mfn(maddr)));
 goto err;
+}
 
 page = mfn_to_page(maddr_to_mfn(maddr));
 ASSERT(page);
 
 if ( unlikely(!get_page(page, d)) )
+{
+dprintk(XENLOG_G_DEBUG, "%pv: Failing to acquire the MFN 
%#"PRI_mfn"\n",
+v, mfn_x(maddr_to_mfn(maddr)));
 page = NULL;
+}
 
 err:
 if ( !page && p2m->mem_access_enabled )
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Juergen Gross wrote:
> >>> while(mutex_is_locked(&map->active.in_mutex.owner) ||
> >>>   mutex_is_locked(&map->active.out_mutex.owner))
> >>> cpu_relax();
> >>>
> >>> ?
> >> I'm not convinced there isn't a race.
> >>
> >> In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and only
> >> then in_mutex is taken. What happens if pvcalls_front_release() resets
> >> sk_send_head and manages to test the mutex before the mutex is locked?
> >>
> >> Even in case this is impossible: the whole construct seems to be rather
> >> fragile.

I agree it looks fragile, and I agree that it might be best to avoid the
usage of in_mutex and out_mutex as refcounts. More comments on this
below.

 
> > I think we can wait until pvcalls_refcount is 1 (i.e. it's only us) and
> > not rely on mutex state.
> 
> Yes, this would work.

Yes, I agree it would work and for the sake of getting something in
shape for the merge window I am attaching a patch for it. Please go
ahead with it. Let me know if you need anything else immediately, and
I'll work on it ASAP.



However, I should note that this is a pretty big hammer we are using:
the refcount is global, while we only need to wait until it's only us
_on this specific socket_.

We really need a per socket refcount. If we don't want to use the mutex
internal counters, then we need another one.

See the appended patch that introduces a per socket refcount. However,
for the merge window, also using pvcalls_refcount is fine.

The race Juergen is concerned about is only theoretically possible:

recvmsg: release:
  
  test sk_send_head  clear sk_send_head
  
  grab in_mutex  
 
 test in_mutex

Without kernel preemption is not possible for release to clear
sk_send_head and test in_mutex after recvmsg tests sk_send_head and
before recvmsg grabs in_mutex.

But maybe we need to disable kernel preemption in recvmsg and sendmsg to
stay on the safe side?

The patch below introduces a per active socket refcount, so that we
don't have to rely on in_mutex and out_mutex for refcounting. It also
disables preemption in sendmsg and recvmsg in the region described
above.

I don't think this patch should go in immediately. We can take our time
to figure out the best fix.


diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index 0c1ec68..8c1030b 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -68,6 +68,7 @@ struct sock_mapping {
struct pvcalls_data data;
struct mutex in_mutex;
struct mutex out_mutex;
+   atomic_t sock_refcount;
 
wait_queue_head_t inflight_conn_req;
} active;
@@ -497,15 +498,20 @@ int pvcalls_front_sendmsg(struct socket *sock, struct 
msghdr *msg,
}
bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
 
+   preempt_disable();
map = (struct sock_mapping *) sock->sk->sk_send_head;
if (!map) {
+   preempt_enable();
pvcalls_exit();
return -ENOTSOCK;
}
 
+   atomic_inc(&map->active.sock_refcount);
mutex_lock(&map->active.out_mutex);
+   preempt_enable();
if ((flags & MSG_DONTWAIT) && !pvcalls_front_write_todo(map)) {
mutex_unlock(&map->active.out_mutex);
+   atomic_dec(&map->active.sock_refcount);
pvcalls_exit();
return -EAGAIN;
}
@@ -528,6 +534,7 @@ int pvcalls_front_sendmsg(struct socket *sock, struct 
msghdr *msg,
tot_sent = sent;
 
mutex_unlock(&map->active.out_mutex);
+   atomic_dec(&map->active.sock_refcount);
pvcalls_exit();
return tot_sent;
 }
@@ -600,13 +607,17 @@ int pvcalls_front_recvmsg(struct socket *sock, struct 
msghdr *msg, size_t len,
}
bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
 
+   preempt_disable();
map = (struct sock_mapping *) sock->sk->sk_send_head;
if (!map) {
+   preempt_enable();
pvcalls_exit();
return -ENOTSOCK;
}
 
+   atomic_inc(&map->active.sock_refcount);
mutex_lock(&map->active.in_mutex);
+   preempt_enable();
if (len > XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER))
len = XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER);
 
@@ -625,6 +636,7 @@ int pvcalls_front_recvmsg(struct socket *sock, struct 
msghdr *msg, size_t len,
ret = 0;
 
mutex_unlock(&map->active.in_mutex);
+   atomic_dec(&map->active.sock_refcount);
pvcalls_exit();
return ret;
 }
@@ -1048,8 +1060,7 @@ int pvcalls_front_release(struct socket *sock)
 * is set to NULL -- we only need to wait for the

[Xen-devel] [linux-next test] 116194: regressions - FAIL

2017-11-15 Thread osstest service owner
flight 116194 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116194/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 116164
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 116164
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 116164
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-boot  fail REGR. vs. 116164
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
116164
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 116164
 test-amd64-i386-xl-raw7 xen-boot fail REGR. vs. 116164
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 116164
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 116164
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 116164
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 116164
 test-amd64-i386-libvirt-qcow2  7 xen-bootfail REGR. vs. 116164
 test-armhf-armhf-xl-xsm   6 xen-install  fail REGR. vs. 116164
 test-amd64-i386-libvirt-xsm   7 xen-boot fail REGR. vs. 116164
 test-amd64-i386-libvirt-pair 10 xen-boot/src_hostfail REGR. vs. 116164
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_hostfail REGR. vs. 116164
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-boot  fail REGR. vs. 116164
 test-amd64-i386-freebsd10-amd64  7 xen-boot  fail REGR. vs. 116164

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116164
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116164
 build-amd64-pvops 6 kernel-build fail  like 116164
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116164
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116164
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116164
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-x

Re: [Xen-devel] [PATCH] xen/pvcalls: Add MODULE_LICENSE()

2017-11-15 Thread Stefano Stabellini
On Wed, 15 Nov 2017, Boris Ostrovsky wrote:
> Since commit ba1029c9cbc5 ("modpost: detect modules without a
> MODULE_LICENSE") modules without said macro will generate
> 
> WARNING: modpost: missing MODULE_LICENSE() in 
> 
> While at it, also add module description and attribution.
> 
> Signed-off-by: Boris Ostrovsky 

Ack. Thank you!


> ---
>  drivers/xen/pvcalls-back.c  | 4 
>  drivers/xen/pvcalls-front.c | 4 
>  2 files changed, 8 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index b209cd4..02cd33c 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -1238,3 +1238,7 @@ static void __exit pvcalls_back_fin(void)
>  }
>  
>  module_exit(pvcalls_back_fin);
> +
> +MODULE_DESCRIPTION("Xen PV Calls backend driver");
> +MODULE_AUTHOR("Stefano Stabellini ");
> +MODULE_LICENSE("GPL");
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index 2925b2f..9e40c2c 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -1273,3 +1273,7 @@ static int __init pvcalls_frontend_init(void)
>  }
>  
>  module_init(pvcalls_frontend_init);
> +
> +MODULE_DESCRIPTION("Xen PV Calls frontend driver");
> +MODULE_AUTHOR("Stefano Stabellini ");
> +MODULE_LICENSE("GPL");
> -- 
> 2.7.5
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] Xen Security Advisory 243 (CVE-2017-15592) - x86: Incorrect handling of self-linear shadow mappings with translated guests

2017-11-15 Thread Xen . org security team
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Xen Security Advisory CVE-2017-15592 / XSA-243
   version 5

 x86: Incorrect handling of self-linear shadow mappings with translated guests

UPDATES IN VERSION 5


New final patch, addressing a hypervisor crash the original fix caused,
which by itself represents another security issue (DoS).

ISSUE DESCRIPTION
=

The shadow pagetable code uses linear mappings to inspect and modify the
shadow pagetables.  A linear mapping which points back to itself is known as
self-linear.  For translated guests, the shadow linear mappings (being in a
separate address space) are not intended to be self-linear.  For
non-translated guests, the shadow linear mappings (being the same
address space) are intended to be self-linear.

When constructing a monitor pagetable for Xen to run on a vcpu with, the shadow
linear slot is filled with a self-linear mapping, and for translated guests,
shortly thereafter replaced with a non-self-linear mapping, when the guest's
%cr3 is shadowed.

However when writeable heuristics are used, the shadow mappings are used as
part of shadowing %cr3, causing the heuristics to be applied to Xen's
pagetables, not the guest shadow pagetables.

While investigating, it was also identified that PV auto-translate mode was
insecure.  This mode was removed in Xen 4.7 due to being unused, unmaintained
and presumed broken.  We are not aware of any guest implementation of PV
auto-translate mode.

IMPACT
==

A malicious or buggy HVM guest may cause a hypervisor crash, resulting in a
Denial of Service (DoS) affecting the entire host, or cause hypervisor memory
corruption.  We cannot rule out a guest being able to escalate its privilege.

VULNERABLE SYSTEMS
==

All versions of Xen are vulnerable.

HVM guests using shadow mode paging can exploit this vulnerability.
HVM guests using Hardware Assisted Paging (HAP) as well as PV guests
cannot exploit this vulnerability.

ARM systems are not vulnerable.

MITIGATION
==

Running only PV guests will avoid this vulnerability.

Where the HVM guest is explicitly configured to use shadow paging (eg
via the `hap=0' xl domain configuration file parameter), changing to
HAP (eg by setting `hap=1') will avoid exposing the vulnerability to
those guests.  HAP is the default (in upstream Xen), where the
hardware supports it; so this mitigation is only applicable if HAP has
been disabled by configuration.

CREDITS
===

This issue was discovered by Andrew Cooper of Citrix.

RESOLUTION
==

Applying the appropriate attached set of patches resolves this issue.

xsa243-[12].patchxen-unstable, Xen 4.9.x
xsa243-{4.8-1,2}.patch   Xen 4.8.x
xsa243-{4.7-1,2}.patch   Xen 4.7.x
xsa243-{4.6-[12],2}.patchXen 4.6.x
xsa243-4.{6-1,5-[23]}.patch  Xen 4.5.x

$ sha256sum xsa243*
a5b484db80346f7e75c7921ee4780567f04b9f9b4620c0cde4bfa1df3ac0f87f  xsa243-1.patch
013cff90312305b7f4ce6818a25760bcfca61bfadd860b694afa04d56e60c563  xsa243-2.patch
79e1c5e088eee8e78aa67895a29d611352c64251854e4c5129e33c85988a47a5  
xsa243-4.5-2.patch
b838f387747c6e45314f44202c018ad907a8119bb7d8330fc875dc4243626e78  
xsa243-4.5-3.patch
722073aad1e734e24b0b79d03a1957e491f3616fe6e244a89050f7a50f8f356b  
xsa243-4.6-1.patch
94cb346c486f88f2f4f701564017e1997e518a5a14218f0e38ff882c60fb382c  
xsa243-4.6-2.patch
465ba9e3293591a3c84c122ffd73474fe96483f5e21565440d5fbc207fa4c4a9  
xsa243-4.7-1.patch
f8e471b42502905a442d43934ac339663a6124118c9762b31f2ad930fd532e64  
xsa243-4.8-1.patch
$

DEPLOYMENT DURING EMBARGO
=

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBCAAGBQJaDHWmAAoJEIP+FMlX6CvZbKgH/RsntzKBpEJQfElzpN15+eMM
Kakfq3Mzad4JuaOb5dVy4fhE88gHgE344mmiUqu/h+pwRKofC/a3DvS4GPO8NJAI
Zdu1CCkuZ3/L3IpbtdGsLMw1EZGQLXNsQGWCgDB3sNAT6Ue+FvmJbiP0RkIO+qXw
7KSCfs2NtMvkj17jt5ZYj2Y43d0IvWirR3LHkJIDR0ZPYkX5WagAmuOom3bj57lt
0Q/GC40x+kO9lQSw299CZxuHTi34zu0V4/HRtfSSVph5Gbcb+4kxMqv8e3wRfgg9
kBF6FD12oLJkArIeb/J72m13RTiIJDiG3VltS9B2Vmm9+LZOhBvbsfILrePk0qE

[Xen-devel] [qemu-mainline test] 116190: tolerable FAIL - PUSHED

2017-11-15 Thread osstest service owner
flight 116190 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116190/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116126
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116126
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu1fa0f627d03cd0d0755924247cafeb42969016bf
baseline version:
 qemuu4ffa88c99c54d2a30f79e3dbecec50b023eff1c8

Last test of basis   116126  2017-11-13 00:49:34 Z2 days
Failing since116146  2017-11-13 18:53:48 Z1 days4 attempts
Testing same since   116173  2017-11-14 22:20:27 Z0 days2 attempts


People who touched revisions under test:
  Alberto Garcia 
  Alex Bennée 
  Alexey Kardashevskiy 
  Alistair Francis 
  Christian Borntraeger 
  Cornelia Huck 
  David Gibson 
  Emilio G. Cota 
  Eric Blake 
  Fam Zheng 
  Gerd Hoffmann 
  Greg Kurz 
  Jason Wang 
  Jeff Cody 
  Jens Freimann 
  Mao Zhongyi 
  Max Reitz 
  Mike Nawrocki 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Prasad J Pandit 
  Richard Henderson 
  Sam Bobroff 
  Samuel Thibault 
  Sergio Lopez 
  Stefan Hajnoczi 
  Subbaraya Sundeep 
  Tao Wu 
  Vladimir Sementsov-Ogievskiy 
  Yi Min Zhao 
  Zhang Chen 
  Zhengui 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt 

Re: [Xen-devel] [xen-unstable test] 116178: regressions - FAIL

2017-11-15 Thread Julien Grall
Hi,

On 11/15/2017 11:29 AM, osstest service owner wrote:
> flight 116178 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/116178/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat fail REGR. vs. 
> 116161

The kernel is hitting a BUG() in gnttab_batch_copy() (see stack trace). This 
seems to
be because GNTTABOP_copy is failing. Looking at the serial log, this seems to 
happen 
time to time on the Arndale (not on the cubietruck) with different version of 
the kernel.

I have reported a similar error last year ([1]), and still have no clue why 
page-table 
translation might fail...

I am going to send a patch adding a bit more debug in the function doing the
translation from the guest PA to the host PA. Hopefully, it might tell us a bit 
more
what's going on.

Cheers,

[1] https://lists.xen.org/archives/html/xen-devel/2016-07/msg02571.htm

Nov 15 05:23:47.715172 [ 2156.529661] [ cut here ]

Nov 15 05:24:04.483235 [ 2156.532899] kernel BUG at 
drivers/xen/grant-table.c:770!

Nov 15 05:24:04.491191 [ 2156.538281] Internal error: Oops - BUG: 0 [#1] SMP ARM

Nov 15 05:24:04.491233 [ 2156.543488] Modules linked in: xen_gntalloc 
snd_soc_i2s snd_soc_idma snd_soc_s3c_dma snd_soc_core snd_pcm_dmaengine snd_pcm 
wm8994_regulator snd_timer snd s5p_mfc wm8994 soundcore ac97_bus 
videobuf2_dma_contig videobuf2_memops pwm_samsung videobuf2_v4l2 videobuf2_core 
v4l2_common videodev media s5p_sss usb3503 rtc_s3c dwc3 dwc3_exynos clk_s2mps11 
s5m8767 dw_mmc_exynos dw_mmc_pltfm dw_mmc phy_exynos5250_sata phy_exynos_usb2 
ohci_exynos ehci_exynos phy_exynos5_usbdrd

Nov 15 05:24:04.531197 [ 2156.584721] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 
4.9.20+ #1

Nov 15 05:24:04.539232 [ 2156.590793] Hardware name: SAMSUNG EXYNOS (Flattened 
Device Tree)

Nov 15 05:24:04.547155 [ 2156.596957] task: c1207540 task.stack: c120

Nov 15 05:24:04.547179 [ 2156.601564] PC is at gnttab_batch_copy+0xd0/0xe4

Nov 15 05:24:04.555165 [ 2156.606246] LR is at gnttab_batch_copy+0x1c/0xe4

Nov 15 05:24:04.563145 [ 2156.610932] pc : []lr : []
psr: a113

Nov 15 05:24:04.563169 [ 2156.610932] sp : c1201d98  ip : deadbeef  fp : 
e160c000

Nov 15 05:24:04.57 [ 2156.622563] r10: c1201e90  r9 : 0040  r8 : 
0040

Nov 15 05:24:04.579105 [ 2156.627858] r7 : e160c000  r6 : 0001  r5 : 
e1610e00  r4 : e160e6d8

Nov 15 05:24:04.587154 [ 2156.634453] r3 : e160e6d8  r2 : deadbeef  r1 : 
deadbeef  r0 : fff2

Nov 15 05:24:04.587189 [ 2156.641054] Flags: NzCv  IRQs on  FIQs on  Mode 
SVC_32  ISA ARM  Segment none

Nov 15 05:24:04.595168 [ 2156.648256] Control: 10c5387d  Table: 7684006a  DAC: 
0051

Nov 15 05:24:04.603139 [ 2156.654072] Process swapper/0 (pid: 0, stack limit = 
0xc1200220)

Nov 15 05:24:04.611224 [ 2156.660147] Stack: (0xc1201d98 to 0xc1202000)

Nov 15 05:24:04.611279 [ 2156.664574] 1d80: 
  e160e6d8 

Nov 15 05:24:04.619248 [ 2156.672824] 1da0: e1610e00  e160c000 c08b65a0 
da3cb580  0002 c039c1f8

Nov 15 05:24:04.627228 [ 2156.681080] 1dc0: c8c27db0 c039c85c 000f4240  
c1201df4 e160e6d8 0e29ffbb 01f6

Nov 15 05:24:04.635141 [ 2156.689314] 1de0: f26f65f2 2193  c8c27d48 
0008  0001 005f

Nov 15 05:24:04.643124 [ 2156.697561] 1e00: c8c27dfc 6193 c8c27d48 c090be6c 
000f4240   c0372be4

Nov 15 05:24:04.651211 [ 2156.705807] 1e20: c03103bc c058eec4 0002d51e c0752c50 
c8c27d48 0f268479 0001 e160c020

Nov 15 05:24:04.667186 [ 2156.714053] 1e40: e160c020  e160c000 0040 
0040 c1201e90 192b8000 c08b9124

Nov 15 05:24:04.675174 [ 2156.722298] 1e60: e160c020 0001 0002d520 012c 
c1202d00 c0a5586c 0008 da3ce740

Nov 15 05:24:04.683182 [ 2156.730545] 1e80: c1116740 c131473e c1204f20 c1204f20 
c1201e90 c1201e90 c1201e98 c1201e98

Nov 15 05:24:04.691270 [ 2156.738791] 1ea0:   0003 c120208c 
c120 c1202080 0100 c1202080

Nov 15 05:24:04.699158 [ 2156.747037] 1ec0: 4003 c0348760 df003000 c11151a8 
c1201ec8 c131b200 000a 0002d51f

Nov 15 05:24:04.707231 [ 2156.755282] 1ee0: c1202d00 00200100 d9808000 c1113e04 
  0001 d9808000

Nov 15 05:24:04.715132 [ 2156.763529] 1f00: df003000 c11151a8 c12030a0 c0348b7c 
0095 c038aa74 c123f3c8 c1203440

Nov 15 05:24:04.723132 [ 2156.771785] 1f20: df00200c c1201f50 df002000 c0301754 
c030928c c0309290 6013 

Nov 15 05:24:04.731168 [ 2156.780021] 1f40: c1201f84  c120 c030d10c 
0001  0001 c031c520

Nov 15 05:24:04.739171 [ 2156.788266] 1f60: c120 c1203034 c1203098 0001 
  c11151a8 c12030a0

Nov 15 05:24:04.747165 [ 2156.796513] 1f80: 192b8000 c1201fa0 c030928c c0309290 
6013  0051 

Nov 15 05:24:04.755233 [ 

Re: [Xen-devel] [PATCH] xen/pvcalls: Add MODULE_LICENSE()

2017-11-15 Thread Juergen Gross
On 15/11/17 17:37, Boris Ostrovsky wrote:
> Since commit ba1029c9cbc5 ("modpost: detect modules without a
> MODULE_LICENSE") modules without said macro will generate
> 
> WARNING: modpost: missing MODULE_LICENSE() in 
> 
> While at it, also add module description and attribution.
> 
> Signed-off-by: Boris Ostrovsky 

Reviewed-by: Juergen Gross 


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH] xen/pvcalls: Add MODULE_LICENSE()

2017-11-15 Thread Boris Ostrovsky
Since commit ba1029c9cbc5 ("modpost: detect modules without a
MODULE_LICENSE") modules without said macro will generate

WARNING: modpost: missing MODULE_LICENSE() in 

While at it, also add module description and attribution.

Signed-off-by: Boris Ostrovsky 
---
 drivers/xen/pvcalls-back.c  | 4 
 drivers/xen/pvcalls-front.c | 4 
 2 files changed, 8 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index b209cd4..02cd33c 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -1238,3 +1238,7 @@ static void __exit pvcalls_back_fin(void)
 }
 
 module_exit(pvcalls_back_fin);
+
+MODULE_DESCRIPTION("Xen PV Calls backend driver");
+MODULE_AUTHOR("Stefano Stabellini ");
+MODULE_LICENSE("GPL");
diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index 2925b2f..9e40c2c 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -1273,3 +1273,7 @@ static int __init pvcalls_frontend_init(void)
 }
 
 module_init(pvcalls_frontend_init);
+
+MODULE_DESCRIPTION("Xen PV Calls frontend driver");
+MODULE_AUTHOR("Stefano Stabellini ");
+MODULE_LICENSE("GPL");
-- 
2.7.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH 00/31] CPUFreq on ARM

2017-11-15 Thread Jassi Brar
On 15 November 2017 at 18:58, Andre Przywara  wrote:
> Hi,
>
> On 15/11/17 03:03, Jassi Brar wrote:
>> On 15 November 2017 at 02:16, Oleksandr Tyshchenko  
>> wrote:
>>> On Tue, Nov 14, 2017 at 12:49 PM, Andre Przywara
>>>  wrote:
>>>
>>
>>> 3. Direct ported SCPI protocol, mailbox infrastructure and the ARM SMC 
>>> triggered mailbox driver. All components except mailbox driver are in 
>>> mainline Linux.
>>
>> Why do you actually need this mailbox framework?
>>>
>> It is unnecessary if you are always going to use one particular signal
>> mechanism, say SMC. However ...
>>
>> Actually I just
>> proposed the SMC driver the make it fit into the Linux framework. All we
>> actually need for SCPI is to write a simple command into some memory and
>> "press a button". I don't see a need to import the whole Linux
>> framework, especially as our mailbox usage is actually just a corner
>> case of the mailbox's capability (namely a "single-bit" doorbell).
>> The SMC use case is trivial to implement, and I believe using the Juno
>> mailbox is similarly simple, for instance.
>>>
>> ... Its going to be SMC and MHU now... and you talk about Rockchip as
>> well later. That becomes unwieldy.
>>
>>

> Protocol relies on mailbox feature, so I ported mailbox too. I think,
> it would be much more easy for me to just add
> a few required commands handling with issuing SMC call and without any
> mailbox infrastructure involved.
> But, I want to show what is going on and what place these things come 
> from.

 I appreciate that, but I think we already have enough "bloated" Linux +
 glue code in Xen. And in particular the Linux mailbox framework is much
 more powerful than we need for SCPI, so we have a lot of unneeded
 functionality.
>>>
>> That is a painful misconception.
>> Mailbox api is designed to be (almost) as light weight as being
>> transparent. Please have a look at mbox_send_message() and see how
>> negligible overhead it adds for "SMC controller" that you compare
>> against here. just integer manipulations protected by a spinlock.
>> Of course if your protocol needs async messaging, you pay the price
>> but only fair.
>
> Normally I would agree on importing some well designed code rather than
> hacking up something yourself.
>
> BUT: This is Xen, which is meant to be lean, micro-kernel like
> hypervisor. If we now add code from Linux, there must be a good
> rationale why we need it. And this is why we need to make sure that
> CPUFreq is really justified in the first place.
> So I am a bit wary that pulling some rather unrelated Linux *framework*
> into Xen bloats it up and introduces more burden to the trusted code
> base. With SCPI being the only user, this controller - client
> abstraction is not really needed. And to just trigger an interrupt on
> the SCP side we just need to:
> writel(BIT(channel), base_addr + CPU_INTR_H_SET);
>
> I expect other mailboxes to be similarly simple.
> The only other code needed is some DT parsing.
>
> That being said I haven't look too closely how much code this actually
> pulls in, it is just my gut feeling that it's a bit over the top,
> conceptually.
>
Please do have a look and let me know how it drags the SCP down.

Thanks

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH 00/31] CPUFreq on ARM

2017-11-15 Thread Andre Przywara
Hi,

On 14/11/17 20:46, Oleksandr Tyshchenko wrote:
> On Tue, Nov 14, 2017 at 12:49 PM, Andre Przywara
>  wrote:
>> Hi,
> Hi Andre
> 
>>
>> On 13/11/17 19:40, Oleksandr Tyshchenko wrote:
>>> On Mon, Nov 13, 2017 at 5:21 PM, Andre Przywara
>>>  wrote:
 Hi,
>>> Hi Andre,
>>>

 thanks very much for your work on this!
>>> Thank you for your comments.
>>>

 On 09/11/17 17:09, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko 
>
> Hi, all.
>
> The purpose of this RFC patch series is to add CPUFreq support to Xen on 
> ARM.
> Motivation of hypervisor based CPUFreq is to enable one of the main PM 
> use-cases in virtualized system powered by Xen hypervisor. Rationale 
> behind this activity is that CPU virtualization is done by hypervisor and 
> the guest OS doesn't actually know anything about physical CPUs because 
> it is running on virtual CPUs. It is quite clear that a decision about 
> frequency change should be taken by hypervisor as only it has information 
> about actual CPU load.

 Can you please sketch your usage scenario or workloads here? I can think
 of quite different scenarios (oversubscribed server vs. partitioning
 RTOS guests, for instance). The usefulness of CPUFreq and the trade-offs
 in the design are quite different between those.
>>> We keep embedded use-cases in mind. For example, it is a system with
>>> several domains,
>>> where one domain has most critical SW running on and other domain(s)
>>> are, let say, for entertainment purposes.
>>> I think, the CPUFreq is useful where power consumption is a question.
>>
>> Does the SoC you use allow different frequencies for each core? Or is it
>> one frequency for all cores? Most x86 CPU allow different frequencies
>> for each core, AFAIK. Just having the same OPP for the whole SoC might
>> limit the usefulness of this approach in general.
> Good question. All cores in a cluster share the same clock. It is
> impossible to set different frequencies on the cores inside one
> cluster.
> 
>>
 In general I doubt that a hypervisor scheduling vCPUs is in a good
 position to make a decision on the proper frequency physical CPUs should
 run with. From all I know it's already hard for an OS kernel to make
 that call. So I would actually expect that guests provide some input,
 for instance by signalling OPP change request up to the hypervisor. This
 could then decide to act on it - or not.
>>> Each running guest sees only part of the picture, but hypervisor has
>>> the whole picture, it knows all about CPU, measures CPU load and able
>>> to choose required CPU frequency to run on.
>>
>> But based on what data? All Xen sees is a vCPU trapping on MMIO, a
>> hypercall or on WFI, for that matter. It does not know much more about
>> the guest, especially it's rather clueless about what the guest OS
>> actually intended to do.
>> For instance Linux can track the actual utilization of a core by keeping
>> statistics of runnable processes and monitoring their time slice usage.
>> It can see that a certain process exhibits periodical, but bursty CPU
>> usage, which may hint that is could run at lower frequency. Xen does not
>> see this fine granular information.
>>
>>> I am wondering, does Xen
>>> need additional input from guests for make a decision?
>>
>> I very much believe so. The guest OS is in a much better position to
>> make that call.
>>
>>> BTW, currently guest domain on ARM doesn't even know how many physical
>>> CPUs the system has and what are these OPPs. When creating guest
>>> domain Xen inserts only dummy CPU nodes. All CPU info, such as clocks,
>>> OPPs, thermal, etc are not passed to guest.
>>
>> Sure, because this is what virtualization is about. And I am not asking
>> for unconditionally allowing any guest to change frequency.
>> But there could be certain use cases where this could be considered:
>> Think about your "critical SW" mentioned above, which is probably some
>> RTOS, also possibly running on pinned vCPUs. For that
>> (latency-sensitive) guest it might be well suited to run at a lower
>> frequency for some time, but how should Xen know about this?
>> "Normally" the best strategy to save power is to run as fast as
>> possible, finish all outstanding work, then put the core to sleep.
>> Because not running at all consumes much less energy than running at a
>> reduced frequency. But this may not be suitable for an RTOS.
> Saying "one domain has most critical SW running on" I meant hardware
> domain/driver domain or even other
> domain which perform some important tasks (disk, net, display, camera,
> whatever) which treated by the whole system as critical
> and must never fail. Other domains, for example, it might be Android
> as well, are not critical at all from the system point of view.
> Being honest, I haven't considered yet using CPUFreq in system where
> some RT guest is present.
> I think it is something that 

Re: [Xen-devel] [PATCH for-4.10 v2] x86/hvm: Fix altp2m_vcpu_enable_notify error handling

2017-11-15 Thread Andrew Cooper
On 15/11/17 14:10, Jan Beulich wrote:
 On 15.11.17 at 14:47,  wrote:
>> The altp2m_vcpu_enable_notify subop handler might skip calling
>> rcu_unlock_domain() after rcu_lock_current_domain().  Albeit since both
>> rcu functions are no-ops when run on the current domain, this doesn't
>> really have repercussions.
>>
>> The second change is adding a missing break that would have potentially
>> enabled #VE for the current domain even if it had intended to enable it
>> for another one (not a supported functionality).
> Thanks, much better.
>
>> Signed-off-by: Adrian Pop 
>> Reviewed-by: Andrew Cooper 
> Reviewed-by: Jan Beulich 

FOAD, Requesting a release ack for this change.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2] x86/hvm: Fix altp2m_vcpu_enable_notify error handling

2017-11-15 Thread Jan Beulich
>>> On 15.11.17 at 14:47,  wrote:
> The altp2m_vcpu_enable_notify subop handler might skip calling
> rcu_unlock_domain() after rcu_lock_current_domain().  Albeit since both
> rcu functions are no-ops when run on the current domain, this doesn't
> really have repercussions.
> 
> The second change is adding a missing break that would have potentially
> enabled #VE for the current domain even if it had intended to enable it
> for another one (not a supported functionality).

Thanks, much better.

> Signed-off-by: Adrian Pop 
> Reviewed-by: Andrew Cooper 

Reviewed-by: Jan Beulich 

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2] x86/hvm: Fix altp2m_vcpu_enable_notify error handling

2017-11-15 Thread Adrian Pop
The altp2m_vcpu_enable_notify subop handler might skip calling
rcu_unlock_domain() after rcu_lock_current_domain().  Albeit since both
rcu functions are no-ops when run on the current domain, this doesn't
really have repercussions.

The second change is adding a missing break that would have potentially
enabled #VE for the current domain even if it had intended to enable it
for another one (not a supported functionality).

Signed-off-by: Adrian Pop 
Reviewed-by: Andrew Cooper 
---
changes in v2:
- reword the commit message
---
 xen/arch/x86/hvm/hvm.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 205b4cb685..0af498a312 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4534,12 +4534,18 @@ static int do_altp2m_op(
 
 if ( a.u.enable_notify.pad || a.domain != DOMID_SELF ||
  a.u.enable_notify.vcpu_id != curr->vcpu_id )
+{
 rc = -EINVAL;
+break;
+}
 
 if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
  mfn_eq(get_gfn_query_unlocked(curr->domain,
 a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
-return -EINVAL;
+{
+rc = -EINVAL;
+break;
+}
 
 vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
 altp2m_vcpu_update_vmfunc_ve(curr);
-- 
2.15.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH 00/31] CPUFreq on ARM

2017-11-15 Thread Andre Przywara
Hi,

On 15/11/17 03:03, Jassi Brar wrote:
> On 15 November 2017 at 02:16, Oleksandr Tyshchenko  
> wrote:
>> On Tue, Nov 14, 2017 at 12:49 PM, Andre Przywara
>>  wrote:
>>
> 
>> 3. Direct ported SCPI protocol, mailbox infrastructure and the ARM SMC 
>> triggered mailbox driver. All components except mailbox driver are in 
>> mainline Linux.
>
> Why do you actually need this mailbox framework?
>>
> It is unnecessary if you are always going to use one particular signal
> mechanism, say SMC. However ...
> 
> Actually I just
> proposed the SMC driver the make it fit into the Linux framework. All we
> actually need for SCPI is to write a simple command into some memory and
> "press a button". I don't see a need to import the whole Linux
> framework, especially as our mailbox usage is actually just a corner
> case of the mailbox's capability (namely a "single-bit" doorbell).
> The SMC use case is trivial to implement, and I believe using the Juno
> mailbox is similarly simple, for instance.
>>
> ... Its going to be SMC and MHU now... and you talk about Rockchip as
> well later. That becomes unwieldy.
> 
> 
>>>
 Protocol relies on mailbox feature, so I ported mailbox too. I think,
 it would be much more easy for me to just add
 a few required commands handling with issuing SMC call and without any
 mailbox infrastructure involved.
 But, I want to show what is going on and what place these things come from.
>>>
>>> I appreciate that, but I think we already have enough "bloated" Linux +
>>> glue code in Xen. And in particular the Linux mailbox framework is much
>>> more powerful than we need for SCPI, so we have a lot of unneeded
>>> functionality.
>>
> That is a painful misconception.
> Mailbox api is designed to be (almost) as light weight as being
> transparent. Please have a look at mbox_send_message() and see how
> negligible overhead it adds for "SMC controller" that you compare
> against here. just integer manipulations protected by a spinlock.
> Of course if your protocol needs async messaging, you pay the price
> but only fair.

Normally I would agree on importing some well designed code rather than
hacking up something yourself.

BUT: This is Xen, which is meant to be lean, micro-kernel like
hypervisor. If we now add code from Linux, there must be a good
rationale why we need it. And this is why we need to make sure that
CPUFreq is really justified in the first place.
So I am a bit wary that pulling some rather unrelated Linux *framework*
into Xen bloats it up and introduces more burden to the trusted code
base. With SCPI being the only user, this controller - client
abstraction is not really needed. And to just trigger an interrupt on
the SCP side we just need to:
writel(BIT(channel), base_addr + CPU_INTR_H_SET);

I expect other mailboxes to be similarly simple.
The only other code needed is some DT parsing.

That being said I haven't look too closely how much code this actually
pulls in, it is just my gut feeling that it's a bit over the top,
conceptually.

>>> If we just want to support CPUfreq using SCPI via SMC/Juno MHU/Rockchip
>>> mailbox, we can get away with a *much* simpler solution.
>>
>> Agree, but I am afraid that simplifying things now might lead to some
>> difficulties when there is a need
>> to integrate a little bit different mailbox IP. Also, we need to
>> recheck if SCMI, we might want to support as well,
>> have the similar interface with mailbox.
>>
> Exactly.

My understanding is that the SCMI transport protocol is not different
from that used by SCPI.

Cheers,
Andre.

>>> - We would need to port mailbox drivers one-by-one anyway, so we could
>>> as well implement the simple "press-the-button" subset for each mailbox
>>> separately.
>>
> Is it about virtual controller?
> 
>>> The interface between the SCPI code and the mailbox is
>>> probably just "signal_mailbox()".
>>
> Afterall we should have the following to spread the nice feeling of
> "supporting doorbell controllers"  :)
> 
> mailbox_client.h
> ***
> void signal_mailbox(struct mbox_chan *chan)
> {
>(void)mbox_send_message(chan, NULL);
>mbox_client_txdone(chan, 0);
> }
> 
> 
> Cheers!
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [OSSTEST PATCH] ts-xen-build-prep: Install libelf-dev for benefit of linux.git

2017-11-15 Thread Ian Jackson
Juergen Gross writes ("Re: [OSSTEST PATCH] ts-xen-build-prep: Install 
libelf-dev for benefit of linux.git"):
> The kernel now is using objtool to create unwind information. This needs
> libelf to work. Advantage is that this approach no longer depends on
> assembler sources being heavily annotated with unwind hints.

Thanks.  I have adopted that for the commit message.

> Acked-by: Juergen Gross 

I will push this now to osstest pretest.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [seabios test] 116187: regressions - FAIL

2017-11-15 Thread osstest service owner
flight 116187 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116187/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115539
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  63451fca13c75870e1703eb3e20584d91179aebc
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   11 days
Testing same since   115733  2017-11-10 17:19:59 Z4 days9 attempts


People who touched revisions under test:
  Kevin O'Connor 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 63451fca13c75870e1703eb3e20584d91179aebc
Author: Kevin O'Connor 
Date:   Fri Nov 10 11:49:19 2017 -0500

docs: Note v1.11.0 release

Signed-off-by: Kevin O'Connor 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-linus test] 116182: regressions - FAIL

2017-11-15 Thread osstest service owner
flight 116182 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116182/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-saverestore fail REGR. vs. 115643
 build-amd64-pvops 6 kernel-build fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-saverestore fail REGR. vs. 115643

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armh

Re: [Xen-devel] [PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path

2017-11-15 Thread Peter Zijlstra
On Mon, Nov 13, 2017 at 06:06:02PM +0800, Quan Xu wrote:
> From: Yang Zhang 
> 
> Implement a generic idle poll which resembles the functionality
> found in arch/. Provide weak arch_cpu_idle_poll function which
> can be overridden by the architecture code if needed.

No, we want less of those magic hooks, not more.

> Interrupts arrive which may not cause a reschedule in idle loops.
> In KVM guest, this costs several VM-exit/VM-entry cycles, VM-entry
> for interrupts and VM-exit immediately. Also this becomes more
> expensive than bare metal. Add a generic idle poll before enter
> real idle path. When a reschedule event is pending, we can bypass
> the real idle path.

Why not do a HV specific idle driver?

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [OSSTEST PATCH] ts-xen-build-prep: Install libelf-dev for benefit of linux.git

2017-11-15 Thread Juergen Gross
On 15/11/17 12:11, Ian Jackson wrote:
> Linux upstream has started needing libelf-dev.  Without it, recent tip
> fails (in our configuration) like this:
> 
>  Makefile:938: *** "Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, 
> please install libelf-dev, libelf-devel or elfutils-libelf-devel".  Stop.

The kernel now is using objtool to create unwind information. This needs
libelf to work. Advantage is that this approach no longer depends on
assembler sources being heavily annotated with unwind hints.

> It is not clear exactly when this requirement was introduced.  Our
> bisector said:
>   Bug introduced:  91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb
>   Bug not present: 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20
> but the "introduced" commit is a merge of a large branch, so it's not
> blaming a specific commit.  None of the commits in that range mention
> libelf so the most likely reason is a consequence of a change to some
> configuration interactions (ie, probably, an expansion of the scope of
> an existing dependency).
> 
> CC: Konrad Rzeszutek Wilk 
> CC: Stefano Stabellini 
> CC: Boris Ostrovsky 
> CC: Juergen Gross 
> CC: Paul Durrant 
> CC: Wei Liu 
> Signed-off-by: Ian Jackson 

Acked-by: Juergen Gross 


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Unable to create guest PV domain on OMAP5432

2017-11-15 Thread Jayadev Kumaran
Hello Andrii,

>> What defconfig are you based on? Do you have a device-tree support
enabled?

I use *omap2plus_defconfig* . Yes , device tree support is there and the
dts file used is

*omap5-uevm.dts>> *But it did not get a command line to setup console on
hvc0, or the kernel crashed in earliest stages.

Is there a way to debug which one of the above two possibilities has lead
to the issue ? As I'm using the dom0 kernel image for my guest, is it still
possible that it could be a kernel crash since it has already booted up for
dom0.

Thanks and Regards,

On Wed, Nov 15, 2017 at 4:31 PM, Andrii Anisov 
wrote:

> Hello Jayadev,
>
>
> On 15.11.17 12:46, Jayadev Kumaran wrote:
>
>> Hello Andrii,
>>
>> >> What kernel do you use for DomU? Please make sure you have in that
>> kernel configuration XEN support enabled (together with hypervisor console
>> support).
>>
>> I use a 3.15 kernel (git://git.kernel.org/pub/scm/
>> linux/kernel/git/torvalds/linux.git > /linux/kernel/git/torvalds/linux.git>) with the following configs
>> enabled for Xen support.
>> CONFIG_XEN_DOM0=y
>> CONFIG_XEN=y
>> CONFIG_XEN_BLKDEV_FRONTEND=y
>> CONFIG_XEN_BLKDEV_BACKEND=y
>> CONFIG_XEN_NETDEV_FRONTEND=y
>> CONFIG_XEN_NETDEV_BACKEND=y
>> CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
>> CONFIG_HVC_XEN=y
>> CONFIG_HVC_XEN_FRONTEND=y
>> CONFIG_XEN_DEV_EVTCHN=y
>> CONFIG_XEN_BACKEND=y
>> CONFIG_XENFS=y
>> CONFIG_XEN_COMPAT_XENFS=y
>> CONFIG_XEN_SYS_HYPERVISOR=y
>> CONFIG_XEN_XENBUS_FRONTEND=y
>> CONFIG_XEN_GNTDEV=y
>> CONFIG_XEN_GRANT_DEV_ALLOC=y
>> CONFIG_XEN_PRIVCMD=y
>>
>> In fact, it is the same kernel as that of dom0, just to check if Xen is
>> properly configured.
>>
> What defconfig are you based on? Do you have a device-tree support enabled?
>
> Also, I had previously tried with a kernel image with no Xen support and
>> the results were quite similar - guest domain gets created and is shown in
>> a running state as per /'xl list' /, however /'xl console/' shows nothing
>> but hangs until I press Ctrl+5 .
>>
> Your guest is created and XEN treats it as being running, you can see this
> in `xl list`.
> But it did not get a command line to setup console on hvc0, or the kernel
> crashed in earliest stages.
>
>
> --
>
> *Andrii Anisov*
>
>
>
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable test] 116178: regressions - FAIL

2017-11-15 Thread osstest service owner
flight 116178 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116178/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat fail REGR. vs. 116161

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116161
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116161
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116161
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116161
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116161
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 116161
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116161
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116161
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116161
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116161
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
baseline version:
 xen  36c80e29e36eee02f20f18e7f32267442b18c8bd

Last test of basis   116161  2017-11-14 16:48:27 Z0 days
Testing same since   116178  2017-11-15 00:51:31 Z0 days1 attempts


People who touched revisions under test:
  Eric Chanudet 
  Min He 
  Yi Zhang 
  Yu Zhang 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass 

[Xen-devel] [OSSTEST PATCH] ts-xen-build-prep: Install libelf-dev for benefit of linux.git

2017-11-15 Thread Ian Jackson
Linux upstream has started needing libelf-dev.  Without it, recent tip
fails (in our configuration) like this:

 Makefile:938: *** "Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, 
please install libelf-dev, libelf-devel or elfutils-libelf-devel".  Stop.

It is not clear exactly when this requirement was introduced.  Our
bisector said:
  Bug introduced:  91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb
  Bug not present: 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20
but the "introduced" commit is a merge of a large branch, so it's not
blaming a specific commit.  None of the commits in that range mention
libelf so the most likely reason is a consequence of a change to some
configuration interactions (ie, probably, an expansion of the scope of
an existing dependency).

CC: Konrad Rzeszutek Wilk 
CC: Stefano Stabellini 
CC: Boris Ostrovsky 
CC: Juergen Gross 
CC: Paul Durrant 
CC: Wei Liu 
Signed-off-by: Ian Jackson 
---
 ts-xen-build-prep | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index 3e98364..3309216 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -207,6 +207,7 @@ sub prep () {
   autoconf automake libtool xsltproc
   libxml2-utils libxml2-dev
   libdevmapper-dev w3c-dtd-xhtml libxml-xpath-perl
+  libelf-dev
   ccache nasm checkpolicy ebtables);
 
 if ($ho->{Suite} !~ m/squeeze|wheezy/) {
-- 
2.1.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Unable to create guest PV domain on OMAP5432

2017-11-15 Thread Andrii Anisov

Hello Jayadev,


On 15.11.17 12:46, Jayadev Kumaran wrote:

Hello Andrii,

>> What kernel do you use for DomU? Please make sure you have in that 
kernel configuration XEN support enabled (together with hypervisor 
console support).


I use a 3.15 kernel 
(git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
) 
with the following configs enabled for Xen support.

CONFIG_XEN_DOM0=y
CONFIG_XEN=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_XEN_DEV_EVTCHN=y
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_XEN_PRIVCMD=y

In fact, it is the same kernel as that of dom0, just to check if Xen 
is properly configured.

What defconfig are you based on? Do you have a device-tree support enabled?

Also, I had previously tried with a kernel image with no Xen support 
and the results were quite similar - guest domain gets created and is 
shown in a running state as per /'xl list' /, however /'xl console/' 
shows nothing but hangs until I press Ctrl+5 .
Your guest is created and XEN treats it as being running, you can see 
this in `xl list`.
But it did not get a command line to setup console on hvc0, or the 
kernel crashed in earliest stages.



--

*Andrii Anisov*



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/2 v2] xen: Add support for initializing 16550 UART using ACPI

2017-11-15 Thread Bhupinder Thakur

Hi,


On Thursday 09 November 2017 05:01 PM, Roger Pau Monné wrote:

On Thu, Nov 09, 2017 at 03:49:23PM +0530, Bhupinder Thakur wrote:

Currently, Xen supports only DT based initialization of 16550 UART.
This patch adds support for initializing 16550 UART using ACPI SPCR table.

This patch also makes the uart initialization code common between DT and
ACPI based initialization.

Signed-off-by: Bhupinder Thakur 
---
TBD:
There was one review comment from Julien about how the uart->io_size is being
calculated. Currently, I am calulating the io_size based on address of the last
UART register.

pci_uart_config also calcualates the uart->io_size like this:

uart->io_size = max(8U << param->reg_shift,
  param->uart_offset);

I am not sure whether we can use similar logic for calculating uart->io_size.

Changes since v1:
- Reused common code between DT and ACPI based initializations

CC: Andrew Cooper 
CC: George Dunlap 
CC: Ian Jackson 
CC: Jan Beulich 
CC: Konrad Rzeszutek Wilk 
CC: Stefano Stabellini 
CC: Tim Deegan 
CC: Wei Liu 
CC: Julien Grall 

  xen/drivers/char/ns16550.c  | 132 
  xen/include/xen/8250-uart.h |   1 +
  2 files changed, 121 insertions(+), 12 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index e0f8199..cf42fce 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1463,18 +1463,13 @@ void __init ns16550_init(int index, struct 
ns16550_defaults *defaults)
  }
  
  #ifdef CONFIG_HAS_DEVICE_TREE

-static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
-   const void *data)
+static int ns16550_init_dt(struct ns16550 *uart,
+   const struct dt_device_node *dev)

Why are you dropping the __init attribute?
This is a helper I defined for initializing the uart and called from the 
main __init function.



  {
-struct ns16550 *uart;
-int res;
+int res = 0;
  u32 reg_shift, reg_width;
  u64 io_size;
  
-uart = &ns16550_com[0];

-
-ns16550_init_common(uart);
-
  uart->baud  = BAUD_AUTO;
  uart->data_bits = 8;
  uart->parity= UART_PARITY_NONE;
@@ -1510,18 +1505,103 @@ static int __init ns16550_uart_dt_init(struct 
dt_device_node *dev,
  
  uart->dw_usr_bsy = dt_device_is_compatible(dev, "snps,dw-apb-uart");
  
+return res;

+}
+#else
+static int ns16550_init_dt(struct ns16550 *uart,
+   const struct dt_device_node *dev)
+{
+return -EINVAL;
+}
+#endif
+
+#ifdef CONFIG_ACPI
+#include 

Please place the include at the top of the file, together with the
other ones.

ok.



+static int ns16550_init_acpi(struct ns16550 *uart,
+ const void *data)
+{
+struct acpi_table_spcr *spcr = NULL;
+int status = 0;

I don't think you need to initialize any of those two variables. Or
do:

int status = acpi_get_table(ACPI_SIG_SPCR, 0,
 (struct acpi_table_header **)&spcr);

if ( ... )


ok.

+status = acpi_get_table(ACPI_SIG_SPCR, 0,
+(struct acpi_table_header **)&spcr);
+
+if ( ACPI_FAILURE(status) )
+{
+printk("ns16550: Failed to get SPCR table\n");
+return -EINVAL;
+}
+
+uart->baud  = BAUD_AUTO;
+uart->data_bits = 8;
+uart->parity= spcr->parity;
+uart->stop_bits = spcr->stop_bits;
+uart->io_base = spcr->serial_port.address;
+uart->irq = spcr->interrupt;
+uart->reg_width = spcr->serial_port.bit_width / 8;
+uart->reg_shift = 0;
+uart->io_size = UART_MAX_REG << uart->reg_shift;

You seem to align some of the '=' above but not all, please do either
one, but consistently.

I will align the assignments.

+
+irq_set_type(spcr->interrupt, spcr->interrupt_type);
+
+return 0;
+}
+#else
+static int ns16550_init_acpi(struct ns16550 *uart,
+ const void *data)
+{
+return -EINVAL;
+}
+#endif
+
+static int ns16550_uart_init(struct ns16550 **puart,
+ const void *data, bool acpi)
+{
+struct ns16550 *uart = &ns16550_com[0];
+
+*puart = uart;
+
+ns16550_init_common(uart);
+
+return ( acpi ) ? ns16550_init_acpi(uart, data)

   ^ unneeded parentheses.

+: ns16550_init_dt(uart, data);
+}
+
+static void ns16550_vuart_init(struct ns16550 *uart)
+{
+#ifdef CONFIG_ARM
  uart->vuart.base_addr = uart->io_base;
  uart->vuart.size = uart->io_size;
-uart->vuart.data_off = UART_THR vuart.status_off = UART_LSRvuart.status = UART_LSR_THRE|UART_LSR_TEMT;
+uart->vuart.data_off = UART_THR << uart->reg_shift;
+uart->vuart.status_off = UART_LSR << uart->reg_shift;
+uart->vuart.status = UART_LSR_THRE | UART_LSR_TEMT;

You should try to avoid mixing functional changes with style ones.
Please split this into a pre-patch.

I will ad

[Xen-devel] [xen-unstable-coverity test] 116195: all pass - PUSHED

2017-11-15 Thread osstest service owner
flight 116195 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116195/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
baseline version:
 xen  3b2966e72c414592cd2c86c21a0d4664cf627b9c

Last test of basis   116112  2017-11-12 09:23:02 Z3 days
Testing same since   116195  2017-11-15 09:24:03 Z0 days1 attempts


People who touched revisions under test:
  Anthony PERARD 
  Bhupinder Thakur 
  Eric Chanudet 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Min He 
  Pawel Wieczorkiewicz 
  Wei Liu 
  Yi Zhang 
  Yu Zhang 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   3b2966e..b9ee1fd  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f -> 
coverity-tested/smoke

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Unable to create guest PV domain on OMAP5432

2017-11-15 Thread Jayadev Kumaran
Hello Andrii,

>> What kernel do you use for DomU? Please make sure you have in that
kernel configuration XEN support enabled (together with hypervisor console
support).

I use a 3.15 kernel (git://
git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git) with the
following configs enabled for Xen support.

CONFIG_XEN_DOM0=y
CONFIG_XEN=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_XEN_DEV_EVTCHN=y
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_XEN_PRIVCMD=y


In fact, it is the same kernel as that of dom0, just to check if Xen is
properly configured.

Also, I had previously tried with a kernel image with no Xen support and
the results were quite similar - guest domain gets created and is shown in
a running state as per *'xl list' *, however *'xl console*' shows nothing
but hangs until I press Ctrl+5 .

Thanks and Regards,

On Wed, Nov 15, 2017 at 3:45 PM, Andrii Anisov 
wrote:

> Dear Jayadev,
>
>
> Find my comments inlined:
>
>
> On 15.11.17 08:08, Jayadev Kumaran wrote:
>
>> Hello Andrii,
>>
>> >> BTW, what is your dom0 system? Does it have bash?
>> > _dom0 uses a modified kernel(3.15) with Xen support and  default omap
>> fs_
>>
>> I made certain changes to my configuration file. Instead of trying to use
>> a disk, I want to the guest domain up from ramdisk image.
>>
> Its quite wise idea to sort out problems.
>
> My new configuration file looks like
>>
>> "
>> name = "android"
>>
>> kernel = "/home/root/android/kernel"
>> ramdisk = "/home/root/android/ramdisk.img"
>> #bootloader = "/usr/lib/xen-4.4/bin/pygrub"
>>
>> memory = 512
>> vcpus = 1
>>
>> device_model_version = 'qemu-xen-traditional'
>>
>> extra = "console=hvc0 rw init=/bin/sh earlyprintk=xenboot"
>>
>> "
>>
>> I'm able to create a guest domain as well.
>>
>> /root@omap5-evm:~# xl -vvv create android.cfg
>>
>> Parsing config from android.cfg
>> libxl: debug: libxl_create.c:1646:do_domain_create: Domain 0:ao 0x46e30:
>> create: how=(nil) callback=(nil) poller=0x46e90
>> libxl: debug: libxl_arm.c:87:libxl__arch_domain_prepare_config:
>> Configure the domain
>> libxl: debug: libxl_arm.c:90:libxl__arch_domain_prepare_config:  -
>> Allocate 0 SPIs
>> libxl: debug: libxl_create.c:987:initiate_domain_create: Domain
>> 1:running bootloader
>> libxl: debug: libxl_bootloader.c:335:libxl__bootloader_run: Domain 1:no
>> bootloader configured, using user supplied kernel
>> libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch
>> w=0x47780: deregister unregistered
>> (XEN) grant_table.c:1688:d0v0 Expanding d1 grant table from 0 to 1 frames
>> domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0 rw
>> init=/bin/sh earlyprintk=xenboot", features=""
>> libxl: debug: libxl_dom.c:779:libxl__build_pv: pv kernel mapped 0 path
>> /home/root/android/kernel
>> domainbuilder: detail: xc_dom_kernel_file: filename="/home/root/android/k
>> ernel"
>> domainbuilder: detail: xc_dom_malloc_filemap: 4782 kB
>> domainbuilder: detail: xc_dom_ramdisk_file: filename="/home/root/android/r
>> amdisk.img"
>> domainbuilder: detail: xc_dom_malloc_filemap: 179 kB
>> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.10, caps xen-3.0-armv7l
>> domainbuilder: detail: xc_dom_rambase_init: RAM starts at 4
>> domainbuilder: detail: xc_dom_parse_image: called
>> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader
>> ...
>> domainbuilder: detail: loader probe failed
>> domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64)
>> loader ...
>> domainbuilder: detail: xc_dom_probe_zimage64_kernel: kernel is not an
>> arm64 Image
>> domainbuilder: detail: loader probe failed
>> domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM32)
>> loader ...
>> domainbuilder: detail: loader probe OK
>> domainbuilder: detail: xc_dom_parse_zimage32_kernel: called
>> domainbuilder: detail: xc_dom_parse_zimage32_kernel: xen-3.0-armv7l:
>> 0x40008000 -> 0x404b3b28
>> libxl: debug: libxl_arm.c:866:libxl__prepare_dtb: constructing DTB for
>> Xen version 4.10 guest
>> libxl: debug: libxl_arm.c:867:libxl__prepare_dtb:  - vGIC version: V2
>> libxl: debug: libxl_arm.c:321:make_chosen_node: /chosen/bootargs =
>> console=hvc0 rw init=/bin/sh earlyprintk=xenboot
>> libxl: debug: libxl_arm.c:328:make_chosen_node: /chosen adding
>> placeholder linux,initrd properties
>> libxl: debug: libxl_arm.c:441:make_memory_nodes: Creating placeholder
>> node /memory@4000
>> libxl: debug: libxl_arm.c:441:make_memory_nodes: Creating placeholder
>> node /memory@2
>> libxl: debug: libxl_arm.c:964:libxl__prepare_dtb: fdt total size 1394
>> domainbuilder: detail: xc_dom_devicetree_mem: called
>> libxl: debug: libxl_arm.c:1005:libxl__arch_domain_init_hw_description:

[Xen-devel] [distros-debian-squeeze test] 72449: trouble: broken/fail/pass

2017-11-15 Thread Platform Team regression test user
flight 72449 distros-debian-squeeze real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72449/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-i386-squeeze-netboot-pygrubbroken
 test-amd64-amd64-i386-squeeze-netboot-pygrub 4 host-install(4) broken REGR. 
vs. 72431

Tests which did not succeed, but are not blocking:
 test-amd64-i386-amd64-squeeze-netboot-pygrub 10 debian-di-install fail like 
72431
 test-amd64-amd64-amd64-squeeze-netboot-pygrub 10 debian-di-install fail like 
72431
 test-amd64-i386-i386-squeeze-netboot-pygrub 10 debian-di-install fail like 
72431

baseline version:
 flight   72431

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-squeeze-netboot-pygrubfail
 test-amd64-i386-amd64-squeeze-netboot-pygrub fail
 test-amd64-amd64-i386-squeeze-netboot-pygrub broken  
 test-amd64-i386-i386-squeeze-netboot-pygrub  fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-linus bisection] complete build-amd64-pvops

2017-11-15 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job build-amd64-pvops
testid kernel-build

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb
  Bug not present: 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/116192/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/build-amd64-pvops.kernel-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-linus/build-amd64-pvops.kernel-build
 --summary-out=tmp/116192.bisection-summary --basis-template=115643 
--blessings=real,real-bisect linux-linus build-amd64-pvops kernel-build
Searching for failure / basis pass:
 116164 fail [host=godello0] / 116136 [host=pinot1] 116119 [host=godello1] 
116103 [host=nobling1] 115718 [host=baroque0] 115690 ok.
Failure / basis pass flights: 116164 / 115690
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Latest 894025f24bd028942da3e602b87d9f7223109b14 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
Basis pass 87df26175e67c26ccdd3a002fbbb8cde78e28a19 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
Generating revisions with ./adhoc-revtuple-generator  
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#87df26175e67c26ccdd3a002fbbb8cde78e28a19-894025f24bd028942da3e602b87d9f7223109b14
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
Loaded 1100 nodes in revision graph
Searching for test results:
 115690 pass 87df26175e67c26ccdd3a002fbbb8cde78e28a19 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 115718 [host=baroque0]
 116103 [host=nobling1]
 116174 pass 5cff3684192773a2c2b1102acd10b99aaa6a3f67 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116152 fail 43ff2f4db9d0f76452b77cfa645f02b471143b24 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116184 fail 91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116163 pass 87df26175e67c26ccdd3a002fbbb8cde78e28a19 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116119 [host=godello1]
 116175 pass 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116136 [host=pinot1]
 116166 fail 43ff2f4db9d0f76452b77cfa645f02b471143b24 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116186 pass 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116167 pass b39545684a90ef3374abc0969d64c7bc540d128d 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116177 pass 03b2a320b19f1424e9ac9c21696be9c60b6d0d93 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116169 pass e75427c6945460c36bfcab4cd33db0adc0e17200 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116164 fail 894025f24bd028942da3e602b87d9f7223109b14 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116179 fail 6a9f70b0a5b3ca5db1dd5c7743ca555bfca2ae08 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116180 pass 87df26175e67c26ccdd3a002fbbb8cde78e28a19 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116188 fail 91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116192 fail 91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116171 pass 3e2014637c50e5d6a77cd63d5db6c209fe29d1b1 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116181 fail 894025f24bd028942da3e602b87d9f7223109b14 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
 116189 pass 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
Searching for interesting versions
 Result found: flight 115690 (pass), for basis pass
 Result found: flight 116164 (fail), for basis failure
 Repro found: flight 116180 (pass), for basis pass
 Repro found: flight 116181 (fail), for basis failure
 0 revisions at 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20 
c530a75c1e6a472b0eb9558310b518f0dfcd8860
No revisions left to test, checking graph state.
 Result found: flight 116175 (pass), for last pass
 Result found: flight 116184 (fail), for first failure
 Repro found: flight 116186 (pass), for last pass
 Repro found: flight 116188 (fail), for first failure
 Repro found: flight 116189 (pass), for last pass
 Repro found: flight 116192 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  91a6a6cfee8ad34ea4cc10a54c0765edfe437cdb
  Bug not present: 1c9dbd4615fd751e5e0b99807a3c7c8612e28e20
  Last fail repro: http:/

Re: [Xen-devel] Unable to create guest PV domain on OMAP5432

2017-11-15 Thread Andrii Anisov

Dear Jayadev,


Find my comments inlined:


On 15.11.17 08:08, Jayadev Kumaran wrote:

Hello Andrii,

>> BTW, what is your dom0 system? Does it have bash?
> _dom0 uses a modified kernel(3.15) with Xen support and  default 
omap fs_


I made certain changes to my configuration file. Instead of trying to 
use a disk, I want to the guest domain up from ramdisk image.

Its quite wise idea to sort out problems.


My new configuration file looks like

"
name = "android"

kernel = "/home/root/android/kernel"
ramdisk = "/home/root/android/ramdisk.img"
#bootloader = "/usr/lib/xen-4.4/bin/pygrub"

memory = 512
vcpus = 1

device_model_version = 'qemu-xen-traditional'

extra = "console=hvc0 rw init=/bin/sh earlyprintk=xenboot"

"

I'm able to create a guest domain as well.

/root@omap5-evm:~# xl -vvv create android.cfg
Parsing config from android.cfg
libxl: debug: libxl_create.c:1646:do_domain_create: Domain 0:ao 
0x46e30: create: how=(nil) callback=(nil) poller=0x46e90
libxl: debug: libxl_arm.c:87:libxl__arch_domain_prepare_config: 
Configure the domain
libxl: debug: libxl_arm.c:90:libxl__arch_domain_prepare_config:  - 
Allocate 0 SPIs
libxl: debug: libxl_create.c:987:initiate_domain_create: Domain 
1:running bootloader
libxl: debug: libxl_bootloader.c:335:libxl__bootloader_run: Domain 
1:no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch 
w=0x47780: deregister unregistered

(XEN) grant_table.c:1688:d0v0 Expanding d1 grant table from 0 to 1 frames
domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0 rw 
init=/bin/sh earlyprintk=xenboot", features=""
libxl: debug: libxl_dom.c:779:libxl__build_pv: pv kernel mapped 0 path 
/home/root/android/kernel
domainbuilder: detail: xc_dom_kernel_file: 
filename="/home/root/android/kernel"

domainbuilder: detail: xc_dom_malloc_filemap    : 4782 kB
domainbuilder: detail: xc_dom_ramdisk_file: 
filename="/home/root/android/ramdisk.img"

domainbuilder: detail: xc_dom_malloc_filemap    : 179 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.10, caps 
xen-3.0-armv7l

domainbuilder: detail: xc_dom_rambase_init: RAM starts at 4
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary 
loader ...

domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64) 
loader ...
domainbuilder: detail: xc_dom_probe_zimage64_kernel: kernel is not an 
arm64 Image

domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM32) 
loader ...

domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage32_kernel: called
domainbuilder: detail: xc_dom_parse_zimage32_kernel: xen-3.0-armv7l: 
0x40008000 -> 0x404b3b28
libxl: debug: libxl_arm.c:866:libxl__prepare_dtb: constructing DTB for 
Xen version 4.10 guest

libxl: debug: libxl_arm.c:867:libxl__prepare_dtb:  - vGIC version: V2
libxl: debug: libxl_arm.c:321:make_chosen_node: /chosen/bootargs = 
console=hvc0 rw init=/bin/sh earlyprintk=xenboot
libxl: debug: libxl_arm.c:328:make_chosen_node: /chosen adding 
placeholder linux,initrd properties
libxl: debug: libxl_arm.c:441:make_memory_nodes: Creating placeholder 
node /memory@4000
libxl: debug: libxl_arm.c:441:make_memory_nodes: Creating placeholder 
node /memory@2

libxl: debug: libxl_arm.c:964:libxl__prepare_dtb: fdt total size 1394
domainbuilder: detail: xc_dom_devicetree_mem: called
libxl: debug: libxl_arm.c:1005:libxl__arch_domain_init_hw_description: 
Generating ACPI tables is disabled by user.
domainbuilder: detail: xc_dom_mem_init: mem 512 MB, pages 0x2 
pages, 4k each

domainbuilder: detail: xc_dom_mem_init: 0x2 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-armv7l, address size 32
domainbuilder: detail: xc_dom_malloc    : 1024 kB
domainbuilder: detail: populate_guest_memory: populating RAM @ 
4000-6000 (512MB)
domainbuilder: detail: populate_one_size: populated 0x100/0x100 
entries with shift 9

domainbuilder: detail: meminit: placing boot modules at 0x4800
domainbuilder: detail: meminit: ramdisk: 0x4800 -> 0x4802d000
domainbuilder: detail: meminit: devicetree: 0x4802d000 -> 0x4802e000
libxl: debug: 
libxl_arm.c:1073:libxl__arch_domain_finalise_hw_description: /chosen 
updating initrd properties to cover 4800-4802d000
libxl: debug: libxl_arm.c:1039:finalise_one_node: Populating 
placeholder node /memory@4000
libxl: debug: libxl_arm.c:1033:finalise_one_node: Nopping out 
placeholder node /memory@2

domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 
0x40008+0x4ac at 0xb6098000
domainbuilder: detail: xc_dom_alloc_segment:   kernel : 0x40008000 -> 
0x404b4000  (pfn 0x40008 + 0x4ac pages)

domainbuilder: detail: xc_dom_load_zimage_kernel: ca

[Xen-devel] [xen-unstable baseline-only test] 72448: regressions - FAIL

2017-11-15 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72448 xen-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72448/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore.2 fail REGR. vs. 72444
 test-amd64-amd64-xl-qemut-win10-i386 16 guest-localmigrate/x10 fail REGR. vs. 
72444

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail like 72444
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail like 72444
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72444
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   like 72444
 test-amd64-i386-freebsd10-amd64 11 guest-start fail like 72444
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   like 72444
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install  fail like 72444
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 72444
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 72444
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 72444
 test-amd64-amd64-examine  4 memdisk-try-append   fail   never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 10 windows-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass

version targeted for testing:
 xen  36c80e29e36eee02f20f18e7f32267442b18c8bd
baseline version:
 xen  3b2966e72c414592cd2c86c21a0d4664cf627b9c

Last test of basis72444  2017-11-14 02:17:32 Z1 days
Testing same since72448  2017-11-15 00:52:35 Z0 days1 attempts


People who touched revisions under test:
  Anthony PERARD 
  Bhupinder Thakur 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Pawel Wieczorkiewicz 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build

Re: [Xen-devel] [PATCH v3] xen-disk: use an IOThread per instance

2017-11-15 Thread Paul Durrant
Anthony, Stefano,

  Ping?

> -Original Message-
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: 07 November 2017 10:47
> To: qemu-de...@nongnu.org; xen-de...@lists.xenproject.org
> Cc: Paul Durrant ; Stefano Stabellini
> ; Anthony Perard ;
> Kevin Wolf ; Max Reitz 
> Subject: [PATCH v3] xen-disk: use an IOThread per instance
> 
> This patch allocates an IOThread object for each xen_disk instance and
> sets the AIO context appropriately on connect. This allows processing
> of I/O to proceed in parallel.
> 
> The patch also adds tracepoints into xen_disk to make it possible to
> follow the state transtions of an instance in the log.
> 
> Signed-off-by: Paul Durrant 
> ---
> Cc: Stefano Stabellini 
> Cc: Anthony Perard 
> Cc: Kevin Wolf 
> Cc: Max Reitz 
> 
> v3:
>  - Use new iothread_create/destroy() functions
> 
> v2:
>  - explicitly acquire and release AIO context in qemu_aio_complete() and
>blk_bh()
> ---
>  hw/block/trace-events |  7 +++
>  hw/block/xen_disk.c   | 53
> ---
>  2 files changed, 53 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/block/trace-events b/hw/block/trace-events
> index cb6767b3ee..962a3bfa24 100644
> --- a/hw/block/trace-events
> +++ b/hw/block/trace-events
> @@ -10,3 +10,10 @@ virtio_blk_submit_multireq(void *vdev, void *mrb, int
> start, int num_reqs, uint6
>  # hw/block/hd-geometry.c
>  hd_geometry_lchs_guess(void *blk, int cyls, int heads, int secs) "blk %p
> LCHS %d %d %d"
>  hd_geometry_guess(void *blk, uint32_t cyls, uint32_t heads, uint32_t secs,
> int trans) "blk %p CHS %u %u %u trans %d"
> +
> +# hw/block/xen_disk.c
> +xen_disk_alloc(char *name) "%s"
> +xen_disk_init(char *name) "%s"
> +xen_disk_connect(char *name) "%s"
> +xen_disk_disconnect(char *name) "%s"
> +xen_disk_free(char *name) "%s"
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index e431bd89e8..f74fcd42d1 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -27,10 +27,12 @@
>  #include "hw/xen/xen_backend.h"
>  #include "xen_blkif.h"
>  #include "sysemu/blockdev.h"
> +#include "sysemu/iothread.h"
>  #include "sysemu/block-backend.h"
>  #include "qapi/error.h"
>  #include "qapi/qmp/qdict.h"
>  #include "qapi/qmp/qstring.h"
> +#include "trace.h"
> 
>  /* - */
> 
> @@ -125,6 +127,9 @@ struct XenBlkDev {
>  DriveInfo   *dinfo;
>  BlockBackend*blk;
>  QEMUBH  *bh;
> +
> +IOThread*iothread;
> +AioContext  *ctx;
>  };
> 
>  /* - */
> @@ -596,9 +601,12 @@ static int ioreq_runio_qemu_aio(struct ioreq
> *ioreq);
>  static void qemu_aio_complete(void *opaque, int ret)
>  {
>  struct ioreq *ioreq = opaque;
> +struct XenBlkDev *blkdev = ioreq->blkdev;
> +
> +aio_context_acquire(blkdev->ctx);
> 
>  if (ret != 0) {
> -xen_pv_printf(&ioreq->blkdev->xendev, 0, "%s I/O error\n",
> +xen_pv_printf(&blkdev->xendev, 0, "%s I/O error\n",
>ioreq->req.operation == BLKIF_OP_READ ? "read" : 
> "write");
>  ioreq->aio_errors++;
>  }
> @@ -607,10 +615,10 @@ static void qemu_aio_complete(void *opaque, int
> ret)
>  if (ioreq->presync) {
>  ioreq->presync = 0;
>  ioreq_runio_qemu_aio(ioreq);
> -return;
> +goto done;
>  }
>  if (ioreq->aio_inflight > 0) {
> -return;
> +goto done;
>  }
> 
>  if (xen_feature_grant_copy) {
> @@ -647,16 +655,19 @@ static void qemu_aio_complete(void *opaque, int
> ret)
>  }
>  case BLKIF_OP_READ:
>  if (ioreq->status == BLKIF_RSP_OKAY) {
> -block_acct_done(blk_get_stats(ioreq->blkdev->blk), &ioreq->acct);
> +block_acct_done(blk_get_stats(blkdev->blk), &ioreq->acct);
>  } else {
> -block_acct_failed(blk_get_stats(ioreq->blkdev->blk), 
> &ioreq->acct);
> +block_acct_failed(blk_get_stats(blkdev->blk), &ioreq->acct);
>  }
>  break;
>  case BLKIF_OP_DISCARD:
>  default:
>  break;
>  }
> -qemu_bh_schedule(ioreq->blkdev->bh);
> +qemu_bh_schedule(blkdev->bh);
> +
> +done:
> +aio_context_release(blkdev->ctx);
>  }
> 
>  static bool blk_split_discard(struct ioreq *ioreq, blkif_sector_t
> sector_number,
> @@ -913,17 +924,29 @@ static void blk_handle_requests(struct XenBlkDev
> *blkdev)
>  static void blk_bh(void *opaque)
>  {
>  struct XenBlkDev *blkdev = opaque;
> +
> +aio_context_acquire(blkdev->ctx);
>  blk_handle_requests(blkdev);
> +aio_context_release(blkdev->ctx);
>  }
> 
>  static void blk_alloc(struct XenDevice *xendev)
>  {
>  struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev,
> xendev);
> +Error *err = NULL;
> +
> +trace_xen_disk_alloc(xendev->name);
> 
>  QLIST_INIT(&blkdev->inflight);
>  QLIST_INIT(&blkdev->finish

Re: [Xen-devel] [RFC v2 5/7] acpi:arm64: Add support for parsing IORT table

2017-11-15 Thread Julien Grall

Hi Sameer,

On 11/15/2017 01:27 AM, Goel, Sameer wrote:



On 11/8/2017 7:41 AM, Manish Jaggi wrote:

Hi Sameer

On 9/21/2017 6:07 AM, Sameer Goel wrote:

Add support for parsing IORT table to initialize SMMU devices.
* The code for creating an SMMU device has been modified, so that the SMMU
device can be initialized.
* The NAMED NODE code has been commented out as this will need DOM0 kernel
support.
* ITS code has been included but it has not been tested.

Signed-off-by: Sameer Goel 

Followup of the discussions we had on iort parsing and querying streamID and 
deviceId based on RID.
I have extended your patchset with a patch that provides an alternative
way of parsing iort into maps : {rid-streamid}, {rid-deviceID)
which can directly be looked up for searching streamId for a rid. This
will remove the need to traverse iort table again.

The test patch just describes the proposed flow and how the parsing and
query code might fit in. I have not tested it.
The code only compiles.

https://github.com/mjaggi-cavium/xen-wip/commit/df006d64bdbb5c8344de5a710da8bf64c9e8edd5
(This repo has all 7 of your patches + test code patch merged.

Note: The commit text of the patch describes the basic flow /assumptions / 
usage of functions.
Please see the code along with the v2 design draft.
[RFC] [Draft Design v2] ACPI/IORT Support in Xen.
https://lists.xen.org/archives/html/xen-devel/2017-11/msg00512.html

I seek your advice on this. Please provide your feedback.

I responded back on the other thread. I think we are fixing something that is 
not broken. I will try to post a couple of new RFCs and let's discuss this with 
incremental changes on the mailing list.


That other thread was a separate mailing list. For the benefits of the 
rest of the community it would be nice if you can post the summary here.


However, nobody said the code was broken. IORT will be used in various 
place in Xen and Manish is looking whether we can parse only once and 
for all the IORT. I think it is latest design is promising for that.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/2 v2] xen: Add support for initializing 16550 UART using ACPI

2017-11-15 Thread Bhupinder Thakur

Hi Julien,


On Tuesday 14 November 2017 12:21 AM, Julien Grall wrote:

Hi Bhupinder,

On 11/09/2017 10:19 AM, Bhupinder Thakur wrote:

Currently, Xen supports only DT based initialization of 16550 UART.
This patch adds support for initializing 16550 UART using ACPI SPCR 
table.


This patch also makes the uart initialization code common between DT and
ACPI based initialization.


Can you please have one patch to refactor the code and one to add ACPI 
support? This will be easier to review.



ok.


Signed-off-by: Bhupinder Thakur 
---
TBD:
There was one review comment from Julien about how the uart->io_size 
is being
calculated. Currently, I am calulating the io_size based on address 
of the last

UART register.

pci_uart_config also calcualates the uart->io_size like this:

uart->io_size = max(8U << param->reg_shift,
  param->uart_offset);

I am not sure whether we can use similar logic for calculating 
uart->io_size.


Changes since v1:
- Reused common code between DT and ACPI based initializations

CC: Andrew Cooper 
CC: George Dunlap 
CC: Ian Jackson 
CC: Jan Beulich 
CC: Konrad Rzeszutek Wilk 
CC: Stefano Stabellini 
CC: Tim Deegan 
CC: Wei Liu 
CC: Julien Grall 

  xen/drivers/char/ns16550.c  | 132 


  xen/include/xen/8250-uart.h |   1 +
  2 files changed, 121 insertions(+), 12 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index e0f8199..cf42fce 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1463,18 +1463,13 @@ void __init ns16550_init(int index, struct 
ns16550_defaults *defaults)

  }
    #ifdef CONFIG_HAS_DEVICE_TREE
-static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
-   const void *data)
+static int ns16550_init_dt(struct ns16550 *uart,
+   const struct dt_device_node *dev)
  {
-    struct ns16550 *uart;
-    int res;
+    int res = 0;
  u32 reg_shift, reg_width;
  u64 io_size;
  -    uart = &ns16550_com[0];
-
-    ns16550_init_common(uart);
-
  uart->baud  = BAUD_AUTO;
  uart->data_bits = 8;
  uart->parity    = UART_PARITY_NONE;
@@ -1510,18 +1505,103 @@ static int __init 
ns16550_uart_dt_init(struct dt_device_node *dev,
    uart->dw_usr_bsy = dt_device_is_compatible(dev, 
"snps,dw-apb-uart");

  +    return res;
+}
+#else
+static int ns16550_init_dt(struct ns16550 *uart,
+   const struct dt_device_node *dev)
+{
+    return -EINVAL;
+}
+#endif
+
+#ifdef CONFIG_ACPI
+#include 
+static int ns16550_init_acpi(struct ns16550 *uart,
+ const void *data)
+{
+    struct acpi_table_spcr *spcr = NULL;
+    int status = 0;
+
+    status = acpi_get_table(ACPI_SIG_SPCR, 0,
+    (struct acpi_table_header **)&spcr);
+
+    if ( ACPI_FAILURE(status) )
+    {
+    printk("ns16550: Failed to get SPCR table\n");
+    return -EINVAL;
+    }
+
+    uart->baud  = BAUD_AUTO;
+    uart->data_bits = 8;
+    uart->parity    = spcr->parity;
+    uart->stop_bits = spcr->stop_bits;
+    uart->io_base = spcr->serial_port.address;
+    uart->irq = spcr->interrupt;
+    uart->reg_width = spcr->serial_port.bit_width / 8;
+    uart->reg_shift = 0;
+    uart->io_size = UART_MAX_REG << uart->reg_shift;
+
+    irq_set_type(spcr->interrupt, spcr->interrupt_type);
+
+    return 0;
+}
+#else
+static int ns16550_init_acpi(struct ns16550 *uart,
+ const void *data)
+{
+    return -EINVAL;
+}
+#endif
+
+static int ns16550_uart_init(struct ns16550 **puart,
+ const void *data, bool acpi)
+{
+    struct ns16550 *uart = &ns16550_com[0];
+
+    *puart = uart;
+
+    ns16550_init_common(uart);
+
+    return ( acpi ) ? ns16550_init_acpi(uart, data)
+    : ns16550_init_dt(uart, data);
+}


This function does not look very useful but getting &ns16550_com[0].
I do agree that we need it is nice to have common code, but I think 
you went too far here.


There are no need for 3 separate functions and 2 functions for each 
firmware.


I think duplicating the code of ns16550_uart_init for ACPI and DT is 
fine. You could then create a function that is a merge vuart_init and 
register_init.


We can retain the ns16550_init_acpi() and ns16550_init_dt() and call 
them directly from the main __init functions.



This would also limit the number of #ifdef within this code.


+
+static void ns16550_vuart_init(struct ns16550 *uart)
+{
+#ifdef CONFIG_ARM
  uart->vuart.base_addr = uart->io_base;
  uart->vuart.size = uart->io_size;
-    uart->vuart.data_off = UART_THR vuart.status_off = UART_LSRvuart.status = UART_LSR_THRE|UART_LSR_TEMT;
+    uart->vuart.data_off = UART_THR << uart->reg_shift;
+    uart->vuart.status_off = UART_LSR << uart->reg_shift;
+    uart->vuart.status = UART_LSR_THRE | UART_LSR_TEMT;
+#

[Xen-devel] [libvirt test] 116185: regressions - FAIL

2017-11-15 Thread osstest service owner
flight 116185 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116185/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 115476
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 115476

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  52125f90c93152968cc44f72a132e7ad5df241d9
baseline version:
 libvirt  1bf893406637e852daeaafec6617d3ee3716de25

Last test of basis   115476  2017-11-02 04:22:37 Z   13 days
Failing since115509  2017-11-03 04:20:26 Z   12 days   12 attempts
Testing same since   116185  2017-11-15 04:20:18 Z0 days1 attempts


People who touched revisions under test:
  Andrea Bolognani 
  Christian Ehrhardt 
  Daniel Veillard 
  Dawid Zamirski 
  Jim Fehlig 
  Jiri Denemark 
  John Ferlan 
  Michal Privoznik 
  Nikolay Shirokovskiy 
  Peter Krempa 
  Pino Toscano 
  Viktor Mihajlovski 
  Wim ten Have 
  xinhua.Cao 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm blocked 
 test-amd64-i386-libvirt-xsm  blocked 
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair blocked 
 test-amd64-i386-libvirt-qcow2blocked 
 test-armhf-armhf-libvirt-raw blocked 
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1647 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-15 Thread Juergen Gross
On 14/11/17 22:46, Boris Ostrovsky wrote:
> On 11/14/2017 04:11 AM, Juergen Gross wrote:
>> On 13/11/17 19:33, Stefano Stabellini wrote:
>>> On Mon, 13 Nov 2017, Juergen Gross wrote:
 On 11/11/17 00:57, Stefano Stabellini wrote:
> On Tue, 7 Nov 2017, Juergen Gross wrote:
>> On 06/11/17 23:17, Stefano Stabellini wrote:
>>> mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
>>> take in_mutex on the first try, but you can't take out_mutex. Next times
>>> you call mutex_trylock() in_mutex is going to fail. It's an endless
>>> loop.
>>>
>>> Solve the problem by moving the two mutex_trylock calls to two separate
>>> loops.
>>>
>>> Reported-by: Dan Carpenter 
>>> Signed-off-by: Stefano Stabellini 
>>> CC: boris.ostrov...@oracle.com
>>> CC: jgr...@suse.com
>>> ---
>>>  drivers/xen/pvcalls-front.c | 5 +++--
>>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
>>> index 0c1ec68..047dce7 100644
>>> --- a/drivers/xen/pvcalls-front.c
>>> +++ b/drivers/xen/pvcalls-front.c
>>> @@ -1048,8 +1048,9 @@ int pvcalls_front_release(struct socket *sock)
>>>  * is set to NULL -- we only need to wait for the 
>>> existing
>>>  * waiters to return.
>>>  */
>>> -   while (!mutex_trylock(&map->active.in_mutex) ||
>>> -  !mutex_trylock(&map->active.out_mutex))
>>> +   while (!mutex_trylock(&map->active.in_mutex))
>>> +   cpu_relax();
>>> +   while (!mutex_trylock(&map->active.out_mutex))
>>> cpu_relax();
>> Any reason you don't just use mutex_lock()?
> Hi Juergen, sorry for the late reply.
>
> Yes, you are right. Given the patch, it would be just the same to use
> mutex_lock.
>
> This is where I realized that actually we have a problem: no matter if
> we use mutex_lock or mutex_trylock, there are no guarantees that we'll
> be the last to take the in/out_mutex. Other waiters could be still
> outstanding.
>
> We solved the same problem using a refcount in pvcalls_front_remove. In
> this case, I was thinking of reusing the mutex internal counter for
> efficiency, instead of adding one more refcount.
>
> For using the mutex as a refcount, there is really no need to call
> mutex_trylock or mutex_lock. I suggest checking on the mutex counter
> directly:
>
>
>   while (atomic_long_read(&map->active.in_mutex.owner) != 0UL ||
>  atomic_long_read(&map->active.out_mutex.owner) != 0UL)
>   cpu_relax();
>
> Cheers,
>
> Stefano
>
>
> ---
>
> xen/pvcalls: fix potential endless loop in pvcalls-front.c
>
> mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
> take in_mutex on the first try, but you can't take out_mutex. Next time
> you call mutex_trylock() in_mutex is going to fail. It's an endless
> loop.
>
> Actually, we don't want to use mutex_trylock at all: we don't need to
> take the mutex, we only need to wait until the last mutex waiter/holder
> releases it.
>
> Instead of calling mutex_trylock or mutex_lock, just check on the mutex
> refcount instead.
>
> Reported-by: Dan Carpenter 
> Signed-off-by: Stefano Stabellini 
> CC: boris.ostrov...@oracle.com
> CC: jgr...@suse.com
>
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index 0c1ec68..9f33cb8 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -1048,8 +1048,8 @@ int pvcalls_front_release(struct socket *sock)
>* is set to NULL -- we only need to wait for the existing
>* waiters to return.
>*/
> - while (!mutex_trylock(&map->active.in_mutex) ||
> -!mutex_trylock(&map->active.out_mutex))
> + while (atomic_long_read(&map->active.in_mutex.owner) != 0UL ||
> +atomic_long_read(&map->active.out_mutex.owner) != 0UL)
 I don't like this.

 Can't you use a kref here? Even if it looks like more overhead it is
 much cleaner. There will be no questions regarding possible races,
 while with an approach like yours will always smell racy (can't there
 be someone taking the mutex just after above test?).

 In no case you should make use of the mutex internals.
>>> Boris' suggestion solves that problem well. Would you be OK with the
>>> proposed
>>>
>>> while(mutex_is_locked(&map->active.in_mutex.owner) ||
>>>   mutex_is_locked(&map->active.out_mutex.owner))
>>> cpu_relax();
>>>
>>> ?
>> I'm not convinced there isn't