Re: [Xen-devel] [PATCH v7 00/15] Load BIOS via toolstack instead of being embedded in hvmloader.

2016-07-28 Thread Boris Ostrovsky



On 07/28/2016 06:49 AM, Anthony PERARD wrote:

Hi all,

Changes in V7:
   - There is one new patch at the end to fix the doc.
   - Patch 6 has been changed.
   That's it.

   There are just a few missing acks:
 6 xen: Move the hvm_start_info C representation from libxc to public/xen.h
 8 hvmloader: Locate the BIOS blob
 9 hvmloader: Check modules whereabouts in perform_tests
15 docs/misc/hvmlite: Point to the canonical definition of hvm_start_info

Thanks.

A git tree can be found here:
git://xenbits.xen.org/people/aperard/xen-unstable.git
tag: hvmloader-with-separated-bios-v7



I am unable to build this:

libxl_paths.c: In function ‘libxl__seabios_path’:
libxl_paths.c:40: error: ‘SEABIOS_PATH’ undeclared (first use in this 
function)

libxl_paths.c:40: error: (Each undeclared identifier is reported only once
libxl_paths.c:40: error: for each function it appears in.)
libxl_paths.c: In function ‘libxl__ovmf_path’:
libxl_paths.c:45: error: ‘OVMF_PATH’ undeclared (first use in this function)

IIUIC these two are supposed to be generated into tools/config.h but 
they were not for me. I haven't looked any further yet.
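
For reference, this is roughly what the generated tools/config.h has to end
up providing for libxl_paths.c to build; the real definitions come from
./configure, so the paths below are purely illustrative:

    /* Illustrative only: the actual values depend on how the tree was
     * configured and are written out by ./configure. */
    #define SEABIOS_PATH "/usr/local/lib/xen/boot/seabios.bin"
    #define OVMF_PATH    "/usr/local/lib/xen/boot/ovmf.bin"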


-boris





[Xen-devel] [qemu-upstream-4.4-testing test] 99724: tolerable FAIL - PUSHED

2016-07-28 Thread osstest service owner
flight 99724 qemu-upstream-4.4-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99724/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 95501

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu e72488cdcf2208f2df334fa88c35b33e695fa93b
baseline version:
 qemuu ca3d95113903ccffe52af3922bbf7bd14a9acc78

Last test of basis    95501  2016-06-10 11:49:05 Z   48 days
Testing same since    99724  2016-07-27 18:09:58 Z    1 days    1 attempts


People who touched revisions under test:
  P J P 
  Stefan Hajnoczi 
  Stefano Stabellini 

jobs:
 build-amd64-xend pass
 build-i386-xend  pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops    pass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-qemuu-nested-amd    fail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass
 test-amd64-amd64-xl-qemuu-win7-amd64 pass
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-credit2  pass
 test-amd64-i386-freebsd10-i386   pass
 test-amd64-amd64-qemuu-nested-intel  fail
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-amd64-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-xl-multivcpu    pass
 test-amd64-amd64-pair    pass
 test-amd64-i386-pair pass
 test-amd64-amd64-libvirt-pair    pass
 test-amd64-i386-libvirt-pair pass
 test-amd64-amd64-pv  pass
 test-amd64-i386-pv   pass
 test-amd64-amd64-amd64-pvgrub    pass
 test-amd64-amd64-i386-pvgrub pass
 test-amd64-amd64-pygrub  pass
 test-amd64-amd64-xl-qcow2    pass
 test-amd64-i386-xl-raw   pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 pass
 test-amd64-amd64-libvirt-vhd pass
 test-amd64-amd64-xl-qemuu-winxpsp3   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-4.4-testing
+ revision=e72488cdcf2208f2df334fa88c35b33e695fa93b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();

Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Razvan Cojocaru
On 07/29/16 00:25, Julien Grall wrote:
> 
> 
> On 28/07/2016 22:05, Tamas K Lengyel wrote:
>> On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall 
>> wrote:
>> That's not how we do it with vm_event. Even on x86 we only selectively
>> set registers using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit it
>> not being documented in the header). As for "not exposing them" it's a
>> waste to declare separate structures for getting and setting. I'll
>> change my mind about that if Razvan is on the side that we should
>> start doing that, but I don't think that's the case at the moment.
> 
> Is there any rationale to only set a subset of the information you
> retrieved?

The perennial speed optimization, but mainly that setting everything can
have side-effects (on x86 I remember that at the time I wrote the
initial patch this had something to do with the control registers - if
you'd like I can try to follow the code again and try to remember what
the exact issue was).

My main use-case at the time was to simply set EIP (I needed to be able
to skip the current instruction if the introspection engine deemed it
malicious). I believe the assumption at the time was that setting the
GPRs is enough, and that this can be extended in the future by
interested parties.
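
For illustration, a rough listener-side sketch of that use-case (the field
names are assumptions based on the public vm_event interface, and insn_len
is a hypothetical value, e.g. obtained from a disassembler):

    vm_event_response_t rsp = {};            /* req is the request read from the ring */

    rsp.version = VM_EVENT_INTERFACE_VERSION;
    rsp.vcpu_id = req.vcpu_id;
    rsp.reason  = req.reason;
    rsp.flags   = (req.flags & VM_EVENT_FLAG_VCPU_PAUSED) | VM_EVENT_FLAG_SET_REGISTERS;

    rsp.data.regs.x86 = req.data.regs.x86;   /* start from the state Xen reported */
    rsp.data.regs.x86.rip += insn_len;       /* skip the offending instruction */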


Thanks,
Razvan



[Xen-devel] [xen-4.6-testing test] 99720: regressions - FAIL

2016-07-28 Thread osstest service owner
flight 99720 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99720/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96031

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start  fail   like 96006
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 96031
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 96031
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 96031
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 96031

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore          fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check   fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check       fail  never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass

version targeted for testing:
 xen  dfe85d302f5f127c4ab5e2a5e8bcd6a964f7218c
baseline version:
 xen  285248d91b20bc8245f9241e21d3e7b23f67b550

Last test of basis    96031  2016-06-20 23:50:23 Z   38 days
Testing same since    99720  2016-07-27 18:01:38 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvops    pass
 build-armhf-pvops    pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 

[Xen-devel] [xen-unstable-smoke test] 99768: tolerable all pass - PUSHED

2016-07-28 Thread osstest service owner
flight 99768 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99768/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen  6a9b2b6f76c40bfe5f8d645d9a8f6e7db4f93be8
baseline version:
 xen  b29f4c1e37c78874048a34700a967973bb31fbf9

Last test of basis    99750  2016-07-28 12:20:23 Z    0 days
Testing same since    99768  2016-07-29 01:01:47 Z    0 days    1 attempts


People who touched revisions under test:
  Julien Grall 
  Stefano Stabellini 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=6a9b2b6f76c40bfe5f8d645d9a8f6e7db4f93be8
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 6a9b2b6f76c40bfe5f8d645d9a8f6e7db4f93be8
+ branch=xen-unstable-smoke
+ revision=6a9b2b6f76c40bfe5f8d645d9a8f6e7db4f93be8
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.7-testing
+ '[' x6a9b2b6f76c40bfe5f8d645d9a8f6e7db4f93be8 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src '[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' 

[Xen-devel] [xen-unstable test] 99719: regressions - FAIL

2016-07-28 Thread osstest service owner
flight 99719 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99719/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair15 debian-install/dst_host   fail REGR. vs. 97664

Regressions which are regarded as allowable (not blocking):
 build-amd64-rumpuserxen   6 xen-buildfail   like 97664
 build-i386-rumpuserxen6 xen-buildfail   like 97664
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 97664
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 97664
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 97664
 test-amd64-amd64-xl-rtds  9 debian-install   fail   like 97664
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 97664
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeatfail   like 97664

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore          fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check  fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check   fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check       fail  never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass

version targeted for testing:
 xen  d5438accceecc8172db2d37d98b695eb8bc43afc
baseline version:
 xen  e763268781d341fef05d461f3057e6ced5e033f2

Last test of basis    97664  2016-07-19 15:37:51 Z    9 days
Failing since         97709  2016-07-20 11:27:05 Z    8 days    5 attempts
Testing same since    99719  2016-07-27 18:01:10 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Boris Ostrovsky 
  Dario Faggioli 
  David Scott 
  George Dunlap 
  Ian Jackson 
  Jonathan Daugherty 
  Juergen Gross 
  Julien Grall 
  Marek Marczykowski-Górecki 
  Roger Pau Monne 
  Roger Pau Monné 
  Sander Eikelenboom 
  Stefano 

[Xen-devel] [PATCH] x86/vMsi-x: check whether the msixtbl_list has been initialized or not when accessing it

2016-07-28 Thread Chao Gao
MSI-X table initialization was deferred by commit
74c6dc2d0ac4dcab0c6243cdf6ed550c1532b798. If an assigned device does not support
MSI-X, the msixtbl_list won't be initialized. However, both of the following paths
XEN_DOMCTL_bind_pt_irq
    pt_irq_create_bind
        msixtbl_pt_register
and
XEN_DOMCTL_unbind_pt_irq
    pt_irq_destroy_bind
        msixtbl_pt_unregister
fail to check for this case and will consequently cause a Xen panic.
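
For context, a minimal sketch (not part of the patch) of the property the
new check relies on, assuming the domain structure starts out zeroed and the
list head is only set up by the deferred initialization mentioned above:

    /* INIT_LIST_HEAD() makes an empty list head point at itself, so ->next
     * can only be NULL while the list has never been initialized at all. */
    static inline bool_t msixtbl_list_initialised(const struct domain *d)
    {
        return d->arch.hvm_domain.msixtbl_list.next != NULL;
    }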

Signed-off-by: Chao Gao 
---
 xen/arch/x86/hvm/vmsi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index e418b98..e0d710b 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -449,7 +449,7 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
 ASSERT(pcidevs_locked());
 ASSERT(spin_is_locked(&d->event_lock));
 
-if ( !has_vlapic(d) )
+if ( !has_vlapic(d) || !d->arch.hvm_domain.msixtbl_list.next )
 return -ENODEV;
 
 /*
@@ -519,7 +519,7 @@ void msixtbl_pt_unregister(struct domain *d, struct pirq *pirq)
 ASSERT(pcidevs_locked());
 ASSERT(spin_is_locked(&d->event_lock));
 
-if ( !has_vlapic(d) )
+if ( !has_vlapic(d) || !d->arch.hvm_domain.msixtbl_list.next )
 return;
 
 irq_desc = pirq_spin_lock_irq_desc(pirq, NULL);
-- 
1.8.3.1




Re: [Xen-devel] [PATCH v2 00/15] xen/arm: P2M clean-up fixes

2016-07-28 Thread Stefano Stabellini
Committed, thanks

On Thu, 28 Jul 2016, Julien Grall wrote:
> Hello all,
> 
> This patch series contains a bunch of clean-up and fixes for the P2M code on
> ARM. The major changes are:
> - Deduce the memory attributes from the p2m type
> - Switch to read-write lock to improve performance
> - Simplify the TLB flush for a given p2m
> 
> For all the changes, see each individual patch.
> 
> I have provided a branch with all the patches applied on my repo:
> git://xenbits.xen.org/people/julieng/xen-unstable.git branch p2m-cleanup-v2.
> 
> Yours sincerely,
> 
> Julien Grall (15):
>   xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
>   xen/arm: p2m: Use a whitelist rather than blacklist in
> get_page_from_gfn
>   xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO
>   xen/arm: p2m: Find the memory attributes based on the p2m type
>   xen/arm: p2m: Remove unnecessary locking
>   xen/arm: p2m: Introduce p2m_{read,write}_{,un}lock helpers
>   xen/arm: p2m: Switch the p2m lock from spinlock to rwlock
>   xen/arm: Don't call p2m_alloc_table from arch_domain_create
>   xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
>   xen/arm: p2m: Don't need to restore the state for an idle vCPU.
>   xen/arm: p2m: Rework the context switch to another VTTBR in
> flush_tlb_domain
>   xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
>   xen/arm: Don't export flush_tlb_domain
>   xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb
>   xen/arm: p2m: Pass the p2m in parameter rather the domain when it is
> possible
> 
>  xen/arch/arm/domain.c  |   3 -
>  xen/arch/arm/p2m.c | 194 
> -
>  xen/arch/arm/traps.c   |   2 +-
>  xen/include/asm-arm/domain.h   |   1 -
>  xen/include/asm-arm/flushtlb.h |   3 -
>  xen/include/asm-arm/p2m.h  |  25 +++---
>  6 files changed, 113 insertions(+), 115 deletions(-)
> 
> -- 
> 1.9.1
> 



Re: [Xen-devel] [PATCH v2 11/15] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain

2016-07-28 Thread Stefano Stabellini
On Thu, 28 Jul 2016, Julien Grall wrote:
> The current implementation of flush_tlb_domain relies on the domain
> having a single p2m. With the upcoming altp2m feature, a single domain
> may have several p2ms. So we would need to switch to the correct p2m in
> order to flush the TLBs.
> 
> Rather than checking whether the domain is not the current domain, check
> whether the VTTBR is different. The resulting assembly code is much
> smaller: from 38 instructions (+ 2 function calls) to 22 instructions.
> 
> Signed-off-by: Julien Grall 

Acked-by: Stefano Stabellini 


>  xen/arch/arm/p2m.c | 18 +++---
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index aff5906..7ee0171 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -151,24 +151,28 @@ void p2m_restore_state(struct vcpu *n)
>  
>  void flush_tlb_domain(struct domain *d)
>  {
> +struct p2m_domain *p2m = &d->arch.p2m;
>  unsigned long flags = 0;
> +uint64_t ovttbr;
>  
>  /*
> - * Update the VTTBR if necessary with the domain d. In this case,
> - * it's only necessary to flush TLBs on every CPUs with the current VMID
> - * (our domain).
> + * ARM only provides an instruction to flush TLBs for the current
> + * VMID. So switch to the VTTBR of a given P2M if different.
>   */
> -if ( d != current->domain )
> +ovttbr = READ_SYSREG64(VTTBR_EL2);
> +if ( ovttbr != p2m->vttbr )
>  {
>  local_irq_save(flags);
> -p2m_load_VTTBR(d);
> +WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
> +isb();
>  }
>  
>  flush_tlb();
>  
> -if ( d != current->domain )
> +if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
>  {
> -p2m_load_VTTBR(current->domain);
> +WRITE_SYSREG64(ovttbr, VTTBR_EL2);
> +isb();
>  local_irq_restore(flags);
>  }
>  }
> -- 
> 1.9.1
> 



Re: [Xen-devel] [PATCH v2 09/15] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain

2016-07-28 Thread Stefano Stabellini
On Thu, 28 Jul 2016, Julien Grall wrote:
> The field vttbr holds the base address of the translation table for the
> guest. Its value will depend on how the p2m has been initialized and
> will only be used by the P2M code.
> 
> So move the field from arch_domain to p2m_domain. This will also ease
> the implementation of altp2m.
> 
> Signed-off-by: Julien Grall 

Reviewed-by: Stefano Stabellini 


> ---
> Changes in v2:
> - Forgot to add my signed-off-by
> - Fix typo in the commit message
> ---
>  xen/arch/arm/p2m.c   | 11 +++
>  xen/arch/arm/traps.c |  2 +-
>  xen/include/asm-arm/domain.h |  1 -
>  xen/include/asm-arm/p2m.h|  3 +++
>  4 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 512fd7d..7e524fe 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>  
>  static void p2m_load_VTTBR(struct domain *d)
>  {
> +struct p2m_domain *p2m = &d->arch.p2m;
> +
>  if ( is_idle_domain(d) )
>  return;
> -BUG_ON(!d->arch.vttbr);
> -WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
> +ASSERT(p2m->vttbr);
> +
> +WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>  isb(); /* Ensure update is visible */
>  }
>  
> @@ -1297,8 +1301,7 @@ static int p2m_alloc_table(struct domain *d)
>  
>  p2m->root = page;
>  
> -d->arch.vttbr = page_to_maddr(p2m->root)
> -| ((uint64_t)p2m->vmid&0xff)<<48;
> +p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
>  
>  /*
>   * Make sure that all TLBs corresponding to the new VMID are flushed
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 2482a20..f509a00 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -880,7 +880,7 @@ void vcpu_show_registers(const struct vcpu *v)
>  ctxt.ifsr32_el2 = v->arch.ifsr;
>  #endif
>  
> -ctxt.vttbr_el2 = v->domain->arch.vttbr;
> +ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
> 
>  _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
>  }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 4e9d8bf..9452fcd 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -48,7 +48,6 @@ struct arch_domain
>  
>  /* Virtual MMU */
>  struct p2m_domain p2m;
> -uint64_t vttbr;
>  
>  struct hvm_domain hvm_domain;
>  gfn_t *grant_table_gfn;
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index ce28e8a..53c4d78 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -32,6 +32,9 @@ struct p2m_domain {
>  /* Current VMID in use */
>  uint8_t vmid;
>  
> +/* Current Translation Table Base Register for the p2m */
> +uint64_t vttbr;
> +
>  /*
>   * Highest guest frame that's ever been mapped in the p2m
>   * Only takes into account ram and foreign mapping
> -- 
> 1.9.1
> 



Re: [Xen-devel] [PATCH v2 02/15] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn

2016-07-28 Thread Stefano Stabellini
On Thu, 28 Jul 2016, Julien Grall wrote:
> Currently, the check in get_page_from_gfn is using a blacklist. This is
> very fragile because we may forget to update the check when a new p2m
> type is added.
> 
> To avoid any possible issue, use a whitelist. Any type backed by a RAM
> page could potentially be valid. The check is borrowed from x86.
> 
> Note that with this change, it is no longer possible to retrieve a page when
> the p2m type is p2m_iommu_map_*. This is fine because they are special
> mappings for the direct mapping workaround and the associated GFN should not
> be used at all by callers of get_page_from_gfn.
> 
> Signed-off-by: Julien Grall 

Reviewed-by: Stefano Stabellini 


> ---
> Changes in v2:
> - Update the commit message about iommu_mappings
> ---
>  xen/include/asm-arm/p2m.h | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 3091c04..78d37ab 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -104,9 +104,16 @@ typedef enum {
>  #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |\
> p2m_to_mask(p2m_ram_ro))
>  
> +/* Grant mapping types, which map to a real frame in another VM */
> +#define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
> + p2m_to_mask(p2m_grant_map_ro))
> +
>  /* Useful predicates */
>  #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
>  #define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
> +#define p2m_is_any_ram(_t) (p2m_to_mask(_t) &   \
> +(P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
> + p2m_to_mask(p2m_map_foreign)))
>  
>  static inline
>  void p2m_mem_access_emulate_check(struct vcpu *v,
> @@ -224,7 +231,7 @@ static inline struct page_info *get_page_from_gfn(
>  if (t)
>  *t = p2mt;
>  
> -if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
> +if ( !p2m_is_any_ram(p2mt) )
>  return NULL;
>  
>  if ( !mfn_valid(mfn) )
> -- 
> 1.9.1
> 



[Xen-devel] [linux-3.18 test] 99718: regressions - FAIL

2016-07-28 Thread osstest service owner
flight 99718 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99718/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-amd64-pvgrub  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-winxpsp3  9 windows-install  fail REGR. vs. 96188
 test-armhf-armhf-xl-vhd   9 debian-di-install fail REGR. vs. 96188
 test-amd64-amd64-libvirt-xsm  6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-freebsd10-amd64  9 freebsd-install fail REGR. vs. 96188
 test-amd64-i386-qemut-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96188
 test-amd64-i386-qemuu-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-win7-amd64  9 windows-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-libvirt   9 debian-install fail REGR. vs. 96188
 test-amd64-amd64-i386-pvgrub  6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install fail REGR. vs. 96188
 test-amd64-i386-qemut-rhel6hvm-intel  9 redhat-install fail REGR. vs. 96188
 test-amd64-amd64-xl   6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-xl-xsm  9 debian-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96188
 test-amd64-amd64-libvirt  6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-xl-multivcpu  9 debian-install   fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-libvirt-vhd  6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-xl-raw  9 debian-di-install fail REGR. vs. 96188
 test-amd64-i386-qemuu-rhel6hvm-intel  9 redhat-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-win7-amd64  9 windows-install fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 96188
 test-armhf-armhf-libvirt  9 debian-install fail REGR. vs. 96188
 test-amd64-i386-libvirt-xsm   9 debian-install fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-pvh-amd   6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-win7-amd64  6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot   fail REGR. vs. 96188
 test-amd64-amd64-xl-xsm   6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-credit2   6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-xl-cubietruck  9 debian-install  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-win7-amd64  6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-freebsd10-i386  9 freebsd-install fail REGR. vs. 96188
 test-amd64-i386-xl  9 debian-install fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-debianhvm-amd64  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-qcow2 6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-xl-xsm   9 debian-install fail REGR. vs. 96188
 test-amd64-amd64-pygrub   6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-xl-credit2   9 debian-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 96188
 test-armhf-armhf-libvirt-xsm  9 debian-install fail REGR. vs. 96188
 test-amd64-amd64-qemuu-nested-amd  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-winxpsp3  9 windows-install  fail REGR. vs. 96188
 test-armhf-armhf-xl   9 debian-install fail REGR. vs. 96188
 test-armhf-armhf-xl-arndale   9 debian-install fail REGR. vs. 96188
 test-amd64-i386-libvirt-pair 15 debian-install/dst_host   fail REGR. vs. 96188
 test-amd64-amd64-pair 9 xen-boot/src_host fail REGR. vs. 96188
 

Re: [Xen-devel] [PATCH] mem_access: Use monitor_traps instead of mem_access_send_req

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 2:38 PM, Julien Grall  wrote:
> Hello Tamas,
>
>
> On 28/07/2016 20:35, Tamas K Lengyel wrote:
>>
>> The two functions monitor_traps and mem_access_send_req duplicate
>> some of the same functionality. The mem_access_send_req however leaves a
>> lot of the standard vm_event fields to be filled by other functions.
>>
>> Since mem_access events go on the monitor ring in this patch we
>> consolidate
>> all paths to use monitor_traps to place events on the ring and to fill in
>> the common parts of the requests.
>>
>> Signed-off-by: Tamas K Lengyel 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> Cc: Jan Beulich 
>> Cc: Andrew Cooper 
>> Cc: Razvan Cojocaru 
>> Cc: George Dunlap 
>> ---
>>  xen/arch/arm/p2m.c| 69
>> +++
>>  xen/arch/x86/hvm/hvm.c| 16 ++---
>>  xen/arch/x86/hvm/monitor.c|  6 
>>  xen/arch/x86/mm/p2m.c | 24 ++
>>  xen/common/mem_access.c   | 11 ---
>>  xen/include/asm-x86/hvm/monitor.h |  2 ++
>>  xen/include/asm-x86/p2m.h | 13 +---
>>  xen/include/xen/mem_access.h  |  7 
>>  8 files changed, 63 insertions(+), 85 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index d82349c..df898a3 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -5,7 +5,7 @@
>>  #include 
>>  #include 
>>  #include 
>> -#include 
>> +#include 
>>  #include 
>>  #include 
>>  #include 
>> @@ -1642,12 +1642,41 @@ void __init setup_virt_paging(void)
>>  smp_call_function(setup_virt_paging_one, (void *)val, 1);
>>  }
>>
>> +static int
>> +__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec
>> npfec,
>> +  xenmem_access_t xma)
>> +{
>> +struct vcpu *v = current;
>> +vm_event_request_t req = {};
>> +bool_t sync = (xma == XENMEM_access_n2rwx) ? 0 : 1;
>> +
>> +req.reason = VM_EVENT_REASON_MEM_ACCESS;
>> +
>> +/* Send request to mem access subscriber */
>> +req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
>> +req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
>> +if ( npfec.gla_valid )
>> +{
>> +req.u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
>> +req.u.mem_access.gla = gla;
>> +
>> +if ( npfec.kind == npfec_kind_with_gla )
>> +req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
>> +else if ( npfec.kind == npfec_kind_in_gpt )
>> +req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
>> +}
>> +req.u.mem_access.flags |= npfec.read_access? MEM_ACCESS_R : 0;
>> +req.u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
>> +req.u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
>> +
>> +return monitor_traps(v, sync, &req);
>> +}
>> +
>>  bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec
>> npfec)
>>  {
>>  int rc;
>>  bool_t violation;
>>  xenmem_access_t xma;
>> -vm_event_request_t *req;
>>  struct vcpu *v = current;
>>  struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
>>
>> @@ -1734,40 +1763,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t
>> gla, const struct npfec npfec)
>>  return false;
>>  }
>>
>> -req = xzalloc(vm_event_request_t);
>> -if ( req )
>> -{
>> -req->reason = VM_EVENT_REASON_MEM_ACCESS;
>> -
>> -/* Pause the current VCPU */
>> -if ( xma != XENMEM_access_n2rwx )
>> -req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
>> -
>> -/* Send request to mem access subscriber */
>> -req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
>> -req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
>> -if ( npfec.gla_valid )
>> -{
>> -req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
>> -req->u.mem_access.gla = gla;
>> -
>> -if ( npfec.kind == npfec_kind_with_gla )
>> -req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
>> -else if ( npfec.kind == npfec_kind_in_gpt )
>> -req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
>> -}
>> -req->u.mem_access.flags |= npfec.read_access? MEM_ACCESS_R :
>> 0;
>> -req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W :
>> 0;
>> -req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X :
>> 0;
>> -req->vcpu_id = v->vcpu_id;
>> -
>> -mem_access_send_req(v->domain, req);
>> -xfree(req);
>> -}
>> -
>> -/* Pause the current VCPU */
>> -if ( xma != XENMEM_access_n2rwx )
>> -vm_event_vcpu_pause(v);
>> +if ( __p2m_mem_access_send_req(gpa, gla, npfec, xma) < 0 )
>> +domain_crash(v->domain);
>
>
> This patch 

Re: [Xen-devel] [PATCH] mem_access: Use monitor_traps instead of mem_access_send_req

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 2:54 PM, Andrew Cooper
 wrote:
> On 28/07/2016 20:35, Tamas K Lengyel wrote:
>> The two functions monitor_traps and mem_access_send_req duplicate
>> some of the same functionality. The mem_access_send_req however leaves a
>> lot of the standard vm_event fields to be filled by other functions.
>>
>> Since mem_access events go on the monitor ring in this patch we consolidate
>> all paths to use monitor_traps to place events on the ring and to fill in
>> the common parts of the requests.
>>
>> Signed-off-by: Tamas K Lengyel 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> Cc: Jan Beulich 
>> Cc: Andrew Cooper 
>> Cc: Razvan Cojocaru 
>> Cc: George Dunlap 
>
> Common and x86 bits Reviewed-by: Andrew Cooper
> , but a few suggestions.
>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index d82349c..df898a3 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1642,12 +1642,41 @@ void __init setup_virt_paging(void)
>>  smp_call_function(setup_virt_paging_one, (void *)val, 1);
>>  }
>>
>> +static int
>> +__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec 
>> npfec,
>> +  xenmem_access_t xma)
>> +{
>> +struct vcpu *v = current;
>> +vm_event_request_t req = {};
>> +bool_t sync = (xma == XENMEM_access_n2rwx) ? 0 : 1;
>> +
>> +req.reason = VM_EVENT_REASON_MEM_ACCESS;
>> +
>> +/* Send request to mem access subscriber */
>> +req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
>> +req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
>
> I see this is only code motion, but ~PAGE_MASK here instead of
> open-coding it.

Sounds good.

>
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index daaee1d..688370d 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -1846,11 +1846,12 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned 
>> long gla,
>>  }
>>  }
>>
>> -if ( p2m_mem_access_check(gpa, gla, npfec, &req_ptr) )
>> -{
>> +sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
>> +
>> +if ( !sync ) {
>
> Please keep this brace on the newline (inline with the style), even if
> it doesn't match the style of the else clause.

Sure.

>
>>  fall_through = 1;
>>  } else {
>> -/* Rights not promoted, vcpu paused, work here is done */
>> +/* Rights not promoted (aka. sync event), work here is done 
>> */
>>  rc = 1;
>>  goto out_put_gfn;
>>  }
>> @@ -1956,7 +1957,12 @@ out:
>>  }
>>  if ( req_ptr )
>>  {
>> -mem_access_send_req(currd, req_ptr);
>> +if ( hvm_monitor_mem_access(curr, sync, req_ptr) < 0 )
>> +{
>> +/* Crash the domain */
>> +rc = 0;
>> +}
>
> It is reasonable to omit the braces here.

Yea but with the comment being in-between I just didn't like the look
of it without the braces..

>
>> +
>>  xfree(req_ptr);
>>  }
>>  return rc;
>> diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
>> index 7277c12..c7285c6 100644
>> --- a/xen/arch/x86/hvm/monitor.c
>> +++ b/xen/arch/x86/hvm/monitor.c
>> @@ -152,6 +152,12 @@ int hvm_monitor_cpuid(unsigned long insn_length)
>>  return monitor_traps(curr, 1, );
>>  }
>>
>> +int hvm_monitor_mem_access(struct vcpu* v, bool_t sync,
>
> vcpu *v.
>
> ~Andrew

Thanks,
Tamas



Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 4:03 PM, Julien Grall  wrote:
>
>
> On 28/07/2016 22:33, Tamas K Lengyel wrote:
>>
>> On Jul 28, 2016 15:25, "Julien Grall" wrote:
>>>
>>>
>>>
>>>
>>> On 28/07/2016 22:05, Tamas K Lengyel wrote:


 On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall wrote:

 That's not how we do it with vm_event. Even on x86 we only selectively
 set registers using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit it
 not being documented in the header). As for "not exposing them" it's a
 waste to declare separate structures for getting and setting. I'll
 change my mind about that if Razvan is on the side that we should
 start doing that, but I don't think that's the case at the moment.
>>>
>>>
>>>
>>> Is there any rationale to only set a subset of the information you
>>
>> retrieved?
>>>
>>>
>>
>> I just did a testrun with setting every register through this method to
>> 0 other than pc, and it resulted in a hypervisor crash. Not sure if it's
>> just my setup or not though, so I'm still poking at it. However, I don't
>> really see a use-case where setting the ttbr regs would be required via
>> the fast method, so it simply may not be worth digging into it more at
>> this time.
>
>
> To confirm, do you mean setting CPSR, TTBR0_EL1, TTBR1_EL1 to 0?
>
> TTBR*_EL1 are safe to set to any values (they are directly accessible by the
> guest anyway). However, this is not the case for CPSR. From my understanding
> of the ARM ARM (B1-1150 in ARM DDI 0406C.b), writing 0 to M[4:0] will lead
> to unpredictable behavior (which could cause a hypervisor trap).
>
> Can you copy/paste the hypervisor crash log?
>

OK, the issue I had was that right now mem_access on ARM doesn't send
the registers (yet), as it needs my monitor_traps work from the other
patch, so I kept setting all registers to 0. Rebasing on top of my
other patch, I was now able to verify that indeed only setting cpsr to
an arbitrary value (like 0) can cause a hypervisor crash, as follows:

(XEN) CPU1: Unexpected Trap: Prefetch Abort
(XEN) [ Xen-4.8-unstable  arm32  debug=y  Not tainted ]
(XEN) CPU:1
(XEN) PC: c09195f4
(XEN) CPSR:   001a MODE:Hypervisor
(XEN)  R0: 600f0013 R1: 390038ff R2: c0dd4540 R3: 38ff38ff
(XEN)  R4:  R5: c0dd4240 R6: c7fb0a64 R7: 0001
(XEN)  R8: c7eaaf60 R9: c0dd4240 R10:c7eaaf58 R11: R12:
(XEN) USR: SP: beb4e20c LR: 7f5883ed
(XEN) SVC: SP: c277ddc0 LR: c02caf58 SPSR:800f0030
(XEN) ABT: SP: c0dd9a4c LR: c02147c0 SPSR:80070193
(XEN) UND: SP: c0dd9a58 LR: c0214880 SPSR:20060093
(XEN) IRQ: SP: c0dd9a40 LR: c0214800 SPSR:600f0193
(XEN) FIQ: SP: c0dd9a64 LR: c0dd9a64 SPSR:
(XEN) FIQ: R8:  R9:  R10: R11: R12:
(XEN)
(XEN)  SCTLR: 10c5387d
(XEN)TCR: 
(XEN)  TTBR0: 426b806a
(XEN)  TTBR1: 4020406a
(XEN)   IFAR: b6e75480, IFSR: 0007
(XEN)   DFAR: 7f5ab024, DFSR: 0017
(XEN)
(XEN)   VTCR_EL2: 80003558
(XEN)  VTTBR_EL2: 0002bfa86000
(XEN)
(XEN)  SCTLR_EL2: 30cd187f
(XEN)HCR_EL2: 0038663f
(XEN)  TTBR0_EL2: bdfea000
(XEN)
(XEN)ESR_EL2: 8406
(XEN)  HPFAR_EL2: 0001c810
(XEN)  HDFAR: e0800f00
(XEN)  HIFAR: c09195f4
(XEN)
(XEN) Xen BUG at traps.c:946
(XEN) [ Xen-4.8-unstable  arm32  debug=y  Not tainted ]
(XEN) CPU:1
(XEN) PC: 0025ee4c traps.c#show_guest_stack+0xcc/0x274
(XEN) CPSR:   811a MODE:Hypervisor
(XEN)  R0: 40064000 R1: 43fcff58 R2: 43ce9d00 R3: 000a
(XEN)  R4: 43fcff9c R5: 40064000 R6: 00282608 R7: 43fcff58
(XEN)  R8: c7eaaf60 R9: c0dd4240 R10:c7eaaf58 R11:43fcff0c R12:
(XEN) HYP: SP: 43fcfedc LR: 0025f69c
(XEN)
(XEN)   VTCR_EL2: 80003558
(XEN)  VTTBR_EL2: 0002bfa86000
(XEN)
(XEN)  SCTLR_EL2: 30cd187f
(XEN)HCR_EL2: 0038663f
(XEN)  TTBR0_EL2: bdfea000
(XEN)
(XEN)ESR_EL2: 
(XEN)  HPFAR_EL2: 0001c810
(XEN)  HDFAR: e0800f00
(XEN)  HIFAR: c09195f4
(XEN)
(XEN) Xen stack trace from sp=43fcfedc:
(XEN)00282608 43fcff58 c7eaaf60 c0dd4240 43fcff9c 00281a74 00282608 43fcff58
(XEN)c7eaaf60 c0dd4240 c7eaaf58 43fcff2c 0025f69c 43fcff58 00281a74 00282608
(XEN)43fcff58 c7eaaf60 c0dd4240 43fcff3c 0025f848 0030b614 00281a74 43fcff4c
(XEN)0025f94c  0001 43fcff54 002650f0 43fcff58 00264e90 600f0013
(XEN)390038ff c0dd4540 38ff38ff  c0dd4240 c7fb0a64 0001 c7eaaf60
(XEN)c0dd4240 c7eaaf58   43fcff9c 7f5883ed c09195f4 001a
(XEN)0007 beb4e20c c0dd9a40 c0214800 c277ddc0 c02caf58 c0dd9a4c c02147c0
(XEN)c0dd9a58 c0214880      c0dd9a64
(XEN)c0dd9a64 800f0030 80070193 20060093 600f0193   
(XEN)
(XEN) Xen call trace:
(XEN)[<0025ee4c>] 

Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Julien Grall



On 28/07/2016 22:33, Tamas K Lengyel wrote:

On Jul 28, 2016 15:25, "Julien Grall" wrote:




On 28/07/2016 22:05, Tamas K Lengyel wrote:


On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall wrote:

That's not how we do it with vm_event. Even on x86 we only selectively
set registers using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit it
not being documented in the header). As for "not exposing them" it's a
waste to declare separate structures for getting and setting. I'll
change my mind about that if Razvan is on the side that we should
start doing that, but I don't think that's the case at the moment.



Is there any rationale to only set a subset of the information you

retrieved?




I just did a testrun with setting every register through this method to
0 other than pc, and it resulted in a hypervisor crash. Not sure if it's
just my setup or not though, so I'm still poking at it. However, I don't
really see a use-case where setting the ttbr regs would be required via
the fast method, so it simply may not be worth digging into it more at
this time.


To confirm, do you mean setting CPSR, TTBR0_EL1, TTBR1_EL1 to 0?

TTBR*_EL1 are safe to set to any values (they are directly accessible by 
the guest anyway). However, this is not the case for CPSR. From my 
understanding of the ARM ARM (B1-1150 in ARM DDI 0406C.b), writing 0 to 
M[4:0] will lead to unpredictable behavior (which could cause a 
hypervisor trap).
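
To illustrate, a minimal sanity-check sketch of that constraint (mode 
encodings taken from the ARM ARM; purely an example, not proposed Xen code):

    /* Reject a CPSR whose M[4:0] field is not a defined AArch32 mode. */
    static bool_t cpsr_mode_is_valid(uint32_t cpsr)
    {
        switch ( cpsr & 0x1f )        /* M[4:0] */
        {
        case 0x10: /* USR */ case 0x11: /* FIQ */ case 0x12: /* IRQ */
        case 0x13: /* SVC */ case 0x16: /* MON */ case 0x17: /* ABT */
        case 0x1a: /* HYP */ case 0x1b: /* UND */ case 0x1f: /* SYS */
            return 1;
        default:
            return 0;
        }
    }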


Can you copy/paste the hypervisor crash log?

Thank you,

--
Julien Grall



Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
On Jul 28, 2016 15:25, "Julien Grall"  wrote:
>
>
>
> On 28/07/2016 22:05, Tamas K Lengyel wrote:
>>
>> On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall 
wrote:
>> That's not how we do it with vm_event. Even on x86 we only selectively
>> set registers using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit it
>> not being documented in the header). As for "not exposing them" it's a
>> waste to declare separate structures for getting and setting. I'll
>> change my mind about that if Razvan is on the side that we should
>> start doing that, but I don't think that's the case at the moment.
>
>
> Is there any rationale to only set a subset of the information you
retrieved?
>

I just did a testrun with setting every register through this method to 0
other than pc, and it resulted in a hypervisor crash. Not sure if it's just my
setup or not though, so I'm still poking at it. However, I don't really see
a use-case where setting the ttbr regs would be required via the fast method,
so it simply may not be worth digging into it more at this time.

Tamas


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Julien Grall



On 28/07/2016 22:05, Tamas K Lengyel wrote:

On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall  wrote:
That's not how we do it with vm_event. Even on x86 we only selectively
set registers using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit it
not being documented in the header). As for "not exposing them" it's a
waste to declare separate structures for getting and setting. I'll
change my mind about that if Razvan is on the side that we should
start doing that, but I don't think that's the case at the moment.


Is there any rationale to only set a subset of the information you 
retrieved?


--
Julien Grall



Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall  wrote:
>
>
> On 28/07/2016 21:48, Tamas K Lengyel wrote:
>>
>> On Thu, Jul 28, 2016 at 2:41 PM, Andrew Cooper
>>  wrote:
>>>
>>> On 28/07/2016 21:36, Tamas K Lengyel wrote:

 On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
  wrote:
>
> On 28/07/2016 21:05, Tamas K Lengyel wrote:
>>
>> Add support for getting/setting registers through vm_event on ARM.
>> The set of registers can be expanded in the future to include other
>> registers
>> as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and
>> CPSR.
>>
>> Signed-off-by: Tamas K Lengyel 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> Cc: Razvan Cojocaru 
>> Cc: Jan Beulich 
>> Cc: Andrew Cooper 
>
> For the x86 and common bits, Reviewed-by: Andrew Cooper
> 
>
> However,
>
>> +#include 
>> +#include 
>> +
>> +void vm_event_fill_regs(vm_event_request_t *req)
>> +{
>> +const struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +
>> +req->data.regs.arm.cpsr = regs->cpsr;
>> +req->data.regs.arm.pc = regs->pc;
>> +req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
>> +req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
>> +req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
>> +}
>> +
>> +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
>> +{
>> +struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +
>> +regs->cpsr = rsp->data.regs.arm.cpsr;
>> +regs->pc = rsp->data.regs.arm.pc;
>> +v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
>> +v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
>> +v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
>
> Not knowing anything about ARM, but this looks like it is missing some
> sanity/plausibility checks (to protect Xen against accidental
> clobbering
> from the vm_event listener), and some WRITE_SYSREG() to reload the new
> values (unless this is done unconditionally later, at which point you
> should at least leave a comment here saying so).
>
 This function only ever gets called if the vm_event response
 specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
 clobbering is not possible.
>>>
>>>
>>> That isn't my point.  Are there any reserved bits in the registers
>>> themselves which could cause Xen to fault when it tries to reload?  If
>>> all that happens is a domain_crash() then ok, but if Xen falls over with
>>> a fatal fault, that should be avoided.
>
>
> The TTBR*_EL1 are registers that can be set by the guest without any trap to
> the hypervisor. So they will not cause Xen to fault even writing to any
> reserved bit.
>
>>
>> I agree. At the moment the only register I actually need access
>> through vm_event setting is PC so I'll just leave the other registers
>> out and document it in the vm_event header.
>
>
> I am starting to be really annoyed with this kind of sentence. It is not
> difficult to get things correct from the beginning.
>
> You either set/get them or do not expose them at all. But please avoid
> having half of an implementation just because your use case does not need it.

That's not how we do it with vm_event. Even on x86 we only selectively
set registers using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit it
not being documented in the header). As for "not exposing them" it's a
waste to declare separate structures for getting and setting. I'll
change my mind about that if Razvan is on the side that we should
start doing that, but I don't think that's the case at the moment.
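
To make the convention concrete, the resume side looks roughly like this
(assumed shape, using the flag above and the helper added by the patch):

    /* Only touch guest state when the listener explicitly asked for it. */
    if ( rsp->flags & VM_EVENT_FLAG_SET_REGISTERS )
        vm_event_set_registers(v, rsp);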

Cheers,
Tamas



Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Julien Grall



On 28/07/2016 21:48, Tamas K Lengyel wrote:

On Thu, Jul 28, 2016 at 2:41 PM, Andrew Cooper
 wrote:

On 28/07/2016 21:36, Tamas K Lengyel wrote:

On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
 wrote:

On 28/07/2016 21:05, Tamas K Lengyel wrote:

Add support for getting/setting registers through vm_event on ARM.
The set of registers can be expanded in the future to include other registers
as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and CPSR.

Signed-off-by: Tamas K Lengyel 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Razvan Cojocaru 
Cc: Jan Beulich 
Cc: Andrew Cooper 

For the x86 and common bits, Reviewed-by: Andrew Cooper


However,


+#include 
+#include 
+
+void vm_event_fill_regs(vm_event_request_t *req)
+{
+const struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+req->data.regs.arm.cpsr = regs->cpsr;
+req->data.regs.arm.pc = regs->pc;
+req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
+req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
+req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
+}
+
+void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
+{
+struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+regs->cpsr = rsp->data.regs.arm.cpsr;
+regs->pc = rsp->data.regs.arm.pc;
+v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
+v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
+v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;

Not knowing anything about ARM, but this looks like it is missing some
sanity/plausibility checks (to protect Xen against accidental clobbering
from the vm_event listener), and some WRITE_SYSREG() to reload the new
values (unless this is done unconditionally later, at which point you
should at least leave a comment here saying so).


This function only ever gets called if the vm_event response
specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
clobbering is not possible.


That isn't my point.  Are there any reserved bits in the registers
themselves which could cause Xen to fault when it tries to reload?  If
all that happens is a domain_crash() then ok, but if Xen falls over with
a fatal fault, that should be avoided.


The TTBR*_EL1 are registers that can be set by the guest without any 
trap to the hypervisor. So they will not cause Xen to fault even writing 
to any reserved bit.




I agree. At the moment the only register I actually need access
through vm_event setting is PC so I'll just leave the other registers
out and document it in the vm_event header.


I am starting to be really annoyed with this kind of sentence. It is not 
difficult to get things correct from the beginning.


You either set/get them or do not expose them at all. But please avoid 
having half of an implementation just because your use case does not 
need it.


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH] mem_access: Use monitor_traps instead of mem_access_send_req

2016-07-28 Thread Andrew Cooper
On 28/07/2016 20:35, Tamas K Lengyel wrote:
> The two functions monitor_traps and mem_access_send_req duplicate
> some of the same functionality. The mem_access_send_req however leaves a
> lot of the standard vm_event fields to be filled by other functions.
>
> Since mem_access events go on the monitor ring in this patch we consolidate
> all paths to use monitor_traps to place events on the ring and to fill in
> the common parts of the requests.
>
> Signed-off-by: Tamas K Lengyel 
> ---
> Cc: Stefano Stabellini 
> Cc: Julien Grall 
> Cc: Jan Beulich 
> Cc: Andrew Cooper 
> Cc: Razvan Cojocaru 
> Cc: George Dunlap 

Common and x86 bits Reviewed-by: Andrew Cooper
, but a few suggestions.

> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d82349c..df898a3 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1642,12 +1642,41 @@ void __init setup_virt_paging(void)
>  smp_call_function(setup_virt_paging_one, (void *)val, 1);
>  }
>  
> +static int
> +__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec npfec,
> +  xenmem_access_t xma)
> +{
> +struct vcpu *v = current;
> +vm_event_request_t req = {};
> +bool_t sync = (xma == XENMEM_access_n2rwx) ? 0 : 1;
> +
> +req.reason = VM_EVENT_REASON_MEM_ACCESS;
> +
> +/* Send request to mem access subscriber */
> +req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
> +req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);

I see this is only code motion, but ~PAGE_MASK here instead of
open-coding it.
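
(As a quick standalone check of that suggestion: the PAGE_SHIFT/PAGE_MASK
definitions below are the usual 4K-page, Xen-style ones, assumed here purely
for illustration; both expressions yield the same in-page offset.)

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12                     /* assumed 4K pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))     /* mask keeping the frame bits */

int main(void)
{
    uint64_t gpa = 0x123456789abcULL;

    /* Open-coded form from the patch vs. the suggested ~PAGE_MASK form. */
    uint64_t off_open  = gpa & ((1UL << PAGE_SHIFT) - 1);
    uint64_t off_clean = gpa & ~PAGE_MASK;

    printf("%#llx %#llx\n", (unsigned long long)off_open,
           (unsigned long long)off_clean);   /* both print 0xabc */
    return 0;
}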

> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index daaee1d..688370d 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1846,11 +1846,12 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned 
> long gla,
>  }
>  }
>  
> -if ( p2m_mem_access_check(gpa, gla, npfec, &req_ptr) )
> -{
> +sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
> +
> +if ( !sync ) {

Please keep this brace on the newline (inline with the style), even if
it doesn't match the style of the else clause.

>  fall_through = 1;
>  } else {
> -/* Rights not promoted, vcpu paused, work here is done */
> +/* Rights not promoted (aka. sync event), work here is done 
> */
>  rc = 1;
>  goto out_put_gfn;
>  }
> @@ -1956,7 +1957,12 @@ out:
>  }
>  if ( req_ptr )
>  {
> -mem_access_send_req(currd, req_ptr);
> +if ( hvm_monitor_mem_access(curr, sync, req_ptr) < 0 )
> +{
> +/* Crash the domain */
> +rc = 0;
> +}

It is reasonable to omit the braces here.

> +
>  xfree(req_ptr);
>  }
>  return rc;
> diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
> index 7277c12..c7285c6 100644
> --- a/xen/arch/x86/hvm/monitor.c
> +++ b/xen/arch/x86/hvm/monitor.c
> @@ -152,6 +152,12 @@ int hvm_monitor_cpuid(unsigned long insn_length)
>  return monitor_traps(curr, 1, &req);
>  }
>  
> +int hvm_monitor_mem_access(struct vcpu* v, bool_t sync,

vcpu *v.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 2:41 PM, Andrew Cooper
 wrote:
> On 28/07/2016 21:36, Tamas K Lengyel wrote:
>> On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
>>  wrote:
>>> On 28/07/2016 21:05, Tamas K Lengyel wrote:
 Add support for getting/setting registers through vm_event on ARM.
 The set of registers can be expanded in the future to include other 
 registers
 as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and 
 CPSR.

 Signed-off-by: Tamas K Lengyel 
 ---
 Cc: Stefano Stabellini 
 Cc: Julien Grall 
 Cc: Razvan Cojocaru 
 Cc: Jan Beulich 
 Cc: Andrew Cooper 
>>> For the x86 and common bits, Reviewed-by: Andrew Cooper
>>> 
>>>
>>> However,
>>>
 +#include 
 +#include 
 +
 +void vm_event_fill_regs(vm_event_request_t *req)
 +{
 +const struct cpu_user_regs *regs = guest_cpu_user_regs();
 +
 +req->data.regs.arm.cpsr = regs->cpsr;
 +req->data.regs.arm.pc = regs->pc;
 +req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
 +req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
 +req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
 +}
 +
 +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
 +{
 +struct cpu_user_regs *regs = guest_cpu_user_regs();
 +
 +regs->cpsr = rsp->data.regs.arm.cpsr;
 +regs->pc = rsp->data.regs.arm.pc;
 +v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
 +v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
 +v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
>>> Not knowing anything about ARM, but this looks like it is missing some
>>> sanity/plausibility checks (to protect Xen against accidental clobbering
>>> from the vm_event listener), and some WRITE_SYSREG() to reload the new
>>> values (unless this is done unconditionally later, at which point you
>>> should at least leave a comment here saying so).
>>>
>> This function only ever gets called if the vm_event response
>> specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
>> clobbering is not possible.
>
> That isn't my point.  Are there any reserved bits in the registers
> themselves which could cause Xen to fault when it tries to reload?  If
> all that happens is a domain_crash() then ok, but if Xen falls over with
> a fatal fault, that should be avoided.

I agree. At the moment the only register I actually need to set
through vm_event is PC, so I'll just leave the other registers
out and document that in the vm_event header.

>
> (i.e. there should be no bit pattern a vm_event listener could ever set
> which causes a crash of the hypervisor itself)
>
>>  Also, using WRITE_SYSREG() is not safe at
>> this point because current != v.
>
> Ok, but how do these new values end up getting propagated into hardware?
>

AFAIK during scheduling the registers get loaded from this save state.
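
(For reference, a rough sketch of the context-switch path being referred to,
modelled on how ctxt_switch_to() in xen/arch/arm/domain.c restores the
per-vCPU fields; the exact code in the tree may differ slightly:)

static void ctxt_switch_to(struct vcpu *n)
{
    /* ... */
    /* The values stashed in v->arch by vm_event_set_registers() are
     * written back into the system registers here, so they take effect
     * the next time the vCPU is scheduled. */
    WRITE_SYSREG(n->arch.ttbcr, TCR_EL1);
    WRITE_SYSREG64(n->arch.ttbr0, TTBR0_EL1);
    WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
    /* PC/CPSR come from the restored struct cpu_user_regs on exit to guest. */
    /* ... */
}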

Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Andrew Cooper
On 28/07/2016 21:36, Tamas K Lengyel wrote:
> On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
>  wrote:
>> On 28/07/2016 21:05, Tamas K Lengyel wrote:
>>> Add support for getting/setting registers through vm_event on ARM.
>>> The set of registers can be expanded in the future to include other 
>>> registers
>>> as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and CPSR.
>>>
>>> Signed-off-by: Tamas K Lengyel 
>>> ---
>>> Cc: Stefano Stabellini 
>>> Cc: Julien Grall 
>>> Cc: Razvan Cojocaru 
>>> Cc: Jan Beulich 
>>> Cc: Andrew Cooper 
>> For the x86 and common bits, Reviewed-by: Andrew Cooper
>> 
>>
>> However,
>>
>>> +#include 
>>> +#include 
>>> +
>>> +void vm_event_fill_regs(vm_event_request_t *req)
>>> +{
>>> +const struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +
>>> +req->data.regs.arm.cpsr = regs->cpsr;
>>> +req->data.regs.arm.pc = regs->pc;
>>> +req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
>>> +req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
>>> +req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
>>> +}
>>> +
>>> +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
>>> +{
>>> +struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +
>>> +regs->cpsr = rsp->data.regs.arm.cpsr;
>>> +regs->pc = rsp->data.regs.arm.pc;
>>> +v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
>>> +v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
>>> +v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
>> Not knowing anything about ARM, but this looks like it is missing some
>> sanity/plausibility checks (to protect Xen against accidental clobbering
>> from the vm_event listener), and some WRITE_SYSREG() to reload the new
>> values (unless this is done unconditionally later, at which point you
>> should at least leave a comment here saying so).
>>
> This function only ever gets called if the vm_event response
> specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
> clobbering is not possible.

That isn't my point.  Are there any reserved bits in the registers
themselves which could cause Xen to fault when it tries to reload?  If
all that happens is a domain_crash() then ok, but if Xen falls over with
a fatal fault, that should be avoided.

(i.e. there should be no bit pattern a vm_event listener could ever set
which causes a crash of the hypervisor itself)

>  Also, using WRITE_SYSREG() is not safe at
> this point because current != v.

Ok, but how do these new values end up getting propagated into hardware?

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 2:38 PM, Julien Grall  wrote:
>
>
> On 28/07/2016 21:36, Tamas K Lengyel wrote:
>>
>> On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
>>  wrote:
>>>
>>> On 28/07/2016 21:05, Tamas K Lengyel wrote:

 Add support for getting/setting registers through vm_event on ARM.
 The set of registers can be expanded in the future to include other
 registers
 as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and
 CPSR.

 Signed-off-by: Tamas K Lengyel 
 ---
 Cc: Stefano Stabellini 
 Cc: Julien Grall 
 Cc: Razvan Cojocaru 
 Cc: Jan Beulich 
 Cc: Andrew Cooper 
>>>
>>>
>>> For the x86 and common bits, Reviewed-by: Andrew Cooper
>>> 
>>>
>>> However,
>>>
 +#include 
 +#include 
 +
 +void vm_event_fill_regs(vm_event_request_t *req)
 +{
 +const struct cpu_user_regs *regs = guest_cpu_user_regs();
 +
 +req->data.regs.arm.cpsr = regs->cpsr;
 +req->data.regs.arm.pc = regs->pc;
 +req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
 +req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
 +req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
 +}
 +
 +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
 +{
 +struct cpu_user_regs *regs = guest_cpu_user_regs();
 +
 +regs->cpsr = rsp->data.regs.arm.cpsr;
 +regs->pc = rsp->data.regs.arm.pc;
 +v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
 +v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
 +v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
>>>
>>>
>>> Not knowing anything about ARM, but this looks like it is missing some
>>> sanity/plausibility checks (to protect Xen against accidental clobbering
>>> from the vm_event listener), and some WRITE_SYSREG() to reload the new
>>> values (unless this is done unconditionally later, at which point you
>>> should at least leave a comment here saying so).
>>>
>>
>> This function only ever gets called if the vm_event response
>> specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
>> clobbering is not possible. Also, using WRITE_SYSREG() is not safe at
>> this point because current != v. However, I have another issue here
>> with regs which should actually be:
>
>
> What if the vCPU is running? If it cannot happen, please document and add an
> ASSERT/BUG_ON.

vCPU is guaranteed to be paused because we check
atomic_read(&v->vm_event_pause_count) before calling this. Adding an
ASSERT should be fine though.
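
Something along these lines, presumably (only a sketch of the documented
assumption, not the final patch):

void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
{
    /*
     * The caller (vm_event_resume) only invokes this when the vCPU is
     * paused, so the state modified below is not live in hardware.
     */
    ASSERT(atomic_read(&v->vm_event_pause_count));

    /* ... register updates as in the patch ... */
}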

Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Julien Grall



On 28/07/2016 21:36, Tamas K Lengyel wrote:

On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
 wrote:

On 28/07/2016 21:05, Tamas K Lengyel wrote:

Add support for getting/setting registers through vm_event on ARM.
The set of registers can be expanded in the future to include other registers
as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and CPSR.

Signed-off-by: Tamas K Lengyel 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Razvan Cojocaru 
Cc: Jan Beulich 
Cc: Andrew Cooper 


For the x86 and common bits, Reviewed-by: Andrew Cooper


However,


+#include 
+#include 
+
+void vm_event_fill_regs(vm_event_request_t *req)
+{
+const struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+req->data.regs.arm.cpsr = regs->cpsr;
+req->data.regs.arm.pc = regs->pc;
+req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
+req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
+req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
+}
+
+void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
+{
+struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+regs->cpsr = rsp->data.regs.arm.cpsr;
+regs->pc = rsp->data.regs.arm.pc;
+v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
+v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
+v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;


Not knowing anything about ARM, but this looks like it is missing some
sanity/plausibility checks (to protect Xen against accidental clobbering
from the vm_event listener), and some WRITE_SYSREG() to reload the new
values (unless this is done unconditionally later, at which point you
should at least leave a comment here saying so).



This function only ever gets called if the vm_event response
specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
clobbering is not possible. Also, using WRITE_SYSREG() is not safe at
this point because current != v. However, I have another issue here
with regs which should actually be:


What if the vCPU is running? If it cannot happen, please document and 
add an ASSERT/BUG_ON.




struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;

I'll fix that shortly.

Thanks,
Tamas



--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] mem_access: Use monitor_traps instead of mem_access_send_req

2016-07-28 Thread Julien Grall

Hello Tamas,

On 28/07/2016 20:35, Tamas K Lengyel wrote:

The two functions monitor_traps and mem_access_send_req duplicate
some of the same functionality. The mem_access_send_req however leaves a
lot of the standard vm_event fields to be filled by other functions.

Since mem_access events go on the monitor ring in this patch we consolidate
all paths to use monitor_traps to place events on the ring and to fill in
the common parts of the requests.

Signed-off-by: Tamas K Lengyel 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Razvan Cojocaru 
Cc: George Dunlap 
---
 xen/arch/arm/p2m.c| 69 +++
 xen/arch/x86/hvm/hvm.c| 16 ++---
 xen/arch/x86/hvm/monitor.c|  6 
 xen/arch/x86/mm/p2m.c | 24 ++
 xen/common/mem_access.c   | 11 ---
 xen/include/asm-x86/hvm/monitor.h |  2 ++
 xen/include/asm-x86/p2m.h | 13 +---
 xen/include/xen/mem_access.h  |  7 
 8 files changed, 63 insertions(+), 85 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d82349c..df898a3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -5,7 +5,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -1642,12 +1642,41 @@ void __init setup_virt_paging(void)
 smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }

+static int
+__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec npfec,
+  xenmem_access_t xma)
+{
+struct vcpu *v = current;
+vm_event_request_t req = {};
+bool_t sync = (xma == XENMEM_access_n2rwx) ? 0 : 1;
+
+req.reason = VM_EVENT_REASON_MEM_ACCESS;
+
+/* Send request to mem access subscriber */
+req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
+req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+if ( npfec.gla_valid )
+{
+req.u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
+req.u.mem_access.gla = gla;
+
+if ( npfec.kind == npfec_kind_with_gla )
+req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
+else if ( npfec.kind == npfec_kind_in_gpt )
+req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
+}
+req.u.mem_access.flags |= npfec.read_access? MEM_ACCESS_R : 0;
+req.u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
+req.u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
+
> +return monitor_traps(v, sync, &req);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
 int rc;
 bool_t violation;
 xenmem_access_t xma;
-vm_event_request_t *req;
 struct vcpu *v = current;
 struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);

@@ -1734,40 +1763,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, 
const struct npfec npfec)
 return false;
 }

-req = xzalloc(vm_event_request_t);
-if ( req )
-{
-req->reason = VM_EVENT_REASON_MEM_ACCESS;
-
-/* Pause the current VCPU */
-if ( xma != XENMEM_access_n2rwx )
-req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
-
-/* Send request to mem access subscriber */
-req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
-req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
-if ( npfec.gla_valid )
-{
-req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
-req->u.mem_access.gla = gla;
-
-if ( npfec.kind == npfec_kind_with_gla )
-req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
-else if ( npfec.kind == npfec_kind_in_gpt )
-req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
-}
-req->u.mem_access.flags |= npfec.read_access? MEM_ACCESS_R : 0;
-req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
-req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
-req->vcpu_id = v->vcpu_id;
-
-mem_access_send_req(v->domain, req);
-xfree(req);
-}
-
-/* Pause the current VCPU */
-if ( xma != XENMEM_access_n2rwx )
-vm_event_vcpu_pause(v);
+if ( __p2m_mem_access_send_req(gpa, gla, npfec, xma) < 0 )
+domain_crash(v->domain);


This patch is doing more than is claimed in the commit message.

In general, moving the code and introducing changes within the same 
patch should really be avoided. So please split it in 2 patches.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 2:26 PM, Andrew Cooper
 wrote:
> On 28/07/2016 21:05, Tamas K Lengyel wrote:
>> Add support for getting/setting registers through vm_event on ARM.
>> The set of registers can be expanded in the future to include other registers
>> as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and CPSR.
>>
>> Signed-off-by: Tamas K Lengyel 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> Cc: Razvan Cojocaru 
>> Cc: Jan Beulich 
>> Cc: Andrew Cooper 
>
> For the x86 and common bits, Reviewed-by: Andrew Cooper
> 
>
> However,
>
>> +#include 
>> +#include 
>> +
>> +void vm_event_fill_regs(vm_event_request_t *req)
>> +{
>> +const struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +
>> +req->data.regs.arm.cpsr = regs->cpsr;
>> +req->data.regs.arm.pc = regs->pc;
>> +req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
>> +req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
>> +req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
>> +}
>> +
>> +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
>> +{
>> +struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +
>> +regs->cpsr = rsp->data.regs.arm.cpsr;
>> +regs->pc = rsp->data.regs.arm.pc;
>> +v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
>> +v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
>> +v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
>
> Not knowing anything about ARM, but this looks like it is missing some
> sanity/plausibility checks (to protect Xen against accidental clobbering
> from the vm_event listener), and some WRITE_SYSREG() to reload the new
> values (unless this is done unconditionally later, at which point you
> should at least leave a comment here saying so).
>

This function only ever gets called if the vm_event response
specifically has the VM_EVENT_FLAG_SET_REGISTERS set, so accidental
clobbering is not possible. Also, using WRITE_SYSREG() is not safe at
this point because current != v. However, I have another issue here
with regs which should actually be:

struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;

I'll fix that shortly.
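
(For anyone following along, a sketch of the dispatch site referred to above,
modelled on the common vm_event_resume() path; the surrounding code in the
tree may differ:)

if ( atomic_read(&v->vm_event_pause_count) )
{
    if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
        vm_event_set_registers(v, &rsp);
    /* ... handle the remaining response flags, then unpause ... */
}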

Thanks,
Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Andrew Cooper
On 28/07/2016 21:05, Tamas K Lengyel wrote:
> Add support for getting/setting registers through vm_event on ARM.
> The set of registers can be expanded in the future to include other registers
> as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and CPSR.
>
> Signed-off-by: Tamas K Lengyel 
> ---
> Cc: Stefano Stabellini 
> Cc: Julien Grall 
> Cc: Razvan Cojocaru 
> Cc: Jan Beulich 
> Cc: Andrew Cooper 

For the x86 and common bits, Reviewed-by: Andrew Cooper


However,

> +#include 
> +#include 
> +
> +void vm_event_fill_regs(vm_event_request_t *req)
> +{
> +const struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> +req->data.regs.arm.cpsr = regs->cpsr;
> +req->data.regs.arm.pc = regs->pc;
> +req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
> +req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
> +req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
> +}
> +
> +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> +regs->cpsr = rsp->data.regs.arm.cpsr;
> +regs->pc = rsp->data.regs.arm.pc;
> +v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
> +v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
> +v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;

Not knowing anything about ARM, but this looks like it is missing some
sanity/plausibility checks (to protect Xen against accidental clobbering
from the vm_event listener), and some WRITE_SYSREG() to reload the new
values (unless this is done unconditionally later, at which point you
should at least leave a comment here saying so).

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH] arm/vm_event: get/set registers

2016-07-28 Thread Tamas K Lengyel
Add support for getting/setting registers through vm_event on ARM.
The set of registers can be expanded in the future to include other registers
as well if necessary but for now it is limited to TTB/CR/R0/R1, PC and CPSR.

Signed-off-by: Tamas K Lengyel 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Razvan Cojocaru 
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
 xen/arch/arm/Makefile  |  1 +
 xen/arch/arm/vm_event.c| 53 ++
 xen/include/asm-arm/vm_event.h | 11 -
 xen/include/asm-x86/vm_event.h |  4 
 xen/include/public/vm_event.h  | 14 +--
 xen/include/xen/vm_event.h |  3 +++
 6 files changed, 69 insertions(+), 17 deletions(-)
 create mode 100644 xen/arch/arm/vm_event.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index b264ed4..5752830 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -41,6 +41,7 @@ obj-y += traps.o
 obj-y += vgic.o
 obj-y += vgic-v2.o
 obj-$(CONFIG_ARM_64) += vgic-v3.o
+obj-y += vm_event.o
 obj-y += vtimer.o
 obj-y += vpsci.o
 obj-y += vuart.o
diff --git a/xen/arch/arm/vm_event.c b/xen/arch/arm/vm_event.c
new file mode 100644
index 000..5e4bee1
--- /dev/null
+++ b/xen/arch/arm/vm_event.c
@@ -0,0 +1,53 @@
+/*
+ * arch/arm/vm_event.c
+ *
+ * Architecture-specific vm_event handling routines
+ *
+ * Copyright (c) 2016 Tamas K Lengyel (tamas.leng...@zentific.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see .
+ */
+
+#include 
+#include 
+
+void vm_event_fill_regs(vm_event_request_t *req)
+{
+const struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+req->data.regs.arm.cpsr = regs->cpsr;
+req->data.regs.arm.pc = regs->pc;
+req->data.regs.arm.ttbcr = READ_SYSREG(TCR_EL1);
+req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
+req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
+}
+
+void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
+{
+struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+regs->cpsr = rsp->data.regs.arm.cpsr;
+regs->pc = rsp->data.regs.arm.pc;
+v->arch.ttbcr = rsp->data.regs.arm.ttbcr;
+v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
+v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index ccc4b60..9482636 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -45,15 +45,4 @@ void vm_event_register_write_resume(struct vcpu *v, 
vm_event_response_t *rsp)
 /* Not supported on ARM. */
 }
 
-static inline
-void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
-{
-/* Not supported on ARM. */
-}
-
-static inline void vm_event_fill_regs(vm_event_request_t *req)
-{
-/* Not supported on ARM. */
-}
-
 #endif /* __ASM_ARM_VM_EVENT_H__ */
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 7e6adff..294def6 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -39,8 +39,4 @@ void vm_event_toggle_singlestep(struct domain *d, struct vcpu 
*v);
 
 void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
 
-void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp);
-
-void vm_event_fill_regs(vm_event_request_t *req);
-
 #endif /* __ASM_X86_VM_EVENT_H__ */
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 64e6857..1e3195d 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -132,8 +132,8 @@
 #define VM_EVENT_X86_XCR0   3
 
 /*
- * Using a custom struct (not hvm_hw_cpu) so as to not fill
- * the vm_event ring buffer too quickly.
+ * Using custom vCPU structs (i.e. not hvm_hw_cpu) for both x86 and ARM
+ * so as to not fill the vm_event ring buffer too quickly.
  */
 struct vm_event_regs_x86 {
 uint64_t rax;
@@ -171,6 +171,15 @@ struct vm_event_regs_x86 {
 uint32_t _pad;
 };
 
+struct vm_event_regs_arm {
+uint64_t ttbr0;
+uint64_t ttbr1;
+uint64_t ttbcr;
+uint64_t pc;
+uint32_t cpsr;
+uint32_t _pad;
+};
+
 /*
  * mem_access flag definitions
  *
@@ -273,6 +282,7 @@ typedef struct vm_event_st {
 union {
 

[Xen-devel] [ovmf baseline-only test] 66856: trouble: blocked/broken/pass

2016-07-28 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 66856 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/66856/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm3 host-install(3) broken REGR. vs. 66812
 build-i386-pvops  3 host-install(3) broken REGR. vs. 66812
 build-i3863 host-install(3) broken REGR. vs. 66812

Tests which did not succeed, but are not blocking:
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d
baseline version:
 ovmf 136c648f5985a725fbd399085c16932a4c2f65d7

Last test of basis    66812  2016-07-26 10:18:39 Z    2 days
Testing same since    66856  2016-07-28 18:16:38 Z    0 days    1 attempts


People who touched revisions under test:
  Hao Wu 
  Laszlo Ersek 
  Ruiyu Ni 
  Satya Yarlagadda 
  Thomas Palmer 
  Yarlagadda, Satya P 
  Yonghong Zhu 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   broken  
 build-amd64  pass
 build-i386   broken  
 build-amd64-libvirt  pass
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

broken-step build-i386-xsm host-install(3)
broken-step build-i386-pvops host-install(3)
broken-step build-i386 host-install(3)

Push not applicable.


commit 39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d
Author: Thomas Palmer 
Date:   Wed Jul 27 01:48:15 2016 -0500

OvmfPkg/Sec: Support SECTION2 DXEFV types

Support down-stream projects that require large DXEFV sizes greater
than 16MB by handling SECTION2 common headers. These are already
created by the build tools when necessary.

Use IS_SECTION2 and SECTION2_SIZE macros to calculate accurate image
sizes when appropriate.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Thomas Palmer 
Reviewed-by: Laszlo Ersek 
Regression-tested-by: Laszlo Ersek 
[ler...@redhat.com: fix NB->MB typo in commit message]
Signed-off-by: Laszlo Ersek 

commit 5e443e376928de02ee5af8f151ad315e48372ff2
Author: Thomas Palmer 
Date:   Wed Jul 27 01:48:14 2016 -0500

OvmfPkg/Sec: Use EFI_COMMON_SECTION_HEADER to avoid casts

Drop superfluous casts. There is no change in behavior because
EFI_FIRMWARE_VOLUME_IMAGE_SECTION is just a typedef of
EFI_COMMON_SECTION_HEADER.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Thomas Palmer 
Reviewed-by: Laszlo Ersek 
Regression-tested-by: Laszlo Ersek 

commit c8ecaaf5e3d3f9b81d73f329501d3fa39739bd41
Author: Ruiyu Ni 
Date:   Tue Jul 26 21:07:19 2016 +0800

PcAtChipsetPkg/PcRtc: Fix a NULL pointer deference issue

When a platform which doesn't support ACPI 1.0 (no XSDT) and FADT
is not produced at the first time when ACPI table is published,
GetCenturyRtcAddress() unconditionally deference Rsdp->RsdtAddress
but Rsdp->RsdtAddress is 0 in this case.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni 
Reviewed-by: Eric Dong 

commit 96fcfdbfb068670b849a850bbed28ecad913af80
Author: Ruiyu Ni 
Date:   Tue Jul 26 18:20:05 2016 +0800

PcAtChipsetPkg/PcRtc: Fix a stack corruption issue

In 32bit environment, ScanTableInSDT() incorrectly copies 8 bytes
of data to 4-byte 

Re: [Xen-devel] OVMF very slow on AMD

2016-07-28 Thread Boris Ostrovsky
On 07/28/2016 03:44 PM, Andrew Cooper wrote:
 As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
 but no corresponding SVM code. Could it be that we need to set gPAT, for
 example?
>>> A better approach would be to find out why ovmf insists on disabling
>>> caches at all.  Even if we optimise the non-PCI-device case in the
>>> hypervisor, a passthrough case will still run like treacle if caches are
>>> disabled.
>> True, we should understand why OVMF does this. But I think we also need
>> to understand what makes Intel run faster. Or is it already clear from
>> vmx_handle_cd()?
> Wow this code is hard to follow :(
>
> handle_cd() is only called when an IOMMU is enabled and the domain in
> question has access to real ioports or PCI devices.
>
> However, I really can't spot anything that ends up eliding the
> cache-disable setting even for Intel.  This clearly needs further
> investigation.

So as an easy start perhaps Anthony could check whether this call is
made with his guest running on Intel.

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] OVMF very slow on AMD

2016-07-28 Thread Andrew Cooper
On 28/07/16 20:25, Boris Ostrovsky wrote:
> On 07/28/2016 11:51 AM, Andrew Cooper wrote:
>> On 28/07/16 16:17, Boris Ostrovsky wrote:
>>> On 07/28/2016 06:54 AM, Andrew Cooper wrote:
 On 28/07/16 11:43, George Dunlap wrote:
> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>  wrote:
>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
 On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
> I can try to describe how OVMF is setting up the memory.
 From the start of the day:
 setup gdt
 cr0 = 0x4023
>>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>>> suspect that this would affect both Intel and AMD though.
>>>
>>> Can you try clearing this bit?
>> That works...
>>
>> I wonder why it does not appear to affect Intel or KVM.
> Are those bits hard-coded, or are they set based on the hardware
> that's available?
>
> Is it possible that the particular combination of CPUID bits presented
> by Xen on AMD are causing a different value to be written?
>
> Or is it possible that the cache disable bit is being ignored (by Xen)
> on Intel and KVM?
 If a guest has no hardware, then it has no reason to actually disable
 caches.  We should have logic to catch this and avoid actually disabling
 caches when the guest asks for it.
>>> Is this really safe to do? Can't a guest decide to disable cache to
>>> avoid having to deal with coherency in SW?
>> What SW coherency issue do you think can be solved with disabling the cache?
>>
>> x86 has strict ordering of writes and reads with respect to each other. 
>> The only case which can be out of order is reads promoted ahead of
>> unaliasing writes.
> Right, that was not a good example.
>
>>> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
>>> but no corresponding SVM code. Could it be that we need to set gPAT, for
>>> example?
>> A better approach would be to find out why ovmf insists on disabling
>> caches at all.  Even if we optimise the non-PCI-device case in the
>> hypervisor, a passthrough case will still run like treacle if caches are
>> disabled.
> True, we should understand why OVMF does this. But I think we also need
> to understand what makes Intel run faster. Or is it already clear from
> vmx_handle_cd()?

Wow this code is hard to follow :(

handle_cd() is only called when an IOMMU is enabled and the domain in
question has access to real ioports or PCI devices.

However, I really can't spot anything that ends up eliding the
cache-disable setting even for Intel.  This clearly needs further
investigation.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 3/6] xen/arm: Use check_workaround to handle the erratum 766422

2016-07-28 Thread Stefano Stabellini
On Wed, 27 Jul 2016, Julien Grall wrote:
> Currently, Xen is accessing the stored MIDR every time it has to check
> whether the processor is affected by the erratum 766422.
> 
> This could take advantage of the new capability bitfields to detect
> whether the processor is affected at boot time.
> 
> With this patch, the number of instructions to check the erratum is
> going down from ~13 (including 2 loads and a co-processor access) to
> ~6 instructions (including 1 load).
> 
> Signed-off-by: Julien Grall 

Reviewed-by: Stefano Stabellini 


> ---
> Changes in v2:
> - Update the commit message
> ---
>  xen/arch/arm/cpuerrata.c  | 6 ++
>  xen/arch/arm/traps.c  | 3 ++-
>  xen/include/asm-arm/arm32/processor.h | 4 
>  xen/include/asm-arm/arm64/processor.h | 2 --
>  xen/include/asm-arm/cpuerrata.h   | 2 ++
>  xen/include/asm-arm/cpufeature.h  | 3 ++-
>  xen/include/asm-arm/processor.h   | 2 ++
>  7 files changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 3ac97b3..748e02e 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -17,6 +17,12 @@ is_affected_midr_range(const struct arm_cpu_capabilities 
> *entry)
>  }
>  
>  static const struct arm_cpu_capabilities arm_errata[] = {
> +{
> +/* Cortex-A15 r0p4 */
> +.desc = "ARM erratum 766422",
> +.capability = ARM32_WORKAROUND_766422,
> +MIDR_RANGE(MIDR_CORTEX_A15, 0x04, 0x04),
> +},
>  #if defined(CONFIG_ARM64_ERRATUM_827319) || \
>  defined(CONFIG_ARM64_ERRATUM_824069)
>  {
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index b34c46f..28982a4 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -46,6 +46,7 @@
>  #include "vtimer.h"
>  #include 
>  #include 
> +#include 
>  
>  /* The base of the stack must always be double-word aligned, which means
>   * that both the kernel half of struct cpu_user_regs (which is pushed in
> @@ -2481,7 +2482,7 @@ static void do_trap_data_abort_guest(struct 
> cpu_user_regs *regs,
>   * Erratum 766422: Thumb store translation fault to Hypervisor may
>   * not have correct HSR Rt value.
>   */
> -if ( cpu_has_erratum_766422() && (regs->cpsr & PSR_THUMB) && dabt.write )
> +if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) && dabt.write 
> )
>  {
>  rc = decode_instruction(regs, );
>  rc = decode_instruction(regs, &info.dabt);
> diff --git a/xen/include/asm-arm/arm32/processor.h 
> b/xen/include/asm-arm/arm32/processor.h
> index f41644d..11366bb 100644
> --- a/xen/include/asm-arm/arm32/processor.h
> +++ b/xen/include/asm-arm/arm32/processor.h
> @@ -115,10 +115,6 @@ struct cpu_user_regs
>  #define READ_SYSREG(R...)   READ_SYSREG32(R)
>  #define WRITE_SYSREG(V, R...)   WRITE_SYSREG32(V, R)
>  
> -/* Erratum 766422: only Cortex A15 r0p4 is affected */
> -#define cpu_has_erratum_766422() \
> -(unlikely(current_cpu_data.midr.bits == 0x410fc0f4))
> -
>  #endif /* __ASSEMBLY__ */
>  
>  #endif /* __ASM_ARM_ARM32_PROCESSOR_H */
> diff --git a/xen/include/asm-arm/arm64/processor.h 
> b/xen/include/asm-arm/arm64/processor.h
> index fef35a5..b0726ff 100644
> --- a/xen/include/asm-arm/arm64/processor.h
> +++ b/xen/include/asm-arm/arm64/processor.h
> @@ -111,8 +111,6 @@ struct cpu_user_regs
>  #define READ_SYSREG(name) READ_SYSREG64(name)
>  #define WRITE_SYSREG(v, name) WRITE_SYSREG64(v, name)
>  
> -#define cpu_has_erratum_766422() 0
> -
>  #endif /* __ASSEMBLY__ */
>  
>  #endif /* __ASM_ARM_ARM64_PROCESSOR_H */
> diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
> index 2982a92..5880e77 100644
> --- a/xen/include/asm-arm/cpuerrata.h
> +++ b/xen/include/asm-arm/cpuerrata.h
> @@ -40,6 +40,8 @@ static inline bool_t check_workaround_##erratum(void)   
> \
>  
>  #endif
>  
> +CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32)
> +
>  #undef CHECK_WORKAROUND_HELPER
>  
>  #endif /* __ARM_CPUERRATA_H__ */
> diff --git a/xen/include/asm-arm/cpufeature.h 
> b/xen/include/asm-arm/cpufeature.h
> index 78e2263..ac6eaf0 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -37,8 +37,9 @@
>  
>  #define ARM64_WORKAROUND_CLEAN_CACHE0
>  #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE1
> +#define ARM32_WORKAROUND_766422 2
>  
> -#define ARM_NCAPS   2
> +#define ARM_NCAPS   3
>  
>  #ifndef __ASSEMBLY__
>  
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index 1708253..15bf890 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -46,9 +46,11 @@
>  
>  #define ARM_CPU_IMP_ARM 0x41
>  
> +#define ARM_CPU_PART_CORTEX_A15 0xC0F
>  #define ARM_CPU_PART_CORTEX_A53 0xD03
>  #define ARM_CPU_PART_CORTEX_A57 0xD07
>  
> +#define 

Re: [Xen-devel] [PATCH v2 2/6] xen/arm: Provide macros to help creating workaround helpers

2016-07-28 Thread Stefano Stabellini
On Wed, 27 Jul 2016, Julien Grall wrote:
> Workarounds may require executing a different path when the platform
> is affected by the associated erratum. Furthermore, this may need to
> be called from common code.
> 
> To avoid too much intrusion/overhead, the workaround helpers need to
> be a nop on architectures which will never have the workaround, and have
> to be quick at checking whether the platform requires it.
> 
> The alternative framework is used to transform the check in a single
> instruction. When the framework is not available, the helper will have
> ~6 instructions including 1 instruction load.
> 
> The macro will create a handler called check_workaround_x, with x being
> the erratum number.
> 
> For instance, the line below will create a workaround helper for
> erratum #424242 which is enabled when the capability
> ARM64_WORKAROUND_424242 is set and only available for ARM64:
> 
> CHECK_WORKAROUND_HELPER(424242, ARM64_WORKAROUND_42424242, CONFIG_ARM64)
> 
> Signed-off-by: Julien Grall 
> Reviewed-by: Konrad Rzeszutek Wilk 

Acked-by: Stefano Stabellini 


> ---
> Changes in v2:
> - Add Konrad's reviewed-by
> ---
>  xen/include/asm-arm/cpuerrata.h | 39 +++
>  1 file changed, 39 insertions(+)
> 
> diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
> index c495ee5..2982a92 100644
> --- a/xen/include/asm-arm/cpuerrata.h
> +++ b/xen/include/asm-arm/cpuerrata.h
> @@ -1,8 +1,47 @@
>  #ifndef __ARM_CPUERRATA_H__
>  #define __ARM_CPUERRATA_H__
>  
> +#include 
> +#include 
> +#include 
> +
>  void check_local_cpu_errata(void);
>  
> +#ifdef CONFIG_ALTERNATIVE
> +
> +#define CHECK_WORKAROUND_HELPER(erratum, feature, arch) \
> +static inline bool_t check_workaround_##erratum(void)   \
> +{   \
> +if ( !IS_ENABLED(arch) )\
> +return 0;   \
> +else\
> +{   \
> +bool_t ret; \
> +\
> +asm volatile (ALTERNATIVE("mov %0, #0", \
> +  "mov %0, #1", \
> +  feature)  \
> +  : "=r" (ret));\
> +\
> +return unlikely(ret);   \
> +}   \
> +}
> +
> +#else /* CONFIG_ALTERNATIVE */
> +
> +#define CHECK_WORKAROUND_HELPER(erratum, feature, arch) \
> +static inline bool_t check_workaround_##erratum(void)   \
> +{   \
> +if ( !IS_ENABLED(arch) )\
> +return 0;   \
> +else\
> +return unlikely(cpus_have_cap(feature));\
> +}
> +
> +#endif
> +
> +#undef CHECK_WORKAROUND_HELPER
> +
>  #endif /* __ARM_CPUERRATA_H__ */
>  /*
>   * Local variables:
> -- 
> 1.9.1
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 1/6] xen/arm: traps: Simplify the switch in do_trap_*_abort_guest

2016-07-28 Thread Stefano Stabellini
On Wed, 27 Jul 2016, Julien Grall wrote:
> The fault status values we care about are in the form xx where xx is the lookup
> level that gave the fault. We can simplify the code by masking the 2 least
> significant bits.
> 
> Signed-off-by: Julien Grall 

Reviewed-by: Stefano Stabellini 


> ---
> The switch has not been replaced by a simple if because more cases
> will be added in follow-up patches.
> 
> Changes in v2:
> - Fix typos in the commit message
> ---
>  xen/arch/arm/traps.c | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 2d05936..b34c46f 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2387,9 +2387,9 @@ static void do_trap_instr_abort_guest(struct 
> cpu_user_regs *regs,
>  int rc;
>  register_t gva = READ_SYSREG(FAR_EL2);
>  
> -switch ( hsr.iabt.ifsc & 0x3f )
> +switch ( hsr.iabt.ifsc & ~FSC_LL_MASK )
>  {
> -case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
> +case FSC_FLT_PERM:
>  {
>  paddr_t gpa;
>  const struct npfec npfec = {
> @@ -2450,9 +2450,9 @@ static void do_trap_data_abort_guest(struct 
> cpu_user_regs *regs,
>  return; /* Try again */
>  }
>  
> -switch ( dabt.dfsc & 0x3f )
> +switch ( dabt.dfsc & ~FSC_LL_MASK )
>  {
> -case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
> +case FSC_FLT_PERM:
>  {
>  const struct npfec npfec = {
>  .read_access = !dabt.write,
> -- 
> 1.9.1
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH] mem_access: Use monitor_traps instead of mem_access_send_req

2016-07-28 Thread Tamas K Lengyel
The two functions monitor_traps and mem_access_send_req duplicate
some of the same functionality. The mem_access_send_req however leaves a
lot of the standard vm_event fields to be filled by other functions.

Since mem_access events go on the monitor ring in this patch we consolidate
all paths to use monitor_traps to place events on the ring and to fill in
the common parts of the requests.

Signed-off-by: Tamas K Lengyel 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Razvan Cojocaru 
Cc: George Dunlap 
---
 xen/arch/arm/p2m.c| 69 +++
 xen/arch/x86/hvm/hvm.c| 16 ++---
 xen/arch/x86/hvm/monitor.c|  6 
 xen/arch/x86/mm/p2m.c | 24 ++
 xen/common/mem_access.c   | 11 ---
 xen/include/asm-x86/hvm/monitor.h |  2 ++
 xen/include/asm-x86/p2m.h | 13 +---
 xen/include/xen/mem_access.h  |  7 
 8 files changed, 63 insertions(+), 85 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d82349c..df898a3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -5,7 +5,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -1642,12 +1642,41 @@ void __init setup_virt_paging(void)
 smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+static int
+__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec npfec,
+  xenmem_access_t xma)
+{
+struct vcpu *v = current;
+vm_event_request_t req = {};
+bool_t sync = (xma == XENMEM_access_n2rwx) ? 0 : 1;
+
+req.reason = VM_EVENT_REASON_MEM_ACCESS;
+
+/* Send request to mem access subscriber */
+req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
+req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+if ( npfec.gla_valid )
+{
+req.u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
+req.u.mem_access.gla = gla;
+
+if ( npfec.kind == npfec_kind_with_gla )
+req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
+else if ( npfec.kind == npfec_kind_in_gpt )
+req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
+}
+req.u.mem_access.flags |= npfec.read_access? MEM_ACCESS_R : 0;
+req.u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
+req.u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
+
+return monitor_traps(v, sync, &req);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
 int rc;
 bool_t violation;
 xenmem_access_t xma;
-vm_event_request_t *req;
 struct vcpu *v = current;
 struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
 
@@ -1734,40 +1763,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, 
const struct npfec npfec)
 return false;
 }
 
-req = xzalloc(vm_event_request_t);
-if ( req )
-{
-req->reason = VM_EVENT_REASON_MEM_ACCESS;
-
-/* Pause the current VCPU */
-if ( xma != XENMEM_access_n2rwx )
-req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
-
-/* Send request to mem access subscriber */
-req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
-req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
-if ( npfec.gla_valid )
-{
-req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
-req->u.mem_access.gla = gla;
-
-if ( npfec.kind == npfec_kind_with_gla )
-req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
-else if ( npfec.kind == npfec_kind_in_gpt )
-req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
-}
-req->u.mem_access.flags |= npfec.read_access? MEM_ACCESS_R : 0;
-req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
-req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
-req->vcpu_id = v->vcpu_id;
-
-mem_access_send_req(v->domain, req);
-xfree(req);
-}
-
-/* Pause the current VCPU */
-if ( xma != XENMEM_access_n2rwx )
-vm_event_vcpu_pause(v);
+if ( __p2m_mem_access_send_req(gpa, gla, npfec, xma) < 0 )
+domain_crash(v->domain);
 
 return false;
 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index daaee1d..688370d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1707,7 +1707,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long 
gla,
 int rc, fall_through = 0, paged = 0;
 int sharing_enomem = 0;
 vm_event_request_t *req_ptr = NULL;
-bool_t ap2m_active;
+bool_t ap2m_active, sync = 0;
 
 /* On Nested Virtualization, walk the guest page table.
  * If this succeeds, all is fine.
@@ -1846,11 

Re: [Xen-devel] OVMF very slow on AMD

2016-07-28 Thread Boris Ostrovsky
On 07/28/2016 11:51 AM, Andrew Cooper wrote:
> On 28/07/16 16:17, Boris Ostrovsky wrote:
>> On 07/28/2016 06:54 AM, Andrew Cooper wrote:
>>> On 28/07/16 11:43, George Dunlap wrote:
 On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
  wrote:
> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
 I can try to describe how OVMF is setting up the memory.
>>> From the start of the day:
>>> setup gdt
>>> cr0 = 0x4023
>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>> suspect that this would affect both Intel and AMD though.
>>
>> Can you try clearing this bit?
> That works...
>
> I wonder why it does not appear to affect Intel or KVM.
 Are those bits hard-coded, or are they set based on the hardware
 that's available?

 Is it possible that the particular combination of CPUID bits presented
 by Xen on AMD are causing a different value to be written?

 Or is it possible that the cache disable bit is being ignored (by Xen)
 on Intel and KVM?
>>> If a guest has no hardware, then it has no reason to actually disable
>>> caches.  We should have logic to catch this and avoid actually disabling
>>> caches when the guest asks for it.
>> Is this really safe to do? Can't a guest decide to disable cache to
>> avoid having to deal with coherency in SW?
> What SW coherency issue do you think can be solved with disabling the cache?
>
> x86 has strict ordering of writes and reads with respect to each other. 
> The only case which can be out of order is reads promoted ahead of
> unaliasing writes.

Right, that was not a good example.

>
>> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
>> but no corresponding SVM code. Could it be that we need to set gPAT, for
>> example?
> A better approach would be to find out why ovmf insists on disabling
> caches at all.  Even if we optimise the non-PCI-device case in the
> hypervisor, a passthrough case will still run like treacle if caches are
> disabled.

True, we should understand why OVMF does this. But I think we also need
to understand what makes Intel run faster. Or is it already clear from
vmx_handle_cd()?

-boris



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCHv1] xen/privcmd: add IOCTL_PRIVCMD_RESTRICT_DOMID

2016-07-28 Thread Boris Ostrovsky
On 07/28/2016 12:13 PM, David Vrabel wrote:
>
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index df2e6f7..513d1c5 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -43,6 +43,18 @@ MODULE_LICENSE("GPL");
>  
>  #define PRIV_VMA_LOCKED ((void *)1)
>  
> +#define UNRESTRICTED_DOMID ((domid_t)-1)

This can probably go into a header file since you've used the same macro
for event channel restricted domains.

> +
> +struct privcmd_data {
> + domid_t restrict_domid;
> +};
> +
> +static bool privcmd_is_allowed(struct privcmd_data *priv, domid_t domid)
> +{
> + return priv->restrict_domid == UNRESTRICTED_DOMID
> + || priv->restrict_domid == domid;
> +}

I also wonder whether this can be made useful to event channels (and
possibly other operations we might want to try restricting in the future).

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.7.0 boot PANIC on kernel 4.7.0-4 + UEFI ?

2016-07-28 Thread Daniel Kiper
On Thu, Jul 28, 2016 at 11:25:42AM -0700, li...@ssl-mail.com wrote:
> > Hmmm Could you provide full console dump from Xen and Linux kernel?
>
> Will serial console output with these options
>
>   kernel: earlyprintk=xen,keep debug loglevel=8
>   hypervisor: loglvl=all guest_loglvl=all sync_console console_to_ring
>
> do?

I think that you should add to the above-mentioned hypervisor command line
at least "com1=115200,8n1 console=com1". Of course this is an example.
You should find your serial port details and configure it properly.
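
Put together, a grub.cfg entry could look roughly like the sketch below. The
paths, kernel version, serial device and baud rate are only placeholders to
adapt, and on a UEFI setup multiboot2/module2 may be needed instead:

menuentry 'Xen (serial debug)' {
    multiboot /boot/xen.gz loglvl=all guest_loglvl=all sync_console \
        console_to_ring com1=115200,8n1 console=com1,vga
    module /boot/vmlinuz-4.7.0-4 root=/dev/sda1 ro console=hvc0 \
        earlyprintk=xen,keep debug loglevel=8
    module /boot/initrd.img-4.7.0-4
}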

Daniel

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.7.0 boot PANIC on kernel 4.7.0-4 + UEFI ?

2016-07-28 Thread lists
On 07/28/2016 11:25 AM, li...@ssl-mail.com wrote:
> Hmmm Could you provide full console dump from Xen and Linux kernel?
> 
> Will serial console output with these options
> 
>   kernel: earlyprintk=xen,keep debug loglevel=8
>   hypervisor: loglvl=all guest_loglvl=all sync_console console_to_ring
> 
> do?

I'll just assume it does.

So full console output from boot -> crash now doesn't look any different than 

https://lists.xen.org/archives/html/xen-devel/2016-07/msg02814.html

On 07/27/2016 08:50 AM, Andrew Cooper wrote:
>> For the Linux crash, can you boot Linux with "earlyprintk=xen" and see
>> if that provides more help as to what went wrong?
>
> Here's serial console output with grub2 log parameters included as
>
> kernel: earlyprintk=xen,keep debug loglevel=8
> hypervisor: loglvl=all guest_loglvl=all sync_console console_to_ring


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [DRAFT v3] XenSock protocol design document

2016-07-28 Thread Stefano Stabellini
On Thu, 28 Jul 2016, Sander Eikelenboom wrote:
> Thursday, July 28, 2016, 8:11:53 PM, you wrote:
> 
> > ping
> 
> Hi Stefano,
> 
> JFYI:
> Since this doesn't seem to be checked with the upstream kernel yet,
> I don't know if you are aware of the opinions expressed upstream 
> about the proposed Hyper-V socket patches:
> http://lkml.iu.edu/hypermail/linux/kernel/1607.3/01748.html
> 
> (and if that should either influence your design or design process)

Thanks Sander, I am aware of that conversation going on. However the
problem they have is that hv_sock is similar to vsock (at least in
purpose), and the kernel guys would like to see only one option for
VM-hypervisor communications.  That is understandable.  The Xen
community had a similar discussion when v4v was proposed (we already had
vchan).

This is not an inter-VM or VM-hypervisor communication protocol. It
cannot be replaced with vsock. They might still dislike xensock and even
nack it, but I think it will be for different reasons.

On a related topic, I am thinking of renaming xensock to something more
like "PVCalls".

XenSock is confusing. It encourages comparisons with vsock. xensock
sounds like vsock or hv_sock for xen, which it is not. In fact, in the future
future there might be a virtio version of this protocol, and still it
wouldn't be able to replace virtio-vsock.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [DRAFT v3] XenSock protocol design document

2016-07-28 Thread Sander Eikelenboom

Thursday, July 28, 2016, 8:11:53 PM, you wrote:

> ping

Hi Stefano,

JFYI:
Since this doesn't seem to be checked with the upstream kernel yet,
I don't know if you are aware of the opinions expressed upstream 
about the proposed Hyper-V socket patches:
http://lkml.iu.edu/hypermail/linux/kernel/1607.3/01748.html

(and if that should either influence your design or design process)

--
Sander

> On Wed, 20 Jul 2016, Stefano Stabellini wrote:
>> Hi all,
>> 
>> This is the design document of the XenSock protocol. You can find
>> prototypes of the Linux frontend and backend drivers here:
>> 
>> git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git xensock-3
>> 
>> To use them, make sure to enable CONFIG_XENSOCK in your kernel config
>> and add "xensock=1" to the command line of your DomU Linux kernel. You
>> also need the toolstack to create the initial xenstore nodes for the
>> protocol. To do that, please apply the attached patch to libxl (the
>> patch is based on Xen 4.7.0-rc3) and add "xensock=1" to your DomU config
>> file.
>> 
>> Cheers,
>> 
>> Stefano
>> 
>> 
>> Changes in v3:
>> - add a dummy element to struct xen_xensock_request to make sure the
>>   size of the struct is the same on both x86_32 and x86_64
>> 
>> Changes in v2:
>> - add max-dataring-page-order
>> - add "Publish backend features and transport parameters" to backend
>>   xenbus workflow
>> - update new cmd values
>> - update xen_xensock_request
>> - add backlog parameter to listen and binary layout
>> - add description of new data ring format (interface+data)
>> - modify connect and accept to reflect new data ring format
>> - add link to POSIX docs
>> - add error numbers
>> - add address format section and relevant numeric definitions
>> - add explicit mention of unimplemented commands
>> - add protocol node name
>> - add xenbus shutdown diagram
>> - add socket operation
>> 
>> ---
>> 
>> 
>> # XenSocks Protocol v1
>> 
>> ## Rationale
>> 
>> XenSocks is a paravirtualized protocol for the POSIX socket API.
>> 
>> The purpose of XenSocks is to allow the implementation of a specific set
>> of POSIX functions to be done in a domain other than your own. It allows
>> connect, accept, bind, release, listen, poll, recvmsg and sendmsg to be
>> implemented in another domain.
>> 
>> XenSocks provides the following benefits:
>> * guest networking works out of the box with VPNs, wireless networks and
>>   any other complex configurations on the host
>> * guest services listen on ports bound directly to the backend domain IP
>>   addresses
>> * localhost becomes a secure namespace for inter-VMs communications
>> * full visibility of the guest behavior on the backend domain, allowing
>>   for inexpensive filtering and manipulation of any guest calls
>> * excellent performance
>> 
>> 
>> ## Design
>> 
>> ### Xenstore
>> 
>> The frontend and the backend connect to each other exchanging information via
>> xenstore. The toolstack creates front and back nodes with state
>> XenbusStateInitialising. The protocol node name is **xensock**. There can 
>> only
>> be one XenSock frontend per domain.
>> 
>>  Frontend XenBus Nodes
>> 
>> port
>>  Values: 
>> 
>>  The identifier of the Xen event channel used to signal activity
>>  in the ring buffer.
>> 
>> ring-ref
>>  Values: 
>> 
>>  The Xen grant reference granting permission for the backend to map
>>  the sole page in a single page sized ring buffer.
>> 
>>  Backend XenBus Nodes
>> 
>> max-dataring-page-order
>> Values: 
>> 
>> The maximum supported size of the data ring in units of lb(machine
>> pages). (e.g. 0 == 1 page, 1 == 2 pages, 2 == 4 pages, etc.).
>> 
>>  State Machine
>> 
>> Initialization:
>> 
>> *Front*   *Back*
>> XenbusStateInitialising   XenbusStateInitialising
>> - Query virtual device- Query backend device
>>   properties.   identification data.
>> - Setup OS device instance.   - Publish backend features
>> - Allocate and initialize the   and transport parameters
>>   request ring.  |
>> - Publish transport parameters   |
>>   that will be in effect during  V
>>   this connection.XenbusStateInitWait
>>  |
>>  |
>>  V
>>XenbusStateInitialised
>> 
>>   - Query frontend transport 
>> parameters.
>>   - Connect to the request ring and
>> event channel.
>>  |
>>  |
>>  V
>>  

[Xen-devel] [xen-4.3-testing test] 99717: trouble: blocked/broken/fail/pass

2016-07-28 Thread osstest service owner
flight 99717 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99717/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-i386-pvgrub  3 host-install(3) broken REGR. vs. 96460

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-localmigrate fail REGR. vs. 96460
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 96460
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 96460

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 test-armhf-armhf-xl-vhd   6 xen-boot fail   never pass
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail  never pass
 test-armhf-armhf-libvirt-qcow2  6 xen-boot fail never pass
 test-armhf-armhf-xl   6 xen-boot fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt  6 xen-boot fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale   6 xen-boot fail   never pass
 test-armhf-armhf-libvirt-raw  6 xen-boot fail   never pass
 test-armhf-armhf-xl-cubietruck  6 xen-boot fail never pass
 test-armhf-armhf-xl-credit2   6 xen-boot fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 20 leak-check/checkfail never pass

version targeted for testing:
 xen  f009300d7e2c93f3b98ed4dbe08b3238c9eb0818
baseline version:
 xen  0a8c94fae993dd8f2b27fd4cc694f61c21de84bf

Last test of basis96460  2016-06-30 07:36:20 Z   28 days
Testing same since99717  2016-07-27 18:00:06 Z1 days1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  fail
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-rumpuserxen-amd64   blocked 
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-armhf-armhf-xl-arndale  fail
 test-amd64-amd64-xl-credit2  pass
 test-armhf-armhf-xl-credit2  fail
 test-armhf-armhf-xl-cubietruck   fail
 

Re: [Xen-devel] Xen 4.7.0 boot PANIC on kernel 4.7.0-4 + UEFI ?

2016-07-28 Thread lists
> Hmmm Could you provide full console dump from Xen and Linux kernel?

Will serial console output with these options

kernel: earlyprintk=xen,keep debug loglevel=8
hypervisor: loglvl=all guest_loglvl=all sync_console console_to_ring

do?

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.7.0 boot PANIC on kernel 4.7.0-4 + UEFI ?

2016-07-28 Thread Daniel Kiper
On Wed, Jul 27, 2016 at 09:09:52PM -0400, Konrad Rzeszutek Wilk wrote:
> > > > Sadly not.  The debug symbols need to be specific to the exact binary
> > > > you booted.
> > > >
> > > > Any change in the compilation will result in the translation being
> > > > useless.  What addr2line is doing is saying "which specific bit of
> > > > source code did the compiler/linker end up putting at $X".
> > >
> > > Got it.  Weird that they don't put the .debuginfo rpms in there.  While I 
> > > was searching around kernel bug reports over at the distro there's lots 
> > > of posts telling people to debug.  Not sure then how you do it without 
> > > the debug symbols.
> > >
> > > Guess you have to build your own kernel.
> >
> > I got my hands on a 'matched set'
> >
> > rpm -qa kernel-default\*
> > kernel-default-4.7.0-5.1.x86_64
> > kernel-default-devel-4.7.0-5.1.x86_64
> > kernel-default-debuginfo-4.7.0-5.1.x86_64
> >
> > reboot to Xen, still crashes
> >
> > (XEN) [2016-07-28 00:13:18] [ Xen-4.7.0_08-452  x86_64  
> > debug=n  Tainted:C ]
> > (XEN) [2016-07-28 00:13:18] CPU:0
> > >>> (XEN) [2016-07-28 00:13:18] RIP:e033:[]
> > (XEN) [2016-07-28 00:13:18] RFLAGS: 0246   EM: 1   
> > CONTEXT: pv guest (d0v0)
> > (XEN) [2016-07-28 00:13:18] rax:    rbx: 
> >    rcx: 00016f144000
> > (XEN) [2016-07-28 00:13:18] rdx: 0001   rsi: 
> > 00016f144000   rdi: f000
> > (XEN) [2016-07-28 00:13:18] rbp: 0100   rsp: 
> > 81e03e50   r8:  81efb0c0
> > (XEN) [2016-07-28 00:13:18] r9:     r10: 
> >    r11: 0001
> > (XEN) [2016-07-28 00:13:18] r12:    r13: 
> >    r14: 81e03f28
> > (XEN) [2016-07-28 00:13:18] r15:    cr0: 
> > 80050033   cr4: 001526e0
> > (XEN) [2016-07-28 00:13:18] cr3: 000841e06000   cr2: 
> > 0018
> > (XEN) [2016-07-28 00:13:18] ds:    es:    fs:    
> > gs:    ss: e02b   cs: e033
> > (XEN) [2016-07-28 00:13:18] Guest stack trace from 
> > rsp=81e03e50:
> >
> > check ar the RIP addr
> >
> > addr2line -e /usr/lib/debug/boot/vmlinux-4.7.0-5-default.debug 
> > 81f63eb0
> > 
> > /usr/src/debug/kernel-default-4.7.0/linux-4.7/linux-obj/../arch/x86/platform/efi/efi.c:123
> >
> > in source
> >
> > @ 
> > https://github.com/torvalds/linux/blob/v4.7/arch/x86/platform/efi/efi.c
> >
> > ...
> > void __init efi_find_mirror(void)
> > {
> > efi_memory_desc_t *md;
> > u64 mirror_size = 0, total_size = 0;
> >
> > for_each_efi_memory_desc(md) {
> > unsigned long long start = md->phys_addr;
> > 123 unsigned long long size = md->num_pages << 
> > EFI_PAGE_SHIFT;
> >
> > total_size += size;
> > if (md->attribute & EFI_MEMORY_MORE_RELIABLE) {
> > memblock_mark_mirror(start, size);
> > mirror_size += size;
> > }
> > }
> > if (mirror_size)
> > pr_info("Memory: %lldM/%lldM mirrored memory\n",
> > mirror_size>>20, total_size>>20);
> > }
> > ...
> >
>
> +CC-ing Daniel.

Hmmm Could you provide full console dump from Xen and Linux kernel?

Daniel

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [ovmf test] 99721: all pass - PUSHED

2016-07-28 Thread osstest service owner
flight 99721 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99721/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d
baseline version:
 ovmf 136c648f5985a725fbd399085c16932a4c2f65d7

Last test of basis99697  2016-07-26 04:55:11 Z2 days
Testing same since99721  2016-07-27 18:00:18 Z1 days1 attempts


People who touched revisions under test:
  Hao Wu 
  Laszlo Ersek 
  Ruiyu Ni 
  Satya Yarlagadda 
  Thomas Palmer 
  Yarlagadda, Satya P 
  Yonghong Zhu 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 
39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d
+ branch=ovmf
+ revision=39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.7-testing
+ '[' x39dbc4d5534790b5efcd67ce6b0f82ac23c6db6d = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;

Re: [Xen-devel] [DRAFT v3] XenSock protocol design document

2016-07-28 Thread Stefano Stabellini
ping

On Wed, 20 Jul 2016, Stefano Stabellini wrote:
> Hi all,
> 
> This is the design document of the XenSock protocol. You can find
> prototypes of the Linux frontend and backend drivers here:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git xensock-3
> 
> To use them, make sure to enable CONFIG_XENSOCK in your kernel config
> and add "xensock=1" to the command line of your DomU Linux kernel. You
> also need the toolstack to create the initial xenstore nodes for the
> protocol. To do that, please apply the attached patch to libxl (the
> patch is based on Xen 4.7.0-rc3) and add "xensock=1" to your DomU config
> file.
> 
> Cheers,
> 
> Stefano
> 
> 
> Changes in v3:
> - add a dummy element to struct xen_xensock_request to make sure the
>   size of the struct is the same on both x86_32 and x86_64
> 
> Changes in v2:
> - add max-dataring-page-order
> - add "Publish backend features and transport parameters" to backend
>   xenbus workflow
> - update new cmd values
> - update xen_xensock_request
> - add backlog parameter to listen and binary layout
> - add description of new data ring format (interface+data)
> - modify connect and accept to reflect new data ring format
> - add link to POSIX docs
> - add error numbers
> - add address format section and relevant numeric definitions
> - add explicit mention of unimplemented commands
> - add protocol node name
> - add xenbus shutdown diagram
> - add socket operation
> 
> ---
> 
> 
> # XenSocks Protocol v1
> 
> ## Rationale
> 
> XenSocks is a paravirtualized protocol for the POSIX socket API.
> 
> The purpose of XenSocks is to allow the implementation of a specific set
> of POSIX functions to be done in a domain other than your own. It allows
> connect, accept, bind, release, listen, poll, recvmsg and sendmsg to be
> implemented in another domain.
> 
> XenSocks provides the following benefits:
> * guest networking works out of the box with VPNs, wireless networks and
>   any other complex configurations on the host
> * guest services listen on ports bound directly to the backend domain IP
>   addresses
> * localhost becomes a secure namespace for inter-VM communication
> * full visibility of the guest behavior on the backend domain, allowing
>   for inexpensive filtering and manipulation of any guest calls
> * excellent performance
> 
> 
> ## Design
> 
> ### Xenstore
> 
> The frontend and the backend connect to each other exchanging information via
> xenstore. The toolstack creates front and back nodes with state
> XenbusStateInitialising. The protocol node name is **xensock**. There can only
> be one XenSock frontend per domain.
> 
>  Frontend XenBus Nodes
> 
> port
>  Values: 
> 
>  The identifier of the Xen event channel used to signal activity
>  in the ring buffer.
> 
> ring-ref
>  Values: 
> 
>  The Xen grant reference granting permission for the backend to map
>  the sole page in a single page sized ring buffer.
> 
>  Backend XenBus Nodes
> 
> max-dataring-page-order
> Values: 
> 
> The maximum supported size of the data ring in units of lb(machine
> pages). (e.g. 0 == 1 page, 1 == 2 pages, 2 == 4 pages, etc.).
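
(Illustration only: the value is a binary log, so a consumer would turn it
into a page count and byte size roughly as below. The names and the 4KB
machine page size are assumptions, not part of the series.)

    /* lb(machine pages) -> pages/bytes: 0 == 1 page, 1 == 2 pages, ... */
    #define MACHINE_PAGE_SIZE 4096UL    /* assumed machine page size */

    static unsigned long dataring_pages(unsigned int page_order)
    {
        return 1UL << page_order;
    }

    static unsigned long dataring_bytes(unsigned int page_order)
    {
        return dataring_pages(page_order) * MACHINE_PAGE_SIZE;
    }

A frontend would pick its own order and clamp it to the backend's
advertised maximum.
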
> 
>  State Machine
> 
> Initialization:
> 
> *Front*   *Back*
> XenbusStateInitialising   XenbusStateInitialising
> - Query virtual device- Query backend device
>   properties.   identification data.
> - Setup OS device instance.   - Publish backend features
> - Allocate and initialize the   and transport parameters
>   request ring.  |
> - Publish transport parameters   |
>   that will be in effect during  V
>   this connection.XenbusStateInitWait
>  |
>  |
>  V
>XenbusStateInitialised
> 
>   - Query frontend transport 
> parameters.
>   - Connect to the request ring and
> event channel.
>  |
>  |
>  V
>  XenbusStateConnected
> 
>  - Query backend device properties.
>  - Finalize OS virtual device
>instance.
>  |
>  |
>  V
> XenbusStateConnected
> 
> Once frontend and backend are connected, they have a shared page, which
> is used to exchange messages over a ring, and an event channel, which
> is used to send notifications.
> 
> Shutdown:
> 
> *Front**Back*
> XenbusStateConnected   XenbusStateConnected
> 

Re: [Xen-devel] [RFC 13/22] xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry

2016-07-28 Thread Julien Grall

Hello Tamas,

On 28/07/2016 18:29, Tamas K Lengyel wrote:

On Thu, Jul 28, 2016 at 8:51 AM, Julien Grall  wrote:


[...]


---
 xen/arch/arm/p2m.c | 18 --
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8676b9d..9a9c85c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -398,24 +398,13 @@ out:
 return mfn;
 }

-/*
- * Lookup the MFN corresponding to a domain's GFN.
- *
- * There are no processor functions to do a stage 2 only lookup therefore we
- * do a a software walk.
- */
-static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
-{
-return p2m_get_entry(&d->arch.p2m, gfn, t, NULL, NULL);
-}
-
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
 mfn_t ret;
 struct p2m_domain *p2m = &d->arch.p2m;

 p2m_read_lock(p2m);
-ret = __p2m_lookup(d, gfn, t);
+ret = p2m_get_entry(p2m, gfn, t, NULL, NULL);
 p2m_read_unlock(p2m);

 return ret;
@@ -679,7 +668,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
  * No setting was found in the Radix tree. Check if the
  * entry exists in the page-tables.
  */
-mfn_t mfn = __p2m_lookup(d, gfn, NULL);
+mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);

 if ( mfn_eq(mfn, INVALID_MFN) )
 return -ESRCH;
@@ -1595,6 +1584,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned 
long flag)
 xenmem_access_t xma;
 p2m_type_t t;
 struct page_info *page = NULL;
+struct p2m_domain *p2m = &current->domain->arch.p2m;


I think this would be a good time to change this and pass p2m as an
input to p2m_mem_access_check_and_get_page. This would help with our
altp2m series as well.


This function can only work with the current p2m because of the call to
gva_to_ipa. So I don't think it is a good idea to pass the p2m as a parameter.


If you want to pass the p2m as a parameter, you have to properly context
switch all the relevant registers (i.e. VTTBR_EL2, TTBR{0,1}_EL1 and SCTLR_EL1).


Actually, this function is buggy if memaccess has changed the permissions
on the memory holding the stage-1 page-tables. Because we are using the
hardware to translate the VA -> PA, the translation may fail due to memaccess.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 00/22] xen/arm: Rework the P2M code to follow break-before-make sequence

2016-07-28 Thread Tamas K Lengyel
Hi Julien,

> I sent this patch series as an RFC because there are still some TODOs
> in the code (mostly sanity check and possible optimization) and I have
> done limited testing. However, I think it is a good shape to start reviewing,
> get more feedback and have wider testing on different board.

I've tested this series on my Cubietruck but when I try to enable
xen-access on a domain I get the following errors:

~/xen/tools/tests/xen-access# ./xen-access 1 write
xenaccess init
max_gpfn = 48000
starting write 1
(XEN) traps.c:2569:d1v0 HSR=0x904f pc=0xc029eb10 gva=0xc0e013c0
gpa=0x0040e013c0
Error -1 setting all memory to access type 5

The same thing works fine on the latest staging build, so this series
introduces some regression along the way.

Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 13/22] xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 11:29 AM, Tamas K Lengyel  wrote:
> On Thu, Jul 28, 2016 at 8:51 AM, Julien Grall  wrote:
>> __p2m_lookup is just a wrapper to p2m_get_entry.
>>
>> Signed-off-by: Julien Grall 
>> Cc: Razvan Cojocaru 
>> Cc: Tamas K Lengyel 
>>
>> ---
>> It might be possible to rework the memaccess code to take advantage
>> of all the parameters. I will defer this to the memaccess folks.
>
> Could you elaborate on what you mean?
>

Never mind, I see it. Yes, doing __p2m_get_mem_access and then
p2m_get_entry later duplicates work. I would suggest just replacing
__p2m_get_mem_access with a single call to p2m_get_entry to get both
the type and the mem_access setting on the page in a single run.
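
(For illustration, a rough sketch of that combined lookup, assuming the
p2m_get_entry() prototype used elsewhere in this series, i.e. p2m, gfn,
type, access and page order, plus the locking pattern visible in the patch:)

    p2m_type_t t;
    p2m_access_t a;
    mfn_t mfn;

    p2m_read_lock(p2m);
    /* One walk returns the mfn, the type and the access setting. */
    mfn = p2m_get_entry(p2m, gfn, &t, &a, NULL);
    p2m_read_unlock(p2m);

    if ( mfn_eq(mfn, INVALID_MFN) )
        return -ESRCH;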

Thanks,
Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 13/22] xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 8:51 AM, Julien Grall  wrote:
> __p2m_lookup is just a wrapper to p2m_get_entry.
>
> Signed-off-by: Julien Grall 
> Cc: Razvan Cojocaru 
> Cc: Tamas K Lengyel 
>
> ---
> It might be possible to rework the memaccess code to take advantage
> of all the parameters. I will defer this to the memaccess folks.

Could you elaborate on what you mean?

> ---
>  xen/arch/arm/p2m.c | 18 --
>  1 file changed, 4 insertions(+), 14 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 8676b9d..9a9c85c 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -398,24 +398,13 @@ out:
>  return mfn;
>  }
>
> -/*
> - * Lookup the MFN corresponding to a domain's GFN.
> - *
> - * There are no processor functions to do a stage 2 only lookup therefore we
> - * do a a software walk.
> - */
> -static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
> -{
> -return p2m_get_entry(&d->arch.p2m, gfn, t, NULL, NULL);
> -}
> -
>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>  {
>  mfn_t ret;
>  struct p2m_domain *p2m = &d->arch.p2m;
>
>  p2m_read_lock(p2m);
> -ret = __p2m_lookup(d, gfn, t);
> +ret = p2m_get_entry(p2m, gfn, t, NULL, NULL);
>  p2m_read_unlock(p2m);
>
>  return ret;
> @@ -679,7 +668,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t 
> gfn,
>   * No setting was found in the Radix tree. Check if the
>   * entry exists in the page-tables.
>   */
> -mfn_t mfn = __p2m_lookup(d, gfn, NULL);
> +mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
>
>  if ( mfn_eq(mfn, INVALID_MFN) )
>  return -ESRCH;
> @@ -1595,6 +1584,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned 
> long flag)
>  xenmem_access_t xma;
>  p2m_type_t t;
>  struct page_info *page = NULL;
> +struct p2m_domain *p2m = &current->domain->arch.p2m;

I think this would be a good time to change this and pass p2m as an
input to p2m_mem_access_check_and_get_page. This would help with our
altp2m series as well.

Thanks,
Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 1/2] xen: fix a (latent) cpupool-related race during domain destroy

2016-07-28 Thread Dario Faggioli
On Mon, 2016-07-18 at 16:09 +0200, Juergen Gross wrote:
> Acked-by: Juergen Gross 
> 
> for this patch then.
> 
George,

Ping about this series.

It's not terribly urgent, but it should be easy enough, so I guess
there is a chance that you can have a quick look.

If you can't, sorry for the noise, I'll re-ping you in a bit. :-)

Thanks and Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R Ltd., Cambridge (UK)



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v3 07/13] tables.h: add linker table support

2016-07-28 Thread H. Peter Anvin

On July 27, 2016 4:02:18 PM PDT, "Luis R. Rodriguez"  wrote:
>On Tue, Jul 26, 2016 at 12:30:14AM +0900, Masami Hiramatsu wrote:
>> On Fri, 22 Jul 2016 14:24:41 -0700
>> "Luis R. Rodriguez"  wrote:
>> 
>> > +/**
>> > + * LINKTABLE_RUN_ALL - iterate and run through all entries on a
>linker table
>> > + *
>> > + * @tbl: linker table
>> > + * @func: structure name for the function name we want to call.
>> > + * @args...: arguments to pass to func
>> > + *
>> > + * Example usage:
>> > + *
>> > + *   LINKTABLE_RUN_ALL(frobnicator_fns, some_run,);
>> > + */
>> > +#define LINKTABLE_RUN_ALL(tbl, func, args...) 
>> > \
>> > +do {  
>> > \
>> > +  size_t i;   \
>> > +  for (i = 0; i < LINUX_SECTION_SIZE(tbl); i++)   \
>> > +  (tbl[i]).func (args);   \
>> > +} while (0);
>> > +
>> > +/**
>> > + * LINKTABLE_RUN_ERR - run each linker table entry func and return
>error if any
>> > + *
>> > + * @tbl: linker table
>> > + * @func: structure name for the function name we want to call.
>> > + * @args...: arguments to pass to func
>> > + *
>> > + * Example usage:
>> > + *
>> > + *   unsigned int err = LINKTABLE_RUN_ERR(frobnicator_fns,
>some_run,);
>> > + */
>> > +#define LINKTABLE_RUN_ERR(tbl, func, args...) 
>> > \
>> > +({
>> > \
>> > +  size_t i;   \
>> > +  int err = 0;\
>> > +  for (i = 0; !err && i < LINUX_SECTION_SIZE(tbl); i++)   \
>> > +  err = (tbl[i]).func (args); \
>> > +  err; \
>> > +})
>> 
>> These iteration APIs are a bit dangerous, at least for these APIs
>we'd better change
>> name like as FUNCTABLE_RUN etc. because LINKTABLE can contain not
>only function address
>> but also some data (or address of data).
>
>Sure will do, thanks for the review.
>
>  Luis

I don't know if they are dangerous.  Keep in mind C type checking is still 
present.
-- 
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 21/22] xen/arm: p2m: Re-implement p2m_set_mem_access using p2m_{set, get}_entry

2016-07-28 Thread Tamas K Lengyel
On Thu, Jul 28, 2016 at 8:51 AM, Julien Grall  wrote:
> The function p2m_set_mem_access can be re-implemented using the generic
> functions p2m_get_entry and __p2m_set_entry.
>
> Note that because of the implementation of p2m_get_entry, a TLB
> invalidation instruction will be issued for each 4KB page. Therefore the
> performance of memaccess will be impacted, however the function is now
> safe on all the processors.
>
> Also the function apply_p2m_changes is dropped completely as it is not
> unused anymore.

Typo, (not *used*).

[...]

> @@ -2069,6 +1780,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, 
> uint32_t nr,
>  {
>  struct p2m_domain *p2m = p2m_get_hostp2m(d);
>  p2m_access_t a;
> +unsigned int order;
>  long rc = 0;
>
>  static const p2m_access_t memaccess[] = {
> @@ -2111,8 +1823,43 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, 
> uint32_t nr,
>  return 0;
>  }
>
> -rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
> -   (nr - start), INVALID_MFN, mask, 0, a);
> +p2m_write_lock(p2m);
> +
> +for ( gfn = gfn_add(gfn, start); nr > start; gfn = gfn_add(gfn, 1UL << 
> order) )

Long line here (84 width).

Tamas

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.7.0 boot PANIC on kernel 4.7.0-4 + UEFI ?

2016-07-28 Thread lists
anyone need any addl info from my end to help ?

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH linux] xen: change the type of xen_vcpu_id to uint32_t

2016-07-28 Thread David Vrabel
On 28/07/16 17:24, Vitaly Kuznetsov wrote:
> We pass xen_vcpu_id mapping information to hypercalls which require
> uint32_t type so it would be cleaner to have it as uint32_t. The
> initializer to -1 can be dropped as we always do the mapping before using
> it and we never check the 'not set' value anyway.
[...]
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -9,7 +9,7 @@
>  
>  DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
>  
> -DECLARE_PER_CPU(int, xen_vcpu_id);
> +DECLARE_PER_CPU(uint32_t, xen_vcpu_id);
>  static inline int xen_vcpu_nr(int cpu)

Should the return type of this change to uint32_t as well?

>  {
>   return per_cpu(xen_vcpu_id, cpu);
> 

David


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] XenProject/XenServer QEMU working group, Friday 8th July, 2016, 15:00.

2016-07-28 Thread Jennifer Herbert


XenProject/XenServer QEMU working group, Friday 8th July, 2016, 15:00.


Date and Attendees
==

XenProject/XenServer QEMU working group, held on Friday the 8th of July,
2016 at 15:00.

The Following were present:

Ian Jackson [platform team]
David Vrabel [ring0]
Andrew Cooper [ring0]
Jennifer Herbert [ring0]


Purpose of meeting
==

Both XenServer (currently using qemu-trad with de-priv) and XenProject
(using QEMU (upstream) without de-priv) would like to move to using
QEMU (upstream) de-privileged.

This meeting was intended to restart XenServer/ring0 and XenProject's
collaboration.

Agenda includes:

* Discuss requirements
* Review our status and strategy for achieving these requirements.
* Agree next steps.

Meeting Actions
===

Ian: Find all the Xenstore keys used by QEMU, and evaluate how much work
     it would be to either read them before privileges are dropped, or
     otherwise stop reading/writing them when de-privileged.

Ian: Write up DM_Opp design (discussed during meeting), talk to Jan
 about this.

XenServer: Go through all the priv-cmd ops, check how they would fit
     within the dm-opp design.

David: Post Event channel / priv command patch to upstream.  (Now done)


Meeting Transcript - abridged
=


Discussed Goals
---

Everyone agrees we want to stop anything gaining control of QEMU from
also gaining access to the platform.

XenProject would like to use depriv by default.  Should not preclude
other configurations such as PCI pass though.

Ian: There is a project in progress to use stub domains with QEMU –
 would expect a user to run depriv or stubdoms, but not at the same
 time.
XS: states that it does not want to go down the stub domains route, as it
is not considered scalable. (Boot storms etc.)

Ian: In upstream we expect to pursue both depriv qemu, and stub qemu, as
   implementation strategies.  Depriv qemu is probably deliverable
   sooner.  Stub qemu is more secure but not suitable for all users
   (eg, maybe not for XenServer as David suggested).


Going though list technology areas listed in document shared beforehand.

Disk / Network / Host IO
-

Can be addressed by running as an unprivileged user.  Andrew: Rather
Linux centric - not a POSIX solution for all these problems.  Ian: There
are similar interfaces on other platforms.  In upstream we expect to at
least be able to run as an unprivileged user, which is indeed portable.

Mem map


XenServer has a solution.  Ian: Would like to see this shared
immediately.  Andrew: Consider rest of design first.

HyperCalls
--

Three options:
1: XSM
2: Fix ABI.  Ian: Re-arrange priv-cmd to make it easier to parse and
   restrict.
3: Loadable pattern matching solution.  All agree this would be
   horrible to maintain.

Ian: Difference between not having Stable API, and the instability of
 ABI stopping the determination of domain.
Ian: Version compatibility out of scope.  Andrew:  Disagree, should do
 both together.

Andrew: Requirement: – no interdependence between QEMU and the kernel,
such that you need to update both together.

All: Agree xen interface should not be embedded in the kernel.

Jen: Asked why Ian was against using XSM.
Ian: XSM wholesale replaces existing checks, and current XSM policies are
 unproven.
Andrew: XS is experimenting, and there are problems.
Ian & Andrew: Agree would need auditing.
Ian: Reluctant to link xsm to this work.
David: Can do both – XSM and 'Fix ABI'.
Ian: With both, wouldn't need the nesting.
David: Would still need the nesting to wrap domains.
Ian: Doesn't have to do all ….
David: 
David (wearing a Linux hat): Sceptical about a stable API – don't want to
need a new kernel when new API features are added.
Ian: Suggest ideas of DM-Space – D-ctrl space.
David: Would need to be a strong component.  Both ends would want
   commitment ~ no snow flakes.
Ian: Shouldn't be a problem.
Andrew: PVH would replace QEMU.   Ian:  Not relevant.

Idea of DM-ops is discussed more.
DM-op restricted to a domain.
No need to version this.
Important fields (domain) fixed, such that DM-ops can be added or
changed without affecting the kernel interface (sketch below).
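
(A purely hypothetical shape for such an interface, only to illustrate the
idea of a fixed domain field with an opaque payload; nothing below is an
agreed ABI:)

    /* The kernel only needs to understand the target domain and the buffer
     * list; the op payload itself stays opaque to it, so new DM ops can be
     * added without touching the kernel interface. */
    struct dm_op_buf {
        void *ptr;              /* start of one payload buffer */
        unsigned long size;     /* its length in bytes */
    };

    /* hypercall: dm_op(domid, nr_bufs, bufs); domid is always inspectable,
     * so the privcmd driver can enforce a per-fd domain restriction. */
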

Andrew brings up XenGT.   Is this considered part of QEMU?  QEMU
shouldn't set PCI BARs.
Question of whether DM-op would just be HVM-op.
David: Should enumerate hyper-calls, make sure in DM-ops.
Andrew & Ian : would need to discuss with Jan.
Some things may need to be in both dm-op & other things, ie guest
interface. - This is ok.

dm-ops is tentatively agreed on as a way forward.

XenStore Restrict.
--

Jen : Upstream discussing maybe dropping sockets.  (Necessary for
restrict)
David: Problem is it gives the same permissions as the guest – does
this restrict too much?

Should look through all uses, see if they can be used before it drops
privileges.

Ian: Already looked at QEMU PV 

[Xen-devel] [PATCH linux] xen: change the type of xen_vcpu_id to uint32_t

2016-07-28 Thread Vitaly Kuznetsov
We pass xen_vcpu_id mapping information to hypercalls which require
uint32_t type so it would be cleaner to have it as uint32_t. The
initializer to -1 can be dropped as we always do the mapping before using
it and we never check the 'not set' value anyway.

Signed-off-by: Vitaly Kuznetsov 
---
 arch/arm/xen/enlighten.c | 2 +-
 arch/x86/xen/enlighten.c | 2 +-
 include/xen/xen-ops.h| 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 6d3a171..1752116 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -47,7 +47,7 @@ DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
 static struct vcpu_info __percpu *xen_vcpu_info;
 
 /* Linux <-> Xen vCPU id mapping */
-DEFINE_PER_CPU(int, xen_vcpu_id) = -1;
+DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
 EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
 
 /* These are unused until we support booting "pre-ballooned" */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 54eef1a..78a14a0 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -120,7 +120,7 @@ DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
 DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
 
 /* Linux <-> Xen vCPU id mapping */
-DEFINE_PER_CPU(int, xen_vcpu_id) = -1;
+DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
 EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
 
 enum xen_domain_type xen_domain_type = XEN_NATIVE;
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index a4926f1..648ce814 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -9,7 +9,7 @@
 
 DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
 
-DECLARE_PER_CPU(int, xen_vcpu_id);
+DECLARE_PER_CPU(uint32_t, xen_vcpu_id);
 static inline int xen_vcpu_nr(int cpu)
 {
return per_cpu(xen_vcpu_id, cpu);
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC PATCHv1] xen/privcmd: add IOCTL_PRIVCMD_RESTRICT_DOMID

2016-07-28 Thread David Vrabel
This restricts the file descriptor to only being able to map foreign
memory belonging to a specific domain.  Once a file descriptor has
been restricted, its restriction cannot be removed or changed.

A device model (e.g., QEMU) or similar can make use of this before
dropping privileges, to prevent the file descriptor being used to
escalate privileges if the process is compromised.
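
(Illustrative userspace sequence only; the ioctl name and struct come from
the patch below, while the header path and helper name are assumed:)

    #include <sys/ioctl.h>
    #include <err.h>
    #include <xen/privcmd.h>   /* IOCTL_PRIVCMD_RESTRICT_DOMID, added below */

    static void restrict_privcmd(int privcmd_fd, domid_t domid)
    {
        struct privcmd_restrict_domid prd = { .domid = domid };

        if (ioctl(privcmd_fd, IOCTL_PRIVCMD_RESTRICT_DOMID, &prd))
            err(1, "cannot restrict privcmd fd to domain %u", (unsigned)domid);

        /* ...now drop privileges; from here on the fd can only map foreign
         * memory belonging to 'domid'. */
    }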

FIXME: This is not good enough (yet) as it does not restrict what
hypercalls may be performed.  Fixing this requires a hypervisor ABI
change.

Signed-off-by: David Vrabel 
---
 drivers/xen/privcmd.c  | 75 ++
 include/uapi/xen/privcmd.h | 26 
 2 files changed, 96 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index df2e6f7..513d1c5 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -43,6 +43,18 @@ MODULE_LICENSE("GPL");
 
 #define PRIV_VMA_LOCKED ((void *)1)
 
+#define UNRESTRICTED_DOMID ((domid_t)-1)
+
+struct privcmd_data {
+   domid_t restrict_domid;
+};
+
+static bool privcmd_is_allowed(struct privcmd_data *priv, domid_t domid)
+{
+   return priv->restrict_domid == UNRESTRICTED_DOMID
+   || priv->restrict_domid == domid;
+}
+
 static int privcmd_vma_range_is_mapped(
struct vm_area_struct *vma,
unsigned long addr,
@@ -229,7 +241,7 @@ static int mmap_gfn_range(void *data, void *state)
return 0;
 }
 
-static long privcmd_ioctl_mmap(void __user *udata)
+static long privcmd_ioctl_mmap(struct privcmd_data *priv, void __user *udata)
 {
struct privcmd_mmap mmapcmd;
struct mm_struct *mm = current->mm;
@@ -245,6 +257,9 @@ static long privcmd_ioctl_mmap(void __user *udata)
if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
return -EFAULT;
 
+   if (!privcmd_is_allowed(priv, mmapcmd.dom))
+   return -EACCES;
+
rc = gather_array(&pagelist,
  mmapcmd.num, sizeof(struct privcmd_mmap_entry),
  mmapcmd.entry);
@@ -416,7 +431,8 @@ static int alloc_empty_pages(struct vm_area_struct *vma, 
int numpgs)
 
 static const struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
+static long privcmd_ioctl_mmap_batch(struct privcmd_data *priv, void __user 
*udata,
+int version)
 {
int ret;
struct privcmd_mmapbatch_v2 m;
@@ -446,6 +462,9 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, 
int version)
return -EINVAL;
}
 
+   if (!privcmd_is_allowed(priv, m.dom))
+   return -EACCES;
+
nr_pages = DIV_ROUND_UP(m.num, XEN_PFN_PER_PAGE);
if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
return -EINVAL;
@@ -548,9 +567,28 @@ out_unlock:
goto out;
 }
 
+static int privcmd_ioctl_restrict_domid(struct privcmd_data *priv,
+   void __user *udata)
+{
+   struct privcmd_restrict_domid prd;
+
+   if (copy_from_user(&prd, udata, sizeof(prd)))
+   return -EFAULT;
+
+   if (prd.domid >= DOMID_FIRST_RESERVED)
+   return -EINVAL;
+   if (priv->restrict_domid != UNRESTRICTED_DOMID)
+   return -EACCES;
+
+   priv->restrict_domid = prd.domid;
+
+   return 0;
+}
+
 static long privcmd_ioctl(struct file *file,
  unsigned int cmd, unsigned long data)
 {
+   struct privcmd_data *priv = file->private_data;
int ret = -ENOSYS;
void __user *udata = (void __user *) data;
 
@@ -560,15 +598,19 @@ static long privcmd_ioctl(struct file *file,
break;
 
case IOCTL_PRIVCMD_MMAP:
-   ret = privcmd_ioctl_mmap(udata);
+   ret = privcmd_ioctl_mmap(priv, udata);
break;
 
case IOCTL_PRIVCMD_MMAPBATCH:
-   ret = privcmd_ioctl_mmap_batch(udata, 1);
+   ret = privcmd_ioctl_mmap_batch(priv, udata, 1);
break;
 
case IOCTL_PRIVCMD_MMAPBATCH_V2:
-   ret = privcmd_ioctl_mmap_batch(udata, 2);
+   ret = privcmd_ioctl_mmap_batch(priv, udata, 2);
+   break;
+
+   case IOCTL_PRIVCMD_RESTRICT_DOMID:
+   ret = privcmd_ioctl_restrict_domid(priv, udata);
break;
 
default:
@@ -644,10 +686,33 @@ static int privcmd_vma_range_is_mapped(
   is_mapped_fn, NULL) != 0;
 }
 
+static int privcmd_open(struct inode *ino, struct file *filp)
+{
+   struct privcmd_data *priv;
+
+   priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+   if (!priv)
+   return -ENOMEM;
+
+   priv->restrict_domid = UNRESTRICTED_DOMID;
+
+   return 0;
+}
+
+static int privcmd_release(struct inode *inode, struct file *file)
+{
+   struct privcmd_data *priv = 

Re: [Xen-devel] [PATCH 2/2] x86/mm: Annotate gfn_get_* helpers as requiring non-NULL parameters

2016-07-28 Thread Andrew Cooper
On 28/07/16 16:58, George Dunlap wrote:
> On 27/07/16 19:08, Andrew Cooper wrote:
>> Introduce and use the nonnull attribute to help the compiler catch NULL
>> parameters being passed to function which require their parameters not to be
>> NULL.  Experimentally, GCC 4.9 on Debian Jessie only warns of non-NULL-ness
>> from immediate callers, so propagate the attributes out to all helpers.
>>
>> A sample error looks like:
>>
>> mem_sharing.c: In function ‘mem_sharing_nominate_page’:
>> mem_sharing.c:884:13: error: null argument where non-null required (argument 
>> 3) [-Werror=nonnull]
 amfn = get_gfn_type_access(ap2m, gfn, NULL, &ap2ma, 0, NULL);
>>  ^
>>
>> As part of this, replace the get_gfn_type_access() macro with an equivalent
>> static inline function for extra type safety, and the ability to be 
>> annotated.
>>
>> Signed-off-by: Andrew Cooper 
> At a high level this looks like it's probably an improvement; I'd like
> to hear opinions of people who tend to have stronger opinions here first.
>
> One technical comment...
>
>> ---
>> CC: Jan Beulich 
>> CC: Tim Deegan 
>> CC: George Dunlap 
>> CC: Tamas K Lengyel 
>> ---
>>  xen/include/asm-x86/p2m.h  | 19 +++
>>  xen/include/xen/compiler.h |  2 ++
>>  2 files changed, 13 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>> index 194020e..e35d59c 100644
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -380,9 +380,9 @@ void p2m_unlock_and_tlb_flush(struct p2m_domain *p2m);
>>   * After calling any of the variants below, caller needs to use
>>   * put_gfn. /
>>  
>> -mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
>> -p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
>> -unsigned int *page_order, bool_t locked);
>> +mfn_t __nonnull(1, 3, 4) __get_gfn_type_access(
> __get_gfn_type_access() explicitly tolerates p2m being NULL, so '1'
> should be removed from the list (both here and below).

So it does.  I wonder why that is?  I presume PV guests don't have a p2m.

Looking through this code, it seems to be an unnecessarily complicated
tangle :s

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-4.4-testing test] 99711: trouble: blocked/broken/fail/pass

2016-07-28 Thread osstest service owner
flight 99711 xen-4.4-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99711/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-raw3 host-install(3) broken REGR. vs. 95615
 test-amd64-i386-pv3 host-install(3) broken REGR. vs. 95615
 test-amd64-amd64-xl   3 host-install(3) broken REGR. vs. 95615
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 95615

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 95615
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 95615
 test-armhf-armhf-xl-multivcpu 15 guest-start/debian.repeatfail  like 95615
 test-amd64-i386-xend-qemut-winxpsp3  9 windows-install fail like 95615

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-installfail never pass
 test-armhf-armhf-xl-vhd   9 debian-di-installfail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail   never pass
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 11 guest-start  fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass

version targeted for testing:
 xen  0fe7d6961755812503694e9a4741b5f35a09d1f7
baseline version:
 xen  36a5a8785065ad4e3110a4bd30967b1410f99138

Last test of basis95615  2016-06-12 17:47:03 Z   45 days
Testing same since99711  2016-07-27 17:59:34 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-amd64-xend pass
 build-i386-xend  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  broken  
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 

Re: [Xen-devel] [PATCH 2/2] x86/mm: Annotate gfn_get_* helpers as requiring non-NULL parameters

2016-07-28 Thread George Dunlap
On 27/07/16 19:08, Andrew Cooper wrote:
> Introduce and use the nonnull attribute to help the compiler catch NULL
> parameters being passed to function which require their parameters not to be
> NULL.  Experimentally, GCC 4.9 on Debian Jessie only warns of non-NULL-ness
> from immediate callers, so propagate the attributes out to all helpers.
> 
> A sample error looks like:
> 
> mem_sharing.c: In function ‘mem_sharing_nominate_page’:
> mem_sharing.c:884:13: error: null argument where non-null required (argument 
> 3) [-Werror=nonnull]
>  amfn = get_gfn_type_access(ap2m, gfn, NULL, &ap2ma, 0, NULL);
>  ^
> 
> As part of this, replace the get_gfn_type_access() macro with an equivalent
> static inline function for extra type safety, and the ability to be annotated.
> 
> Signed-off-by: Andrew Cooper 

At a high level this looks like it's probably an improvement; I'd like
to hear opinions of people who tend to have stronger opinions here first.

One technical comment...

> ---
> CC: Jan Beulich 
> CC: Tim Deegan 
> CC: George Dunlap 
> CC: Tamas K Lengyel 
> ---
>  xen/include/asm-x86/p2m.h  | 19 +++
>  xen/include/xen/compiler.h |  2 ++
>  2 files changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 194020e..e35d59c 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -380,9 +380,9 @@ void p2m_unlock_and_tlb_flush(struct p2m_domain *p2m);
>   * After calling any of the variants below, caller needs to use
>   * put_gfn. /
>  
> -mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
> -p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
> -unsigned int *page_order, bool_t locked);
> +mfn_t __nonnull(1, 3, 4) __get_gfn_type_access(

__get_gfn_type_access() explicitly tolerates p2m being NULL, so '1'
should be removed from the list (both here and below).
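
(i.e. a sketch of the corrected annotation, with argument 1 dropped from
the list:)

    mfn_t __nonnull(3, 4) __get_gfn_type_access(
        struct p2m_domain *p2m, unsigned long gfn,
        p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
        unsigned int *page_order, bool_t locked);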

 -George


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/2] x86/mm: Avoid NULL dereference when checking altp2m's for shareability

2016-07-28 Thread George Dunlap
On 27/07/16 19:08, Andrew Cooper wrote:
> Coverity identifies that __get_gfn_type_access() unconditionally writes to its
> type parameter under a number of circumstances.
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: George Dunlap 

> ---
> CC: Jan Beulich 
> CC: Tim Deegan 
> CC: George Dunlap 
> CC: Tamas K Lengyel 
> 
> There is a second complaint that ap2ma and p2ma are used before initialisation
> in the following line, although that is harder to reason about.  I think the
> code is OK...

Well there are paths through __get_gfn_type_access() which don't set the
access value -- namely if p2m is null or
!paging_mode_translate(p2m->domain) (which coverity has no way of knowing).

That probably could use being made more robust at some point.

 -George

> ---
>  xen/arch/x86/mm/mem_sharing.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 47e0820..14952ce 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -870,6 +870,7 @@ int mem_sharing_nominate_page(struct domain *d,
>  unsigned int i;
>  struct p2m_domain *ap2m;
>  mfn_t amfn;
> +p2m_type_t ap2mt;
>  p2m_access_t ap2ma;
>  
>  altp2m_list_lock(d);
> @@ -880,7 +881,7 @@ int mem_sharing_nominate_page(struct domain *d,
>  if ( !ap2m )
>  continue;
>  
> -amfn = get_gfn_type_access(ap2m, gfn, NULL, &ap2ma, 0, NULL);
> +amfn = get_gfn_type_access(ap2m, gfn, &ap2mt, &ap2ma, 0, NULL);
>  if ( mfn_valid(amfn) && (mfn_x(amfn) != mfn_x(mfn) || ap2ma != 
> p2ma) )
>  {
>  altp2m_list_unlock(d);
> 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] OVMF very slow on AMD

2016-07-28 Thread Andrew Cooper
On 28/07/16 16:17, Boris Ostrovsky wrote:
> On 07/28/2016 06:54 AM, Andrew Cooper wrote:
>> On 28/07/16 11:43, George Dunlap wrote:
>>> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>>>  wrote:
 On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>>> I can try to describe how OVMF is setting up the memory.
>> From the start of the day:
>> setup gdt
>> cr0 = 0x4023
> I think this is slightly odd, with bit 30 (cache disable) set. I'd
> suspect that this would affect both Intel and AMD though.
>
> Can you try clearing this bit?
 That works...

 I wonder why it does not appear to affect Intel or KVM.
>>> Are those bits hard-coded, or are they set based on the hardware
>>> that's available?
>>>
>>> Is it possible that the particular combination of CPUID bits presented
>>> by Xen on AMD are causing a different value to be written?
>>>
>>> Or is it possible that the cache disable bit is being ignored (by Xen)
>>> on Intel and KVM?
>> If a guest has no hardware, then it has no reason to actually disable
>> caches.  We should have logic to catch this and avoid actually disabling
>> caches when the guest asks for it.
> Is this really safe to do? Can't a guest decide to disable cache to
> avoid having to deal with coherency in SW?

What SW coherency issue do you think can be solved with disabling the cache?

x86 has strict ordering of writes and reads with respect to each other. 
The only case which can be out of order is reads promoted ahead of
unaliasing writes.
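
(For reference, the standard store-buffering litmus test that captures the
one permitted reordering; a standalone sketch, not Xen code:)

    /* With x == y == 0 initially and the two functions running on different
     * CPUs, x86 permits the outcome r1 == 0 && r2 == 0: each CPU's load may
     * be satisfied before its own earlier store is globally visible.  An
     * mfence between the store and the load on each side forbids that
     * outcome.  All other orderings among ordinary loads and stores are
     * preserved. */
    static volatile int x, y, r1, r2;

    static void cpu0(void) { x = 1; r1 = y; }
    static void cpu1(void) { y = 1; r2 = x; }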

>
> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
> but no corresponding SVM code. Could it be that we need to set gPAT, for
> example?

A better approach would be to find out why ovmf insists on disabling
caches at all.  Even if we optimise the non-PCI-device case in the
hypervisor, a passthrough case will still run like treacle if caches are
disabled.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] nocera1 boot order fixed (was Re: [xen-4.7-testing test] 99713: regressions - trouble: blocked/broken/fail/pass)

2016-07-28 Thread Ian Jackson
Ian Jackson writes ("nocera1 boot order fixed (was Re: [xen-4.7-testing test] 
99713: regressions - trouble: blocked/broken/fail/pass)"):
> Ian Jackson writes ("Re: [xen-4.7-testing test] 99713: regressions - trouble: 
> blocked/broken/fail/pass"):
> > osstest service owner writes ("[xen-4.7-testing test] 99713: regressions - 
> > trouble: blocked/broken/fail/pass"):
> > >  test-amd64-i386-freebsd10-i386  3 host-install(3) broken REGR. vs. 96660
> > 
> > Lots of these.  Something is wrong with nocera1.  I am investigating.
> 
> It had forgotten its boot order.  I've reset it.  I will double-check
> all the other BIOS settings.

FTR: some of the PCI IRQs were different, and the TPM had been
enabled.  I have made nocera1 be like nocera0.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 21/22] xen/arm: p2m: Re-implement p2m_set_mem_access using p2m_{set, get}_entry

2016-07-28 Thread Julien Grall



On 28/07/16 16:04, Razvan Cojocaru wrote:

On 07/28/2016 05:51 PM, Julien Grall wrote:

The function p2m_set_mem_access can be re-implemented using the generic
functions p2m_get_entry and __p2m_set_entry.

Note that because of the implementation of p2m_get_entry, a TLB
invalidation instruction will be issued for each 4KB page. Therefore the
performance of memaccess will be impacted, however the function is now
safe on all the processors.

Also the function apply_p2m_changes is dropped completely as it is not
unused anymore.

Signed-off-by: Julien Grall 
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 

---
I have not run any performance tests with memaccess for now, but I
expect an important and unavoidable impact because of how memaccess
has been designed to work around hardware limitations. Note that it might
be possible to re-work memaccess to work on superpages, but this should
be done in a separate patch.
---
 xen/arch/arm/p2m.c | 329 +++--
 1 file changed, 38 insertions(+), 291 deletions(-)


Thanks for the CC!


Hi Razvan,


This seems to only impact ARM, are there any planned changes for x86
along these lines as well?


The break-before-make sequence is required by the ARM architecture. I
don't know the x86 architecture; you can ask the x86 maintainers whether
this is necessary there.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] OVMF very slow on AMD

2016-07-28 Thread Boris Ostrovsky
On 07/28/2016 06:54 AM, Andrew Cooper wrote:
> On 28/07/16 11:43, George Dunlap wrote:
>> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>>  wrote:
>>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
 On 07/27/2016 07:35 AM, Anthony PERARD wrote:
> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>> I can try to describe how OVMF is setting up the memory.
> From the start of the day:
> setup gdt
> cr0 = 0x4023
 I think this is slightly odd, with bit 30 (cache disable) set. I'd
 suspect that this would affect both Intel and AMD though.

 Can you try clearing this bit?
>>> That works...
>>>
>>> I wonder why it does not appear to affect Intel or KVM.
>> Are those bits hard-coded, or are they set based on the hardware
>> that's available?
>>
>> Is it possible that the particular combination of CPUID bits presented
>> by Xen on AMD are causing a different value to be written?
>>
>> Or is it possible that the cache disable bit is being ignored (by Xen)
>> on Intel and KVM?
> If a guest has no hardware, then it has no reason to actually disable
> caches.  We should have logic to catch this and avoid actually disabling
> caches when the guest asks for it.

Is this really safe to do? Can't a guest decide to disable cache to
avoid having to deal with coherency in SW?

As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
but no corresponding SVM code. Could it be that we need to set gPAT, for
example?

-boris


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] nocera1 boot order fixed (was Re: [xen-4.7-testing test] 99713: regressions - trouble: blocked/broken/fail/pass)

2016-07-28 Thread Ian Jackson
Ian Jackson writes ("Re: [xen-4.7-testing test] 99713: regressions - trouble: 
blocked/broken/fail/pass"):
> osstest service owner writes ("[xen-4.7-testing test] 99713: regressions - 
> trouble: blocked/broken/fail/pass"):
> >  test-amd64-i386-freebsd10-i386  3 host-install(3) broken REGR. vs. 96660
> 
> Lots of these.  Something is wrong with nocera1.  I am investigating.

It had forgotten its boot order.  I've reset it.  I will double-check
all the other BIOS settings.

Some machines do this (some very rarely).  I have no record of either
nocera* having done it before.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 21/22] xen/arm: p2m: Re-implement p2m_set_mem_access using p2m_{set, get}_entry

2016-07-28 Thread Razvan Cojocaru
On 07/28/2016 05:51 PM, Julien Grall wrote:
> The function p2m_set_mem_access can be re-implemented using the generic
> functions p2m_get_entry and __p2m_set_entry.
> 
> Note that because of the implementation of p2m_get_entry, a TLB
> invalidation instruction will be issued for each 4KB page. Therefore the
> performance of memaccess will be impacted, however the function is now
> safe on all the processors.
> 
> Also the function apply_p2m_changes is dropped completely as it is not
> used anymore.
> 
> Signed-off-by: Julien Grall 
> Cc: Razvan Cojocaru 
> Cc: Tamas K Lengyel 
> 
> ---
> I have not run any performance tests with memaccess yet, but I
> expect a significant and unavoidable impact because of how memaccess
> has been designed to work around a hardware limitation. Note that it
> might be possible to re-work memaccess to work on superpages, but this
> should be done in a separate patch.
> ---
>  xen/arch/arm/p2m.c | 329 
> +++--
>  1 file changed, 38 insertions(+), 291 deletions(-)

Thanks for the CC!

This seems to only impact ARM, are there any planned changes for x86
along these lines as well?


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [xen-4.7-testing test] 99713: regressions - trouble: blocked/broken/fail/pass

2016-07-28 Thread Ian Jackson

osstest service owner writes ("[xen-4.7-testing test] 99713: regressions - 
trouble: blocked/broken/fail/pass"):
>  test-amd64-i386-freebsd10-i386  3 host-install(3) broken REGR. vs. 96660

Lots of these.  Something is wrong with nocera1.  I am investigating.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 16/22] xen/arm: p2m: Make p2m_{valid, table, mapping} helpers inline

2016-07-28 Thread Julien Grall
Those helpers are very small and often used. Let the compiler know that
they can be inlined.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d0aba5b..ca2f1b0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -39,7 +39,7 @@ static const unsigned int level_shifts[] =
 static const unsigned int level_orders[] =
 { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static bool_t p2m_valid(lpae_t pte)
+static inline bool_t p2m_valid(lpae_t pte)
 {
 return pte.p2m.valid;
 }
@@ -48,11 +48,11 @@ static bool_t p2m_valid(lpae_t pte)
  * the table bit and therefore these would return the opposite to what
  * you would expect.
  */
-static bool_t p2m_table(lpae_t pte)
+static inline bool_t p2m_table(lpae_t pte)
 {
 return p2m_valid(pte) && pte.p2m.table;
 }
-static bool_t p2m_mapping(lpae_t pte)
+static inline bool_t p2m_mapping(lpae_t pte)
 {
 return p2m_valid(pte) && !pte.p2m.table;
 }
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 13/22] xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry

2016-07-28 Thread Julien Grall
__p2m_lookup is just a wrapper to p2m_get_entry.

Signed-off-by: Julien Grall 
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 

---
It might be possible to rework the memaccess code to take advantage
of all the parameters. I will defer this to the memaccess folks.
---
 xen/arch/arm/p2m.c | 18 --
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8676b9d..9a9c85c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -398,24 +398,13 @@ out:
 return mfn;
 }
 
-/*
- * Lookup the MFN corresponding to a domain's GFN.
- *
- * There are no processor functions to do a stage 2 only lookup therefore we
- * do a a software walk.
- */
-static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
-{
-return p2m_get_entry(&d->arch.p2m, gfn, t, NULL, NULL);
-}
-
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
 mfn_t ret;
 struct p2m_domain *p2m = &d->arch.p2m;
 
 p2m_read_lock(p2m);
-ret = __p2m_lookup(d, gfn, t);
+ret = p2m_get_entry(p2m, gfn, t, NULL, NULL);
 p2m_read_unlock(p2m);
 
 return ret;
@@ -679,7 +668,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
  * No setting was found in the Radix tree. Check if the
  * entry exists in the page-tables.
  */
-mfn_t mfn = __p2m_lookup(d, gfn, NULL);
+mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
 
 if ( mfn_eq(mfn, INVALID_MFN) )
 return -ESRCH;
@@ -1595,6 +1584,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned 
long flag)
 xenmem_access_t xma;
 p2m_type_t t;
 struct page_info *page = NULL;
+struct p2m_domain *p2m = &current->domain->arch.p2m;
 
 rc = gva_to_ipa(gva, &ipa, flag);
 if ( rc < 0 )
@@ -1655,7 +1645,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned 
long flag)
  * We had a mem_access permission limiting the access, but the page type
  * could also be limiting, so we need to check that as well.
  */
-mfn = __p2m_lookup(current->domain, gfn, &t);
+mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
 if ( mfn_eq(mfn, INVALID_MFN) )
 goto err;
 
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 22/22] xen/arm: p2m: Do not handle shattering in p2m_create_table

2016-07-28 Thread Julien Grall
The helper p2m_create_table is only called to create a brand new table.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 51 ++-
 1 file changed, 6 insertions(+), 45 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 16ed393..4aaa96f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -264,8 +264,7 @@ static p2m_access_t p2m_mem_access_radix_get(struct 
p2m_domain *p2m, gfn_t gfn)
 #define GUEST_TABLE_SUPER_PAGE 1
 #define GUEST_TABLE_NORMAL_PAGE 2
 
-static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
-int level_shift);
+static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry);
 
 /*
  * Take the currently mapped table, find the corresponding GFN entry,
@@ -291,7 +290,7 @@ static int p2m_next_level(struct p2m_domain *p2m, bool 
read_only,
 if ( read_only )
 return GUEST_TABLE_MAP_FAILED;
 
-ret = p2m_create_table(p2m, entry, /* not used */ ~0);
+ret = p2m_create_table(p2m, entry);
 if ( ret )
 return GUEST_TABLE_MAP_FAILED;
 }
@@ -557,25 +556,14 @@ static inline void p2m_remove_pte(lpae_t *p, bool 
clean_pte)
 p2m_write_pte(p, pte, clean_pte);
 }
 
-/*
- * Allocate a new page table page and hook it in via the given entry.
- * apply_one_level relies on this returning 0 on success
- * and -ve on failure.
- *
- * If the existing entry is present then it must be a mapping and not
- * a table and it will be shattered into the next level down.
- *
- * level_shift is the number of bits at the level we want to create.
- */
-static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
-int level_shift)
+/* Allocate a new page table page and hook it in via the given entry. */
+static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 {
 struct page_info *page;
 lpae_t *p;
 lpae_t pte;
-int splitting = p2m_valid(*entry);
 
-BUG_ON(p2m_table(*entry));
+ASSERT(!p2m_valid(*entry));
 
 page = alloc_domheap_page(NULL, 0);
 if ( page == NULL )
@@ -584,35 +572,8 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t 
*entry,
 page_list_add(page, &p2m->pages);
 
 p = __map_domain_page(page);
-if ( splitting )
-{
-p2m_type_t t = entry->p2m.type;
-mfn_t mfn = _mfn(entry->p2m.base);
-int i;
 
-/*
- * We are either splitting a first level 1G page into 512 second level
- * 2M pages, or a second level 2M page into 512 third level 4K pages.
- */
- for ( i=0 ; i < LPAE_ENTRIES; i++ )
- {
- pte = mfn_to_p2m_entry(mfn_add(mfn, i << (level_shift - 
LPAE_SHIFT)),
-t, p2m->default_access);
-
- /*
-  * First and second level super pages set p2m.table = 0, but
-  * third level entries set table = 1.
-  */
- if ( level_shift - LPAE_SHIFT )
- pte.p2m.table = 0;
-
- write_pte(&p[i], pte);
- }
-
- page->u.inuse.p2m_refcount = LPAE_ENTRIES;
-}
-else
-clear_page(p);
+clear_page(p);
 
 if ( p2m->clean_pte )
 clean_dcache_va_range(p, PAGE_SIZE);
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 18/22] xen/arm: p2m: Introduce p2m_set_entry and __p2m_set_entry

2016-07-28 Thread Julien Grall
The ARM architecture mandates to use of a break-before-make sequence
when changing translation entries if the page table is shared between
multiple CPUs whenever a valid entry is replaced by another valid entry
(see D4.7.1 in ARM DDI 0487A.j for more details).

The break-before-make sequence can be divided in the following steps:
1) Invalidate the old entry in the page table
2) Issue a TLB invalidation instruction for the address associated
to this entry
3) Write the new entry
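
To make the sequence concrete, here is a minimal self-contained sketch
(not part of the patch; pte_t, write_entry() and tlb_invalidate() are
made-up stand-ins for lpae_t, p2m_write_pte() and the TLBI plus barriers
that the hypervisor issues on real hardware):

#include <stdint.h>

typedef struct { uint64_t bits; } pte_t;

static void write_entry(volatile pte_t *p, pte_t v) { *p = v; }
static void tlb_invalidate(void) { /* TLBI + barriers on real hardware */ }

static void set_entry_bbm(volatile pte_t *slot, pte_t new_pte)
{
    const pte_t empty = { 0 };

    write_entry(slot, empty);    /* 1) invalidate the old entry */
    tlb_invalidate();            /* 2) flush the TLB for that address */
    write_entry(slot, new_pte);  /* 3) write the new, valid entry */
}

int main(void)
{
    volatile pte_t slot = { 0 };
    pte_t valid = { 0x3 };       /* arbitrary "valid" bits for the example */

    set_entry_bbm(&slot, valid);
    return 0;
}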

The current P2M code implemented in apply_one_level does not respect
this sequence and may result in broken coherency on some processors.

Adapting the current implementation to use the break-before-make
sequence would imply some code duplication and more TLBs invalidation
than necessary. For instance, if we are replacing a 4KB page and the
current mapping in the P2M is using a 1GB superpage, the following steps
will happen:
1) Shatter the 1GB superpage into a series of 2MB superpages
2) Shatter the 2MB superpage into a series of 4KB pages
3) Replace the 4KB page

As the current implementation shatters while descending and installs
the mapping, Xen would need to issue 3 TLB invalidation instructions,
which is clearly inefficient.

Furthermore, all the operations which modify the page table are using
the same skeleton. It is more complicated to maintain different code paths
than having a generic function that sets an entry and takes care of the
break-before-make sequence.

The new implementation is based on the x86 EPT one which, I think,
fits quite well for the break-before-make sequence whilst keeping
the code simple.

The main function of the new implementation is __p2m_set_entry. It will
only work on mappings that are aligned to a block entry in the page table
(i.e. 1GB, 2MB, 4KB when using a 4KB granularity).

Another function, p2m_set_entry, is provided to break down a region
into mappings that are aligned to a block entry.

Note that to keep this patch "small", no callers of those functions are
added in this patch (they will be added in follow-up patches).

Signed-off-by: Julien Grall 

---
I need to find the impact of this new implementation on ARM32 because
the domheap is not always mapped. This means that Xen needs to map/unmap
the page associated with the page table every time. It might be possible
to re-use some caching structure as the current implementation does,
or rework the way the domheap is mapped/unmapped.

Also, this code still contains a few TODOs, mostly to add sanity
checks and a few optimizations. The IOMMU is not yet supported.
---
 xen/arch/arm/p2m.c | 335 +
 1 file changed, 335 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c93e554..297b176 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -750,6 +750,341 @@ static void p2m_put_l3_page(mfn_t mfn, p2m_type_t type)
 }
 }
 
+#if 0
+/* Free lpae sub-tree behind an entry */
+static void p2m_free_entry(struct p2m_domain *p2m,
+   lpae_t entry, unsigned int level)
+{
+unsigned int i;
+lpae_t *table;
+mfn_t mfn;
+
+/* Nothing to do if the entry is invalid or a super-page */
+if ( !p2m_valid(entry) || p2m_is_superpage(entry, level) )
+return;
+
+if ( level == 3 )
+{
+p2m_put_l3_page(_mfn(entry.p2m.base), entry.p2m.type);
+return;
+}
+
+table = map_domain_page(_mfn(entry.p2m.base));
+for ( i = 0; i < LPAE_ENTRIES; i++ )
+p2m_free_entry(p2m, *(table + i), level + 1);
+
+unmap_domain_page(table);
+
+/*
+ * Make sure all the references in the TLB have been removed before
+ * freeing the intermediate page table.
+ * XXX: Should we defer the free of the page table to avoid the
+ * flush?
+ */
+if ( p2m->need_flush )
+p2m_flush_tlb_sync(p2m);
+
+mfn = _mfn(entry.p2m.base);
+ASSERT(mfn_valid(mfn_x(mfn)));
+
+free_domheap_page(mfn_to_page(mfn_x(mfn)));
+}
+
+static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
+unsigned int level, unsigned int target,
+const unsigned int *offsets)
+{
+struct page_info *page;
+unsigned int i;
+lpae_t pte, *table;
+bool rv = true;
+
+/* Convenience aliases */
+p2m_type_t t = entry->p2m.type;
+mfn_t mfn = _mfn(entry->p2m.base);
+
+/* Convenience aliases */
+unsigned int next_level = level + 1;
+unsigned int level_order = level_orders[next_level];
+
+/*
+ * This should only be called with target != level and the entry is
+ * a superpage.
+ */
+ASSERT(level < target);
+ASSERT(p2m_is_superpage(*entry, level));
+
+page = alloc_domheap_page(NULL, 0);
+if ( !page )
+return false;
+
+page_list_add(page, &p2m->pages);
+table = __map_domain_page(page);
+
+/*
+ * We are either 

[Xen-devel] [RFC 21/22] xen/arm: p2m: Re-implement p2m_set_mem_access using p2m_{set, get}_entry

2016-07-28 Thread Julien Grall
The function p2m_set_mem_access can be re-implemented using the generic
functions p2m_get_entry and __p2m_set_entry.

Note that because of the implementation of p2m_get_entry, a TLB
invalidation instruction will be issued for each 4KB page. Therefore the
performance of memaccess will be impacted, however the function is now
safe on all the processors.

Also the function apply_p2m_changes is dropped completely as it is not
used anymore.

Signed-off-by: Julien Grall 
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 

---
I have not run any performance tests with memaccess yet, but I
expect a significant and unavoidable impact because of how memaccess
has been designed to work around a hardware limitation. Note that it
might be possible to re-work memaccess to work on superpages, but this
should be done in a separate patch.
---
 xen/arch/arm/p2m.c | 329 +++--
 1 file changed, 38 insertions(+), 291 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 707c7be..16ed393 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1081,295 +1081,6 @@ static int p2m_set_entry(struct p2m_domain *p2m,
 return rc;
 }
 
-#define P2M_ONE_DESCEND0
-#define P2M_ONE_PROGRESS_NOP   0x1
-#define P2M_ONE_PROGRESS   0x10
-
-static int p2m_shatter_page(struct p2m_domain *p2m,
-lpae_t *entry,
-unsigned int level)
-{
-const paddr_t level_shift = level_shifts[level];
-int rc = p2m_create_table(p2m, entry, level_shift - PAGE_SHIFT);
-
-if ( !rc )
-{
-p2m->stats.shattered[level]++;
-p2m->stats.mappings[level]--;
-p2m->stats.mappings[level+1] += LPAE_ENTRIES;
-}
-
-return rc;
-}
-
-/*
- * 0   == (P2M_ONE_DESCEND) continue to descend the tree
- * +ve == (P2M_ONE_PROGRESS_*) handled at this level, continue, flush,
- *entry, addr and maddr updated.  Return value is an
- *indication of the amount of work done (for preemption).
- * -ve == (-Exxx) error.
- */
-static int apply_one_level(struct domain *d,
-   lpae_t *entry,
-   unsigned int level,
-   enum p2m_operation op,
-   paddr_t start_gpaddr,
-   paddr_t end_gpaddr,
-   paddr_t *addr,
-   paddr_t *maddr,
-   bool_t *flush,
-   p2m_type_t t,
-   p2m_access_t a)
-{
-const paddr_t level_size = level_sizes[level];
-
-struct p2m_domain *p2m = &d->arch.p2m;
-lpae_t pte;
-const lpae_t orig_pte = *entry;
-int rc;
-
-BUG_ON(level > 3);
-
-switch ( op )
-{
-case MEMACCESS:
-if ( level < 3 )
-{
-if ( !p2m_valid(orig_pte) )
-{
-*addr += level_size;
-return P2M_ONE_PROGRESS_NOP;
-}
-
-/* Shatter large pages as we descend */
-if ( p2m_mapping(orig_pte) )
-{
-rc = p2m_shatter_page(p2m, entry, level);
-if ( rc < 0 )
-return rc;
-} /* else: an existing table mapping -> descend */
-
-return P2M_ONE_DESCEND;
-}
-else
-{
-pte = orig_pte;
-
-if ( p2m_valid(pte) )
-{
-rc = p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)),
-  a);
-if ( rc < 0 )
-return rc;
-
-p2m_set_permission(, pte.p2m.type, a);
-p2m_write_pte(entry, pte, p2m->clean_pte);
-}
-
-*addr += level_size;
-*flush = true;
-return P2M_ONE_PROGRESS;
-}
-}
-
-BUG(); /* Should never get here */
-}
-
-/*
- * The page is only used by the P2M code which is protected by the p2m->lock.
- * So we can avoid to use atomic helpers.
- */
-static void update_reference_mapping(struct page_info *page,
- lpae_t old_entry,
- lpae_t new_entry)
-{
-if ( p2m_valid(old_entry) && !p2m_valid(new_entry) )
-page->u.inuse.p2m_refcount--;
-else if ( !p2m_valid(old_entry) && p2m_valid(new_entry) )
-page->u.inuse.p2m_refcount++;
-}
-
-static int apply_p2m_changes(struct domain *d,
- enum p2m_operation op,
- gfn_t sgfn,
- unsigned long nr,
- mfn_t smfn,
- uint32_t mask,
- p2m_type_t t,
- p2m_access_t a)
-{
-paddr_t start_gpaddr = pfn_to_paddr(gfn_x(sgfn));
-paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
-paddr_t maddr = 

[Xen-devel] [RFC 20/22] xen/arm: p2m: Re-implement p2m_insert_mapping using p2m_set_entry

2016-07-28 Thread Julien Grall
The function p2m_insert_mapping can be re-implemented using the generic
function p2m_set_entry.

Note that the mapping is not reverted anymore if Xen fails to insert a
mapping. This was added to ensure MMIO regions are not kept half-mapped
in case of failure and to follow the x86 counterpart. This was removed
on the x86 side by commit c3c756bd "x86/p2m: use large pages for MMIO
mappings" and I think we should let the caller take care of it.

Finally drop the operation INSERT in apply_* as nobody is using it
anymore. Note that the functions could have been dropped in one go at the
end; however, I find it easier to drop the operations one by one, avoiding
a big deletion in the patch that converts the last operation.

Signed-off-by: Julien Grall 

---
Whilst there are no safety checks on what is replaced in the P2M
(such as foreign/grant mappings, as x86 does), we may want to add them
to ensure the guest is not doing something dumb. Any opinions?
---
 xen/arch/arm/p2m.c | 148 +
 1 file changed, 12 insertions(+), 136 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0920222..707c7be 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -724,7 +724,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, 
gfn_t gfn,
 }
 
 enum p2m_operation {
-INSERT,
 MEMACCESS,
 };
 
@@ -1082,41 +1081,6 @@ static int p2m_set_entry(struct p2m_domain *p2m,
 return rc;
 }
 
-/*
- * Returns true if start_gpaddr..end_gpaddr contains at least one
- * suitably aligned level_size mappping of maddr.
- *
- * So long as the range is large enough the end_gpaddr need not be
- * aligned (callers should create one superpage mapping based on this
- * result and then call this again on the new range, eventually the
- * slop at the end will cause this function to return false).
- */
-static bool_t is_mapping_aligned(const paddr_t start_gpaddr,
- const paddr_t end_gpaddr,
- const paddr_t maddr,
- const paddr_t level_size)
-{
-const paddr_t level_mask = level_size - 1;
-
-/* No hardware superpages at level 0 */
-if ( level_size == ZEROETH_SIZE )
-return false;
-
-/*
- * A range smaller than the size of a superpage at this level
- * cannot be superpage aligned.
- */
-if ( ( end_gpaddr - start_gpaddr ) < level_size - 1 )
-return false;
-
-/* Both the gpaddr and maddr must be aligned */
-if ( start_gpaddr & level_mask )
-return false;
-if ( maddr & level_mask )
-return false;
-return true;
-}
-
 #define P2M_ONE_DESCEND0
 #define P2M_ONE_PROGRESS_NOP   0x1
 #define P2M_ONE_PROGRESS   0x10
@@ -1168,81 +1132,6 @@ static int apply_one_level(struct domain *d,
 
 switch ( op )
 {
-case INSERT:
-if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
-   /*
-* We do not handle replacing an existing table with a superpage
-* or when mem_access is in use.
-*/
- (level == 3 || (!p2m_table(orig_pte) && 
!p2m->mem_access_enabled)) )
-{
-rc = p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)), a);
-if ( rc < 0 )
-return rc;
-
-/* New mapping is superpage aligned, make it */
-pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
-if ( level < 3 )
-pte.p2m.table = 0; /* Superpage entry */
-
-p2m_write_pte(entry, pte, p2m->clean_pte);
-
-*flush |= p2m_valid(orig_pte);
-
-*addr += level_size;
-*maddr += level_size;
-
-if ( p2m_valid(orig_pte) )
-{
-/*
- * We can't currently get here for an existing table
- * mapping, since we don't handle replacing an
- * existing table with a superpage. If we did we would
- * need to handle freeing (and accounting) for the bit
- * of the p2m tree which we would be about to lop off.
- */
-BUG_ON(level < 3 && p2m_table(orig_pte));
-if ( level == 3 )
-p2m_put_l3_page(_mfn(orig_pte.p2m.base),
-orig_pte.p2m.type);
-}
-else /* New mapping */
-p2m->stats.mappings[level]++;
-
-return P2M_ONE_PROGRESS;
-}
-else
-{
-/* New mapping is not superpage aligned, create a new table entry 
*/
-
-/* L3 is always suitably aligned for mapping (handled, above) */
-BUG_ON(level == 3);
-
-/* Not present -> create table entry and descend */
-if ( !p2m_valid(orig_pte) )
-{
-rc = p2m_create_table(p2m, entry, 0);
-if ( 

[Xen-devel] [RFC 11/22] xen/arm: p2m: Introduce p2m_get_root_pointer and use it in __p2m_lookup

2016-07-28 Thread Julien Grall
Mapping the root table is always done the same way. To avoid duplicating
the code in a later patch, move it into a separate helper.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 53 +++--
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ea582c8..d4a4b62 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -204,6 +204,37 @@ static void p2m_flush_tlb_sync(struct p2m_domain *p2m)
 }
 
 /*
+ * Find and map the root page table. The caller is responsible for
+ * unmapping the table.
+ *
+ * The function will return NULL if the offset of the root table is
+ * invalid.
+ */
+static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m,
+gfn_t gfn)
+{
+unsigned int root_table;
+
+if ( P2M_ROOT_PAGES == 1 )
+return __map_domain_page(p2m->root);
+
+/*
+ * Concatenated root-level tables. The table number will be the
+ * offset at the previous level. It is not possible to
+ * concatenate a level-0 root.
+ */
+ASSERT(P2M_ROOT_LEVEL > 0);
+
+root_table = gfn_x(gfn) >>  (level_shifts[P2M_ROOT_LEVEL - 1] - 
PAGE_SHIFT);
+root_table &= LPAE_ENTRY_MASK;
+
+if ( root_table >= P2M_ROOT_PAGES )
+return NULL;
+
+return __map_domain_page(p2m->root + root_table);
+}
+
+/*
  * Lookup the MFN corresponding to a domain's GFN.
  *
  * There are no processor functions to do a stage 2 only lookup therefore we
@@ -226,7 +257,7 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, 
p2m_type_t *t)
 mfn_t mfn = INVALID_MFN;
 paddr_t mask = 0;
 p2m_type_t _t;
-unsigned int level, root_table;
+unsigned int level;
 
 ASSERT(p2m_is_locked(p2m));
 BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
@@ -236,22 +267,9 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, 
p2m_type_t *t)
 
 *t = p2m_invalid;
 
-if ( P2M_ROOT_PAGES > 1 )
-{
-/*
- * Concatenated root-level tables. The table number will be
- * the offset at the previous level. It is not possible to
- * concatenate a level-0 root.
- */
-ASSERT(P2M_ROOT_LEVEL > 0);
-root_table = offsets[P2M_ROOT_LEVEL - 1];
-if ( root_table >= P2M_ROOT_PAGES )
-goto err;
-}
-else
-root_table = 0;
-
-map = __map_domain_page(p2m->root + root_table);
+map = p2m_get_root_pointer(p2m, gfn);
+if ( !map )
+return INVALID_MFN;
 
 ASSERT(P2M_ROOT_LEVEL < 4);
 
@@ -286,7 +304,6 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, 
p2m_type_t *t)
 *t = pte.p2m.type;
 }
 
-err:
 return mfn;
 }
 
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 07/22] xen/arm: p2m: Rework p2m_put_l3_page

2016-07-28 Thread Julien Grall
Modify the prototype to directly pass the MFN and the type as
parameters. This will be useful later when we do not have the entry in
hand.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 17 +++--
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index aecdd1e..6b29cf0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -584,10 +584,8 @@ enum p2m_operation {
  * TODO: Handle superpages, for now we only take special references for leaf
  * pages (specifically foreign ones, which can't be super mapped today).
  */
-static void p2m_put_l3_page(const lpae_t pte)
+static void p2m_put_l3_page(mfn_t mfn, p2m_type_t type)
 {
-ASSERT(p2m_valid(pte));
-
 /*
  * TODO: Handle other p2m types
  *
@@ -595,12 +593,10 @@ static void p2m_put_l3_page(const lpae_t pte)
  * flush the TLBs if the page is reallocated before the end of
  * this loop.
  */
-if ( p2m_is_foreign(pte.p2m.type) )
+if ( p2m_is_foreign(type) )
 {
-unsigned long mfn = pte.p2m.base;
-
-ASSERT(mfn_valid(mfn));
-put_page(mfn_to_page(mfn));
+ASSERT(mfn_valid(mfn_x(mfn)));
+put_page(mfn_to_page(mfn_x(mfn)));
 }
 }
 
@@ -734,7 +730,8 @@ static int apply_one_level(struct domain *d,
  */
 BUG_ON(level < 3 && p2m_table(orig_pte));
 if ( level == 3 )
-p2m_put_l3_page(orig_pte);
+p2m_put_l3_page(_mfn(orig_pte.p2m.base),
+orig_pte.p2m.type);
 }
 else /* New mapping */
 p2m->stats.mappings[level]++;
@@ -834,7 +831,7 @@ static int apply_one_level(struct domain *d,
 p2m->stats.mappings[level]--;
 
 if ( level == 3 )
-p2m_put_l3_page(orig_pte);
+p2m_put_l3_page(_mfn(orig_pte.p2m.base), orig_pte.p2m.type);
 
 /*
  * This is still a single pte write, no matter the level, so no need to
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 19/22] xen/arm: p2m: Re-implement p2m_remove_mapping using p2m_set_entry

2016-07-28 Thread Julien Grall
The function p2m_remove_mapping can be re-implemented using the generic
function p2m_set_entry.

Also drop the operation REMOVE in apply_* as nobody is using it anymore.
Note that the functions could have been dropped in one go at the end;
however, I find it easier to drop the operations one by one, avoiding a
big deletion in the patch that converts the last operation.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 127 ++---
 1 file changed, 13 insertions(+), 114 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 297b176..0920222 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -725,7 +725,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, 
gfn_t gfn,
 
 enum p2m_operation {
 INSERT,
-REMOVE,
 MEMACCESS,
 };
 
@@ -750,7 +749,6 @@ static void p2m_put_l3_page(mfn_t mfn, p2m_type_t type)
 }
 }
 
-#if 0
 /* Free lpae sub-tree behind an entry */
 static void p2m_free_entry(struct p2m_domain *p2m,
lpae_t entry, unsigned int level)
@@ -1083,7 +1081,6 @@ static int p2m_set_entry(struct p2m_domain *p2m,
 
 return rc;
 }
-#endif
 
 /*
  * Returns true if start_gpaddr..end_gpaddr contains at least one
@@ -1161,7 +1158,6 @@ static int apply_one_level(struct domain *d,
p2m_access_t a)
 {
 const paddr_t level_size = level_sizes[level];
-const paddr_t level_mask = level_masks[level];
 
 struct p2m_domain *p2m = >arch.p2m;
 lpae_t pte;
@@ -1247,74 +1243,6 @@ static int apply_one_level(struct domain *d,
 
 break;
 
-case REMOVE:
-if ( !p2m_valid(orig_pte) )
-{
-/* Progress up to next boundary */
-*addr = (*addr + level_size) & level_mask;
-*maddr = (*maddr + level_size) & level_mask;
-return P2M_ONE_PROGRESS_NOP;
-}
-
-if ( level < 3 )
-{
-if ( p2m_table(orig_pte) )
-return P2M_ONE_DESCEND;
-
-if ( op == REMOVE &&
- !is_mapping_aligned(*addr, end_gpaddr,
- 0, /* maddr doesn't matter for remove */
- level_size) )
-{
-/*
- * Removing a mapping from the middle of a superpage. Shatter
- * and descend.
- */
-*flush = true;
-rc = p2m_shatter_page(p2m, entry, level);
-if ( rc < 0 )
-return rc;
-
-return P2M_ONE_DESCEND;
-}
-}
-
-/*
- * Ensure that the guest address addr currently being
- * handled (that is in the range given as argument to
- * this function) is actually mapped to the corresponding
- * machine address in the specified range. maddr here is
- * the machine address given to the function, while
- * orig_pte.p2m.base is the machine frame number actually
- * mapped to the guest address: check if the two correspond.
- */
- if ( op == REMOVE &&
-  pfn_to_paddr(orig_pte.p2m.base) != *maddr )
- printk(XENLOG_G_WARNING
-"p2m_remove dom%d: mapping at %"PRIpaddr" is of maddr 
%"PRIpaddr" not %"PRIpaddr" as expected\n",
-d->domain_id, *addr, pfn_to_paddr(orig_pte.p2m.base),
-*maddr);
-
-*flush = true;
-
-p2m_remove_pte(entry, p2m->clean_pte);
-p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)),
- p2m_access_rwx);
-
-*addr += level_size;
-*maddr += level_size;
-
-p2m->stats.mappings[level]--;
-
-if ( level == 3 )
-p2m_put_l3_page(_mfn(orig_pte.p2m.base), orig_pte.p2m.type);
-
-/*
- * This is still a single pte write, no matter the level, so no need to
- * scale.
- */
-return P2M_ONE_PROGRESS;
-
 case MEMACCESS:
 if ( level < 3 )
 {
@@ -1526,43 +1454,6 @@ static int apply_p2m_changes(struct domain *d,
 }
 
 BUG_ON(level > 3);
-
-if ( op == REMOVE )
-{
-for ( ; level > P2M_ROOT_LEVEL; level-- )
-{
-lpae_t old_entry;
-lpae_t *entry;
-unsigned int offset;
-
-pg = pages[level];
-
-/*
- * No need to try the previous level if the current one
- * still contains some mappings.
- */
-if ( pg->u.inuse.p2m_refcount )
-break;
-
-offset = offsets[level - 1];
-entry = [level - 1][offset];
-old_entry = *entry;
-
-page_list_del(pg, &p2m->pages);
-
-p2m_remove_pte(entry, p2m->clean_pte);
-
-

[Xen-devel] [RFC 14/22] xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry

2016-07-28 Thread Julien Grall
The function p2m_cache_flush can be re-implemented using the generic
function p2m_get_entry by iterating over the range and using the mapping
order given by the callee.

As in the current implementation, no preemption is implemented, although
the comment in the current code claims it. As the function is called by
a DOMCTL with a region of 1GB maximum, I think the preemption can be
left unimplemented for now.

Finally drop the operation CACHEFLUSH in apply_one_level as nobody is
using it anymore. Note that the function could have been dropped in one
go at the end; however, I find it easier to drop the operations one by
one, avoiding a big deletion in the patch that converts the last operation.

Signed-off-by: Julien Grall 

---
The loop pattern will be very similar for the relinquish function. It
might be possible to extract it into a separate function.
---
 xen/arch/arm/p2m.c | 67 +++---
 1 file changed, 34 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9a9c85c..e7697bb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -722,7 +722,6 @@ enum p2m_operation {
 INSERT,
 REMOVE,
 RELINQUISH,
-CACHEFLUSH,
 MEMACCESS,
 };
 
@@ -978,36 +977,6 @@ static int apply_one_level(struct domain *d,
  */
 return P2M_ONE_PROGRESS;
 
-case CACHEFLUSH:
-if ( !p2m_valid(orig_pte) )
-{
-*addr = (*addr + level_size) & level_mask;
-return P2M_ONE_PROGRESS_NOP;
-}
-
-if ( level < 3 && p2m_table(orig_pte) )
-return P2M_ONE_DESCEND;
-
-/*
- * could flush up to the next superpage boundary, but would
- * need to be careful about preemption, so just do one 4K page
- * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
- * continue to loop over the rest of the range.
- */
-if ( p2m_is_ram(orig_pte.p2m.type) )
-{
-unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
-flush_page_to_ram(orig_pte.p2m.base + offset);
-
-*addr += PAGE_SIZE;
-return P2M_ONE_PROGRESS;
-}
-else
-{
-*addr += PAGE_SIZE;
-return P2M_ONE_PROGRESS_NOP;
-}
-
 case MEMACCESS:
 if ( level < 3 )
 {
@@ -1555,12 +1524,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, 
unsigned long nr)
 {
 struct p2m_domain *p2m = >arch.p2m;
 gfn_t end = gfn_add(start, nr);
+p2m_type_t t;
+unsigned int order;
 
 start = gfn_max(start, p2m->lowest_mapped_gfn);
 end = gfn_min(end, p2m->max_mapped_gfn);
 
-return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
- 0, p2m_invalid, d->arch.p2m.default_access);
+/* XXX: Should we use write lock here? */
+p2m_read_lock(p2m);
+
+for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
+{
+mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
+
+/* Skip hole and non-RAM page */
+if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
+{
+/*
+ * the order corresponds to the order of the mapping in the
+ * page table. so we need to align the gfn before
+ * incrementing.
+ */
+start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
+continue;
+}
+
+/*
+ * Could flush up to the next superpage boundary, but we would
+ * need to be careful about preemption, so just do one 4K page
+ * now.
+ * XXX: Implement preemption.
+ */
+flush_page_to_ram(mfn_x(mfn));
+order = 0;
+}
+
+p2m_read_unlock(p2m);
+
+return 0;
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 17/22] xen/arm: p2m: Introduce a helper to check if an entry is a superpage

2016-07-28 Thread Julien Grall
Use the level and the entry to know whether an entry is a superpage.
A superpage can only happen below level 3.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ca2f1b0..c93e554 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -57,6 +57,11 @@ static inline bool_t p2m_mapping(lpae_t pte)
 return p2m_valid(pte) && !pte.p2m.table;
 }
 
+static inline bool_t p2m_is_superpage(lpae_t pte, unsigned int level)
+{
+return (level < 3) && p2m_mapping(pte);
+}
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
 write_lock(>lock);
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 15/22] xen/arm: p2m: Re-implement relinquish_p2m_mapping using p2m_get_entry

2016-07-28 Thread Julien Grall
The current implementation of relinquish_p2m_mapping modifies the
page table to erase the entries one by one. However, this is not necessary
because the domain is not running anymore; skipping it will therefore
speed up the domain destruction.

The function relinquish_p2m_mapping can be re-implemented using
p2m_get_entry by iterating over the range mapped and using the mapping
order given by the callee.

Given that the preemption check was chosen arbitrarily, it is now done
every 512 iterations, meaning that Xen may check more often whether the
function should be preempted when there are no mappings.

Finally drop the operation RELINQUISH in apply_* as nobody is using it
anymore. Note that the functions could have been dropped in one go at
the end; however, I find it easier to drop the operations one by one,
avoiding a big deletion in the patch that removes the last operation.

Signed-off-by: Julien Grall 

---
Further investigation needs to be done before applying this patch to
check if someone could take advantage of this change (such as
modifying an entry which was relinquished).
---
 xen/arch/arm/p2m.c | 70 --
 1 file changed, 52 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e7697bb..d0aba5b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -721,7 +721,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, 
gfn_t gfn,
 enum p2m_operation {
 INSERT,
 REMOVE,
-RELINQUISH,
 MEMACCESS,
 };
 
@@ -908,7 +907,6 @@ static int apply_one_level(struct domain *d,
 
 break;
 
-case RELINQUISH:
 case REMOVE:
 if ( !p2m_valid(orig_pte) )
 {
@@ -1092,17 +1090,6 @@ static int apply_p2m_changes(struct domain *d,
 {
 switch ( op )
 {
-case RELINQUISH:
-/*
- * Arbitrarily, preempt every 512 operations or 8192 nops.
- * 512*P2M_ONE_PROGRESS == 8192*P2M_ONE_PROGRESS_NOP == 0x2000
- * This is set in preempt_count_limit.
- *
- */
-p2m->lowest_mapped_gfn = _gfn(addr >> PAGE_SHIFT);
-rc = -ERESTART;
-goto out;
-
 case MEMACCESS:
 {
 /*
@@ -1508,16 +1495,63 @@ int p2m_init(struct domain *d)
 return rc;
 }
 
+/*
+ * The function will go through the p2m and remove page references when
+ * required.
+ * The mappings are left intact in the p2m. This is fine because the
+ * domain will never run at that point.
+ *
+ * XXX: Check what it means for other parts (such as lookup)
+ */
 int relinquish_p2m_mapping(struct domain *d)
 {
 struct p2m_domain *p2m = &d->arch.p2m;
-unsigned long nr;
+unsigned long count = 0;
+p2m_type_t t;
+int rc = 0;
+unsigned int order;
+
+/* Convenience alias */
+gfn_t start = p2m->lowest_mapped_gfn;
+gfn_t end = p2m->max_mapped_gfn;
 
-nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
+p2m_write_lock(p2m);
 
-return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
- INVALID_MFN, 0, p2m_invalid,
- d->arch.p2m.default_access);
+for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
+{
+mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
+
+count++;
+/*
+ * Arbitrarily preempt every 512 iterations.
+ */
+if ( !(count % 512) && hypercall_preempt_check() )
+{
+rc = -ERESTART;
+break;
+}
+
+/* Skip hole and any superpage */
+if ( mfn_eq(mfn, INVALID_MFN) || order != 0 )
+/*
+ * The order corresponds to the order of the mapping in the
+ * page table. So we need to align the GFN before
+ * incrementing.
+ */
+start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
+else
+p2m_put_l3_page(mfn, t);
+}
+
+/*
+ * Update lowest_mapped_gfn so on the next call we still start where
+ * we stopped.
+ */
+p2m->lowest_mapped_gfn = start;
+
+p2m_write_unlock(p2m);
+
+return rc;
 }
 
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 08/22] xen/arm: p2m: Invalidate the TLBs when write unlocking the p2m

2016-07-28 Thread Julien Grall
Sometimes the invalidation of the TLBs can be deferred until the p2m is
unlocked. This is for instance the case when multiple mappings are
removed. In other cases, such as shattering a superpage, an immediate
flush is required.

Keep track whether a flush is needed directly in the p2m_domain structure
to allow serializing multiple changes. The TLBs will be invalidated when
write unlocking the p2m if necessary.

Also a new helper, p2m_flush_sync, has been introduced to force a
synchronous TLB invalidation.

Finally, replace the call to p2m_flush_tlb by p2m_flush_tlb_sync in
apply_p2m_changes.

Note this patch is not useful on its own today; however, follow-up
patches will take advantage of it.
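
As an illustration of how a caller is expected to benefit from this, a
made-up toy model (not part of the patch; only the need_flush flag and
the flush-on-write-unlock behaviour follow the real code):

#include <stdbool.h>
#include <stdio.h>

struct p2m {
    bool need_flush;
    /* the real structure also holds the rwlock and the page tables */
};

static void p2m_flush_tlb(struct p2m *p2m)
{
    (void)p2m;
    puts("TLB invalidation issued once for the whole batch");
}

static void p2m_write_unlock(struct p2m *p2m)
{
    if ( p2m->need_flush )
    {
        p2m->need_flush = false;
        p2m_flush_tlb(p2m);
    }
    /* write_unlock(&p2m->lock) would happen here */
}

int main(void)
{
    struct p2m p2m = { .need_flush = false };
    int i;

    /* A caller removing many mappings only marks the flush as pending... */
    for ( i = 0; i < 512; i++ )
        p2m.need_flush = true;   /* stand-in for removing one entry */

    /* ...and a single deferred flush happens when the lock is dropped. */
    p2m_write_unlock(&p2m);
    return 0;
}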

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c| 33 -
 xen/include/asm-arm/p2m.h | 11 +++
 2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6b29cf0..a6dce0c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,8 +52,21 @@ static inline void p2m_write_lock(struct p2m_domain *p2m)
 write_lock(&p2m->lock);
 }
 
+static void p2m_flush_tlb(struct p2m_domain *p2m);
+
 static inline void p2m_write_unlock(struct p2m_domain *p2m)
 {
+if ( p2m->need_flush )
+{
+p2m->need_flush = false;
+/*
+ * The final flush is done with the P2M write lock taken to
+ * to avoid someone else modify the P2M before the TLB
+ * invalidation has completed.
+ */
+p2m_flush_tlb(p2m);
+}
+
 write_unlock(&p2m->lock);
 }
 
@@ -72,6 +85,11 @@ static inline int p2m_is_locked(struct p2m_domain *p2m)
 return rw_is_locked(&p2m->lock);
 }
 
+static inline int p2m_is_write_locked(struct p2m_domain *p2m)
+{
+return rw_is_write_locked(&p2m->lock);
+}
+
 void p2m_dump_info(struct domain *d)
 {
 struct p2m_domain *p2m = &d->arch.p2m;
@@ -165,6 +183,19 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
 }
 
 /*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_flush_tlb_sync(struct p2m_domain *p2m)
+{
+ASSERT(p2m_is_write_locked(p2m));
+
+p2m_flush_tlb(p2m);
+p2m->need_flush = false;
+}
+
+/*
  * Lookup the MFN corresponding to a domain's GFN.
  *
  * There are no processor functions to do a stage 2 only lookup therefore we
@@ -1142,7 +1173,7 @@ static int apply_p2m_changes(struct domain *d,
 out:
 if ( flush )
 {
-p2m_flush_tlb(&d->arch.p2m);
+p2m_flush_tlb_sync(&d->arch.p2m);
 ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
 if ( !rc )
 rc = ret;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 03bfd5e..e6be3ea 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -51,6 +51,17 @@ struct p2m_domain {
 /* Indicate if it is required to clean the cache when writing an entry */
 bool_t clean_pte;
 
+/*
+ * P2M updates may require TLBs to be flushed (invalidated).
+ *
+ * Flushes may be deferred by setting 'need_flush' and then flushing
+ * when the p2m write lock is released.
+ *
+ * If an immediate flush is required (e.g, if a super page is
+ * shattered), call p2m_tlb_flush_sync().
+ */
+bool need_flush;
+
 /* Gather some statistics for information purposes only */
 struct {
 /* Number of mappings at each p2m tree level */
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 00/22] xen/arm: Rework the P2M code to follow break-before-make sequence

2016-07-28 Thread Julien Grall
Hello all,

The ARM architecture mandates the use of a break-before-make sequence when
changing translation entries if the page table is shared between multiple
CPUs whenever a valid entry is replaced by another valid entry (see D4.7.1
in ARM DDI 0487A.j for more details).

The current P2M code does not respect this sequence and may result in
broken coherency on some processors.

Adapting the current implementation to use the break-before-make sequence
would imply some code duplication and more TLB invalidations than necessary.
For instance, if we are replacing a 4KB page and the current mapping in
the P2M is using a 1GB superpage, the following steps will happen:
1) Shatter the 1GB superpage into a series of 2MB superpages
2) Shatter the 2MB superpage into a series of 4KB pages
3) Replace the 4KB page

As the current implementation shatters while descending and installs
the mapping before continuing to the next level, Xen would need to issue 3
TLB invalidation instructions, which is clearly inefficient.

Furthermore, all the operations which modify the page table are using the
same skeleton. It is more complicated to maintain different code paths than
having a generic function that sets an entry and takes care of the
break-before-make sequence.

The new implementation is based on the x86 EPT one which, I think, fits
quite well for the break-before-make sequence whilst keeping the code
simple.

I sent this patch series as an RFC because there are still some TODOs
in the code (mostly sanity checks and possible optimizations) and I have
done limited testing. However, I think it is in good shape to start
reviewing, get more feedback and have wider testing on different boards.

Also, I need to figure out the impact on ARM32 because the domheap is not
always mapped.

This series has dependencies on some rework sent separately ([1] and [2]).
I have provided a branch with all the dependencies and this series applied:

git://xenbits.xen.org/people/julieng/xen-unstable.git branch p2m-rfc

Comments are welcome.

Yours sincerely,

Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
Cc: Shanker Donthineni 
Cc: Dirk Behme 
Cc: Edgar E. Iglesias 

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02936.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02830.html

Julien Grall (22):
  xen/arm: do_trap_instr_abort_guest: Move the IPA computation out of
the switch
  xen/arm: p2m: Store in p2m_domain whether we need to clean the entry
  xen/arm: p2m: Rename parameter in p2m_{remove,write}_pte...
  xen/arm: p2m: Use typesafe gfn in p2m_mem_access_radix_set
  xen/arm: traps: Move MMIO emulation code in a separate helper
  xen/arm: traps: Check the P2M before injecting a data/instruction
abort
  xen/arm: p2m: Rework p2m_put_l3_page
  xen/arm: p2m: Invalidate the TLBs when write unlocking the p2m
  xen/arm: p2m: Change the type of level_shifts from paddr_t to unsigned
int
  xen/arm: p2m: Move the lookup helpers at the top of the file
  xen/arm: p2m: Introduce p2m_get_root_pointer and use it in
__p2m_lookup
  xen/arm: p2m: Introduce p2m_get_entry and use it to implement
__p2m_lookup
  xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry
  xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry
  xen/arm: p2m: Re-implement relinquish_p2m_mapping using p2m_get_entry
  xen/arm: p2m: Make p2m_{valid,table,mapping} helpers inline
  xen/arm: p2m: Introduce a helper to check if an entry is a superpage
  xen/arm: p2m: Introduce p2m_set_entry and __p2m_set_entry
  xen/arm: p2m: Re-implement p2m_remove_mapping using p2m_set_entry
  xen/arm: p2m: Re-implement p2m_insert_mapping using p2m_set_entry
  xen/arm: p2m: Re-implement p2m_set_mem_access using
p2m_{set,get}_entry
  xen/arm: p2m: Do not handle shattering in p2m_create_table

 xen/arch/arm/domain.c  |8 +-
 xen/arch/arm/p2m.c | 1274 ++--
 xen/arch/arm/traps.c   |  126 +++--
 xen/include/asm-arm/p2m.h  |   14 +
 xen/include/asm-arm/page.h |4 +
 5 files changed, 742 insertions(+), 684 deletions(-)

-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 05/22] xen/arm: traps: Move MMIO emulation code in a separate helper

2016-07-28 Thread Julien Grall
Currently, a stage-2 translation fault is most likely an access to an
emulated region. All the checks are pre-sanity checks for MMIO emulation.

A follow-up patch will handle a new case that could lead to a stage-2
translation fault. To improve the clarity of the code and the changes, the
current implementation is moved into a separate helper.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c | 58 ++--
 1 file changed, 33 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 46e0663..b46284c 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2444,6 +2444,38 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 inject_iabt_exception(regs, gva, hsr.len);
 }
 
+static bool_t try_handle_mmio(struct cpu_user_regs *regs,
+  mmio_info_t *info)
+{
+const struct hsr_dabt dabt = info->dabt;
+int rc;
+
+/* stage-1 page table should never live in an emulated MMIO region */
+if ( dabt.s1ptw )
+return 0;
+
+/* All the instructions used on emulated MMIO region should be valid */
+if ( !dabt.valid )
+return 0;
+
+/*
+ * Erratum 766422: Thumb store translation fault to Hypervisor may
+ * not have correct HSR Rt value.
+ */
+if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) &&
+ dabt.write )
+{
+rc = decode_instruction(regs, &info->dabt);
+if ( rc )
+{
+gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
+return 0;
+}
+}
+
+return !!handle_mmio(info);
+}
+
 static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
  const union hsr hsr)
 {
@@ -2487,40 +2519,16 @@ static void do_trap_data_abort_guest(struct 
cpu_user_regs *regs,
 break;
 }
 case FSC_FLT_TRANS:
-if ( dabt.s1ptw )
-goto bad_data_abort;
-
-/* XXX: Decode the instruction if ISS is not valid */
-if ( !dabt.valid )
-goto bad_data_abort;
-
-/*
- * Erratum 766422: Thumb store translation fault to Hypervisor may
- * not have correct HSR Rt value.
- */
-if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) &&
- dabt.write )
-{
-rc = decode_instruction(regs, &info.dabt);
-if ( rc )
-{
-gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
-goto bad_data_abort;
-}
-}
-
-if ( handle_mmio(&info) )
+if ( try_handle_mmio(regs, &info) )
 {
 advance_pc(regs, hsr);
 return;
 }
-break;
 default:
 gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
 hsr.bits, dabt.dfsc);
 }
 
-bad_data_abort:
 gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
  " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, info.gva, info.gpa);
 inject_dabt_exception(regs, info.gva, hsr.len);
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 04/22] xen/arm: p2m: Use typesafe gfn in p2m_mem_access_radix_set

2016-07-28 Thread Julien Grall
p2m_mem_access_radix_set expects a GFN as a parameter. Rename the
parameter 'pfn' to 'gfn' to match its content and use the typesafe gfn
to avoid possible misuse.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ff82f12..aecdd1e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -542,7 +542,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
 return 0;
 }
 
-static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
+static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
 p2m_access_t a)
 {
 int rc;
@@ -552,18 +552,18 @@ static int p2m_mem_access_radix_set(struct p2m_domain 
*p2m, unsigned long pfn,
 
 if ( p2m_access_rwx == a )
 {
-radix_tree_delete(&p2m->mem_access_settings, pfn);
+radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn));
 return 0;
 }
 
-rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+rc = radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn),
radix_tree_int_to_ptr(a));
 if ( rc == -EEXIST )
 {
 /* If a setting already exists, change it to the new one */
 radix_tree_replace_slot(
 radix_tree_lookup_slot(
-&p2m->mem_access_settings, pfn),
+&p2m->mem_access_settings, gfn_x(gfn)),
 radix_tree_int_to_ptr(a));
 rc = 0;
 }
@@ -707,7 +707,7 @@ static int apply_one_level(struct domain *d,
 */
  (level == 3 || (!p2m_table(orig_pte) && 
!p2m->mem_access_enabled)) )
 {
-rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+rc = p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)), a);
 if ( rc < 0 )
 return rc;
 
@@ -825,7 +825,8 @@ static int apply_one_level(struct domain *d,
 *flush = true;
 
 p2m_remove_pte(entry, p2m->clean_pte);
-p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), p2m_access_rwx);
+p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)),
+ p2m_access_rwx);
 
 *addr += level_size;
 *maddr += level_size;
@@ -896,7 +897,8 @@ static int apply_one_level(struct domain *d,
 
 if ( p2m_valid(pte) )
 {
-rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+rc = p2m_mem_access_radix_set(p2m, _gfn(paddr_to_pfn(*addr)),
+  a);
 if ( rc < 0 )
 return rc;
 
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 12/22] xen/arm: p2m: Introduce p2m_get_entry and use it to implement __p2m_lookup

2016-07-28 Thread Julien Grall
Currently, for a given GFN, the function __p2m_lookup will only return
the associated MFN and the p2m type of the mapping.

In some cases we need the order of the mapping and the memaccess
permission. Rather than providing separate functions for this purpose,
it is better to implement a generic function that returns all the
information.

To avoid passing dummy parameters, a caller that does not need a specific
piece of information can pass NULL instead.
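
As an aside, the "optional output parameter" idiom this describes looks
like the following stand-alone sketch (names and values are invented and
not taken from the patch):

#include <stdio.h>

/*
 * A lookup returning several pieces of information; callers pass NULL
 * for the ones they do not need.
 */
static int lookup(unsigned long gfn, unsigned long *mfn,
                  unsigned int *type, unsigned int *order)
{
    if ( mfn )
        *mfn = gfn + 0x100000;  /* dummy translation */
    if ( type )
        *type = 1;              /* dummy "ram" type */
    if ( order )
        *order = 9;             /* dummy 2MB mapping order */
    return 0;
}

int main(void)
{
    unsigned long mfn;

    /* Only interested in the MFN, so no dummy locals for type or order. */
    lookup(0x1234, &mfn, NULL, NULL);
    printf("mfn %#lx\n", mfn);
    return 0;
}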

The list of information retrieved is based on the x86 version. All
of it will be used in follow-up patches.

It might have been possible to extend __p2m_lookup; however, I chose to
reimplement it from scratch to allow sharing some helpers with the
function that will update the P2M (to be added in a follow-up patch).

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 188 ++---
 xen/include/asm-arm/page.h |   4 +
 2 files changed, 149 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d4a4b62..8676b9d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -36,6 +36,8 @@ static const paddr_t level_masks[] =
 { ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
 static const unsigned int level_shifts[] =
 { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
+static const unsigned int level_orders[] =
+{ ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
 static bool_t p2m_valid(lpae_t pte)
 {
@@ -236,28 +238,99 @@ static lpae_t *p2m_get_root_pointer(struct p2m_domain 
*p2m,
 
 /*
  * Lookup the MFN corresponding to a domain's GFN.
+ * Lookup mem access in the radix tree.
+ * The entry associated with the GFN is considered valid.
+ */
+static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t gfn)
+{
+void *ptr;
+
+if ( !p2m->mem_access_enabled )
+return p2m_access_rwx;
+
+ptr = radix_tree_lookup(>mem_access_settings, gfn_x(gfn));
+if ( !ptr )
+return p2m_access_rwx;
+else
+return radix_tree_ptr_to_int(ptr);
+}
+
+#define GUEST_TABLE_MAP_FAILED 0
+#define GUEST_TABLE_SUPER_PAGE 1
+#define GUEST_TABLE_NORMAL_PAGE 2
+
+static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
+int level_shift);
+
+/*
+ * Take the currently mapped table, find the corresponding GFN entry,
+ * and map the next table, if available.
  *
- * There are no processor functions to do a stage 2 only lookup therefore we
- * do a a software walk.
+ * Return values:
+ *  GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry
+ *  was empty, or allocating a new page failed.
+ *  GUEST_TABLE_NORMAL_PAGE: next level mapped normally
+ *  GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage.
  */
-static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
+  lpae_t **table, unsigned int offset)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
-const paddr_t paddr = pfn_to_paddr(gfn_x(gfn));
-const unsigned int offsets[4] = {
-zeroeth_table_offset(paddr),
-first_table_offset(paddr),
-second_table_offset(paddr),
-third_table_offset(paddr)
-};
-const paddr_t masks[4] = {
-ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK
-};
-lpae_t pte, *map;
+lpae_t *entry;
+int ret;
+mfn_t mfn;
+
+entry = *table + offset;
+
+if ( !p2m_valid(*entry) )
+{
+if ( read_only )
+return GUEST_TABLE_MAP_FAILED;
+
+ret = p2m_create_table(p2m, entry, /* not used */ ~0);
+if ( ret )
+return GUEST_TABLE_MAP_FAILED;
+}
+
+/* The function p2m_next_level is never called at the 3rd level */
+if ( p2m_mapping(*entry) )
+return GUEST_TABLE_SUPER_PAGE;
+
+mfn = _mfn(entry->p2m.base);
+
+unmap_domain_page(*table);
+*table = map_domain_page(mfn);
+
+return GUEST_TABLE_NORMAL_PAGE;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN will be returned and the
+ * access and type filled up. The page_order will correspond to the
+ * order of the mapping in the page table (i.e it could be a superpage).
+ *
+ * If the entry is not present, INVALID_MFN will be returned and the
+ * page_order will be set according to the order of the invalid range.
+ */
+static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+   p2m_type_t *t, p2m_access_t *a,
+   unsigned int *page_order)
+{
+paddr_t addr = pfn_to_paddr(gfn_x(gfn));
+unsigned int level = 0;
+lpae_t entry, *table;
+int rc;
 mfn_t mfn = INVALID_MFN;
-paddr_t mask = 0;
 p2m_type_t _t;
-unsigned int level;
+
+/* Convenience aliases */
+const unsigned int offsets[4] = {
+zeroeth_table_offset(addr),
+first_table_offset(addr),
+

[Xen-devel] [RFC 06/22] xen/arm: traps: Check the P2M before injecting a data/instruction abort

2016-07-28 Thread Julien Grall
A data/instruction abort may have occurred if another CPU was playing
with the stage-2 page table while following the break-before-make
sequence (see D4.7.1 in ARM DDI 0487A.j). Rather than directly injecting
the fault into the guest, we need to check whether the mapping exists. If
it exists, return to the guest to replay the instruction.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c | 40 ++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b46284c..da56cc0 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2404,6 +2404,7 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 register_t gva = READ_SYSREG(FAR_EL2);
 uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
 paddr_t gpa;
+mfn_t mfn;
 
 if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
 gpa = get_faulting_ipa(gva);
@@ -2417,6 +2418,11 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
  */
 flush_tlb_local();
 
+/*
+ * We may not be able to translate because someone is
+ * playing with the Stage-2 page table of the domain.
+ * Return to the guest.
+ */
 rc = gva_to_ipa(gva, , GV2M_READ);
 if ( rc == -EFAULT )
 return; /* Try again */
@@ -2437,8 +2443,17 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 /* Trap was triggered by mem_access, work here is done */
 if ( !rc )
 return;
+break;
 }
-break;
+case FSC_FLT_TRANS:
+/*
+ * The PT walk may have failed because someone was playing
+ * with the Stage-2 page table. Walk the Stage-2 PT to check
+ * if the entry exists. If it's the case, return to the guest
+ */
+mfn = p2m_lookup(current->domain, _gfn(paddr_to_pfn(gpa)), NULL);
+if ( !mfn_eq(mfn, INVALID_MFN) )
+return;
 }
 
 inject_iabt_exception(regs, gva, hsr.len);
@@ -2455,7 +2470,7 @@ static bool_t try_handle_mmio(struct cpu_user_regs *regs,
 return 0;
 
 /* All the instructions used on emulated MMIO region should be valid */
-if ( !dabt.valid )
+if ( !info->dabt.valid )
 return 0;
 
 /*
@@ -2483,6 +2498,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs 
*regs,
 int rc;
 mmio_info_t info;
 uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
+mfn_t mfn;
 
 info.dabt = dabt;
 #ifdef CONFIG_ARM_32
@@ -2496,6 +2512,11 @@ static void do_trap_data_abort_guest(struct 
cpu_user_regs *regs,
 else
 {
 rc = gva_to_ipa(info.gva, , GV2M_READ);
+/*
+ * We may not be able to translate because someone is
+ * playing with the Stage-2 page table of the domain.
+ * Return to the guest.
+ */
 if ( rc == -EFAULT )
 return; /* Try again */
 }
@@ -2519,11 +2540,26 @@ static void do_trap_data_abort_guest(struct 
cpu_user_regs *regs,
 break;
 }
 case FSC_FLT_TRANS:
+/*
+ * Attempt first to emulate the MMIO, as the data abort will
+ * likely be on an emulated region.
+ */
 if ( try_handle_mmio(regs, ) )
 {
 advance_pc(regs, hsr);
 return;
 }
+
+/*
+ * The PT walk may have failed because someone was playing
+ * with the Stage-2 page table. Walk the Stage-2 PT to check
+ * if the entry exists. If it's the case, return to the guest
+ */
+mfn = p2m_lookup(current->domain, _gfn(paddr_to_pfn(info.gpa)), NULL);
+if ( !mfn_eq(mfn, INVALID_MFN) )
+return;
+
+break;
 default:
 gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
 hsr.bits, dabt.dfsc);
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 02/22] xen/arm: p2m: Store in p2m_domain whether we need to clean the entry

2016-07-28 Thread Julien Grall
Each entry in the page table has to be cleaned when the IOMMU does not
support coherent walks. Rather than checking this every time the page
table is updated, it is possible to do the check only once, when the p2m
is initialized.

This is because the value can never change; Xen would be in big trouble
otherwise.

With this change, the IOMMU initialization for a given domain has to be
done earlier in order to know whether the page table entries need to be
cleaned. It is fine to move the call earlier because it has no
dependency.
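
The resulting initialization then boils down to a single check at
p2m_init time. A minimal sketch, assuming the IOMMU code exposes its
coherent-walk capability through iommu_has_feature() (the exact hunk may
differ):

    /* In p2m_init(), after iommu_domain_init(d) has run: */
    p2m->clean_pte = iommu_enabled &&
                     !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);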

Signed-off-by: Julien Grall 
---
 xen/arch/arm/domain.c |  8 +---
 xen/arch/arm/p2m.c| 47 ++-
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 30 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 20bb2ba..48f04c8 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -555,6 +555,11 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 return 0;
 
 ASSERT(config != NULL);
+
+/* p2m_init relies on some value initialized by the IOMMU subsystem */
+if ( (rc = iommu_domain_init(d)) != 0 )
+goto fail;
+
 if ( (rc = p2m_init(d)) != 0 )
 goto fail;
 
@@ -637,9 +642,6 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 if ( is_hardware_domain(d) && (rc = domain_vuart_init(d)) )
 goto fail;
 
-if ( (rc = iommu_domain_init(d)) != 0 )
-goto fail;
-
 return 0;
 
 fail:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 40a0b80..d389f2b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -416,7 +416,7 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t 
flush_cache)
  * level_shift is the number of bits at the level we want to create.
  */
 static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
-int level_shift, bool_t flush_cache)
+int level_shift)
 {
 struct page_info *page;
 lpae_t *p;
@@ -462,7 +462,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t 
*entry,
 else
 clear_page(p);
 
-if ( flush_cache )
+if ( p2m->clean_pte )
 clean_dcache_va_range(p, PAGE_SIZE);
 
 unmap_domain_page(p);
@@ -470,7 +470,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t 
*entry,
 pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
p2m->default_access);
 
-p2m_write_pte(entry, pte, flush_cache);
+p2m_write_pte(entry, pte, p2m->clean_pte);
 
 return 0;
 }
@@ -653,12 +653,10 @@ static const paddr_t level_shifts[] =
 
 static int p2m_shatter_page(struct p2m_domain *p2m,
 lpae_t *entry,
-unsigned int level,
-bool_t flush_cache)
+unsigned int level)
 {
 const paddr_t level_shift = level_shifts[level];
-int rc = p2m_create_table(p2m, entry,
-  level_shift - PAGE_SHIFT, flush_cache);
+int rc = p2m_create_table(p2m, entry, level_shift - PAGE_SHIFT);
 
 if ( !rc )
 {
@@ -680,7 +678,6 @@ static int p2m_shatter_page(struct p2m_domain *p2m,
 static int apply_one_level(struct domain *d,
lpae_t *entry,
unsigned int level,
-   bool_t flush_cache,
enum p2m_operation op,
paddr_t start_gpaddr,
paddr_t end_gpaddr,
@@ -719,7 +716,7 @@ static int apply_one_level(struct domain *d,
 if ( level < 3 )
 pte.p2m.table = 0; /* Superpage entry */
 
-p2m_write_pte(entry, pte, flush_cache);
+p2m_write_pte(entry, pte, p2m->clean_pte);
 
 *flush |= p2m_valid(orig_pte);
 
@@ -754,7 +751,7 @@ static int apply_one_level(struct domain *d,
 /* Not present -> create table entry and descend */
 if ( !p2m_valid(orig_pte) )
 {
-rc = p2m_create_table(p2m, entry, 0, flush_cache);
+rc = p2m_create_table(p2m, entry, 0);
 if ( rc < 0 )
 return rc;
 return P2M_ONE_DESCEND;
@@ -764,7 +761,7 @@ static int apply_one_level(struct domain *d,
 if ( p2m_mapping(orig_pte) )
 {
 *flush = true;
-rc = p2m_shatter_page(p2m, entry, level, flush_cache);
+rc = p2m_shatter_page(p2m, entry, level);
 if ( rc < 0 )
 return rc;
 } /* else: an existing table mapping -> descend */
@@ -801,7 +798,7 @@ static int apply_one_level(struct domain *d,
  * and descend.
  */
 *flush = true;
-rc = p2m_shatter_page(p2m, entry, level, flush_cache);
+rc = 

[Xen-devel] [RFC 09/22] xen/arm: p2m: Change the type of level_shifts from paddr_t to unsigned int

2016-07-28 Thread Julien Grall
The level shift fits in 32 bits, so it is not necessary to use paddr_t
(i.e. 64-bit) for it.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a6dce0c..798faa8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -675,7 +675,7 @@ static const paddr_t level_sizes[] =
 { ZEROETH_SIZE, FIRST_SIZE, SECOND_SIZE, THIRD_SIZE };
 static const paddr_t level_masks[] =
 { ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
-static const paddr_t level_shifts[] =
+static const unsigned int level_shifts[] =
 { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
 static int p2m_shatter_page(struct p2m_domain *p2m,
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 03/22] xen/arm: p2m: Rename parameter in p2m_{remove, write}_pte...

2016-07-28 Thread Julien Grall
to make the usage clear, i.e. the parameter indicates whether Xen needs
to clean the entry after writing to the page table.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d389f2b..ff82f12 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -390,19 +390,19 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, 
p2m_access_t a)
 return e;
 }
 
-static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
+static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte)
 {
 write_pte(p, pte);
-if ( flush_cache )
+if ( clean_pte )
 clean_dcache(*p);
 }
 
-static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
+static inline void p2m_remove_pte(lpae_t *p, bool clean_pte)
 {
 lpae_t pte;
 
 memset(, 0x00, sizeof(pte));
-p2m_write_pte(p, pte, flush_cache);
+p2m_write_pte(p, pte, clean_pte);
 }
 
 /*
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 01/22] xen/arm: do_trap_instr_abort_guest: Move the IPA computation out of the switch

2016-07-28 Thread Julien Grall
A follow-up patch will add more cases to the switch that require the
IPA, so move the computation out of the switch.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 683bcb2..46e0663 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2403,35 +2403,35 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 int rc;
 register_t gva = READ_SYSREG(FAR_EL2);
 uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
+paddr_t gpa;
+
+if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
+gpa = get_faulting_ipa(gva);
+else
+{
+/*
+ * Flush the TLB to make sure the DTLB is clear before
+ * doing GVA->IPA translation. If we got here because of
+ * an entry only present in the ITLB, this translation may
+ * still be inaccurate.
+ */
+flush_tlb_local();
+
+rc = gva_to_ipa(gva, , GV2M_READ);
+if ( rc == -EFAULT )
+return; /* Try again */
+}
 
 switch ( fsc )
 {
 case FSC_FLT_PERM:
 {
-paddr_t gpa;
 const struct npfec npfec = {
 .insn_fetch = 1,
 .gla_valid = 1,
 .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
 };
 
-if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
-gpa = get_faulting_ipa(gva);
-else
-{
-/*
- * Flush the TLB to make sure the DTLB is clear before
- * doing GVA->IPA translation. If we got here because of
- * an entry only present in the ITLB, this translation may
- * still be inaccurate.
- */
-flush_tlb_local();
-
-rc = gva_to_ipa(gva, , GV2M_READ);
-if ( rc == -EFAULT )
-return; /* Try again */
-}
-
 rc = p2m_mem_access_check(gpa, gva, npfec);
 
 /* Trap was triggered by mem_access, work here is done */
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC 10/22] xen/arm: p2m: Move the lookup helpers at the top of the file

2016-07-28 Thread Julien Grall
These helpers will later be used by functions that are defined earlier
in the file.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 798faa8..ea582c8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -29,6 +29,14 @@ static unsigned int __read_mostly p2m_root_level;
 
 unsigned int __read_mostly p2m_ipa_bits;
 
+/* Helpers to lookup the properties of each level */
+static const paddr_t level_sizes[] =
+{ ZEROETH_SIZE, FIRST_SIZE, SECOND_SIZE, THIRD_SIZE };
+static const paddr_t level_masks[] =
+{ ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
+static const unsigned int level_shifts[] =
+{ ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
+
 static bool_t p2m_valid(lpae_t pte)
 {
 return pte.p2m.valid;
@@ -670,14 +678,6 @@ static bool_t is_mapping_aligned(const paddr_t 
start_gpaddr,
 #define P2M_ONE_PROGRESS_NOP   0x1
 #define P2M_ONE_PROGRESS   0x10
 
-/* Helpers to lookup the properties of each level */
-static const paddr_t level_sizes[] =
-{ ZEROETH_SIZE, FIRST_SIZE, SECOND_SIZE, THIRD_SIZE };
-static const paddr_t level_masks[] =
-{ ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
-static const unsigned int level_shifts[] =
-{ ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
-
 static int p2m_shatter_page(struct p2m_domain *p2m,
 lpae_t *entry,
 unsigned int level)
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable-smoke test] 99750: tolerable all pass - PUSHED

2016-07-28 Thread osstest service owner
flight 99750 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/99750/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  b29f4c1e37c78874048a34700a967973bb31fbf9
baseline version:
 xen  d5438accceecc8172db2d37d98b695eb8bc43afc

Last test of basis99707  2016-07-26 10:01:43 Z2 days
Failing since 99722  2016-07-27 18:01:50 Z0 days7 attempts
Testing same since99750  2016-07-28 12:20:23 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Juergen Gross 
  Julien Grall 
  Shanker Donthineni 
  Stefano Stabellini 
  Tamas K Lengyel 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=b29f4c1e37c78874048a34700a967973bb31fbf9
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
b29f4c1e37c78874048a34700a967973bb31fbf9
+ branch=xen-unstable-smoke
+ revision=b29f4c1e37c78874048a34700a967973bb31fbf9
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.7-testing
+ '[' xb29f4c1e37c78874048a34700a967973bb31fbf9 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local 

Re: [Xen-devel] [PATCH v4] xen/arm: Add a clock property

2016-07-28 Thread Dirk Behme

On 28.07.2016 13:17, Julien Grall wrote:

Hi Dirk,

On 27/07/16 06:05, Dirk Behme wrote:

Hi Michael, Stefano and Julien,

On 22.07.2016 03:16, Stefano Stabellini wrote:

On Thu, 21 Jul 2016, Michael Turquette wrote:

Quoting Stefano Stabellini (2016-07-14 03:38:04)

On Thu, 14 Jul 2016, Dirk Behme wrote:

On 13.07.2016 23:03, Michael Turquette wrote:

Quoting Dirk Behme (2016-07-13 11:56:30)

On 13.07.2016 20:43, Stefano Stabellini wrote:

On Wed, 13 Jul 2016, Dirk Behme wrote:

On 13.07.2016 00:26, Michael Turquette wrote:

Quoting Dirk Behme (2016-07-12 00:46:45)

Clocks described by this property are reserved for use by
Xen, and
the OS
must not alter their state any way, such as disabling or
gating a
clock,
or modifying its rate. Ensuring this may impose
constraints on
parent
clocks or other resources used by the clock tree.


Note that clk_prepare_enable will not prevent the rate from
changing
(clk_set_rate) or a parent from changing (clk_set_parent). The
only
way
to do this currently would be to set the following flags on
the
effected
clocks:

CLK_SET_RATE_GATE
CLK_SET_PARENT_GATE




Regarding setting flags, I think we already talked about that.
I think
the
conclusion was that in our case its not possible to
manipulate the
flags in
the OS as this isn't intended to be done in cases like ours.
Therefore
no API
is exported for this.

I.e. if we need to set these flags, we have to do that in Xen
where we
add the
clocks to the hypervisor node in the device tree. And not in
the
kernel patch
discussed here.


These are internal Linux flags, aren't they?



I've been under the impression that you can set clock "flags" via
the
device tree. Seems I need to re-check that ;)


Right, you cannot set flags from the device tree. Also, setting
these
flags is done by the clock provider driver, not a consumer. Xen is
the
consumer.



Ok, thanks, then I think we can forget about using flags for the
issue we are
discussing here.

Best regards

Dirk

P.S.: Would it be an option to merge the v4 patch we are discussing
here,
then? From the discussion until here, it sounds to me that it's the
best
option we have at the moment. Maybe improving it in the future,
then.


It might be a step in the right direction, but it doesn't really
prevent
clk_set_rate from changing properties of a clock owned by Xen.  This
patch is incomplete. We need to understand at least what it would
take
to have a complete solution.

Michael, do you have any suggestions on how it would be possible
to set
CLK_SET_RATE_GATE and CLK_SET_PARENT_GATE for those clocks in a
proper
way?


No, there is no way for a consumer to do that. The provider must
do it.


All right. But could we design a new device tree binding which the Xen
hypervisor would use to politely ask the clock provider in Linux to
set
CLK_SET_RATE_GATE and CLK_SET_PARENT_GATE for a given clock?

Xen would have to modify the DTB before booting Linux with the new
binding.



Like you wrote, I would imagine it needs to be done by the clock
provider driver. Maybe to do that, it would be easier to have a new
device tree property on the clock node, rather than listing
phandle and
clock-specifier pairs under the Xen node?


Upon further reflection, I think that your clock consumer can
probably
use clk_set_rate_range() to "lock" in a rate. This is good because
it is
exactly what a clock consumer should do:

1) get the clk
2) enable the clk
3) set the required rate for the clock
4) set rate range constraints, or conversely,
5) lock in an exact rate; set the min/max rate to the same value

The problem with this solution is that it requires the consumer to
have
knowledge of the rates that it wants for that clock, which I guess is
something that Linux kernels in a Xen setup do not want/need?
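
As a concrete illustration of the sequence above, a consumer-side sketch
using the generic Linux clk API could look like the following (the
device, the "uart" clock name and the 48 MHz rate are made up for the
example rather than taken from a real driver):

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/err.h>

    static int claim_and_lock_clock(struct device *dev)
    {
        struct clk *clk;
        int ret;

        clk = devm_clk_get(dev, "uart");       /* 1) get the clk */
        if (IS_ERR(clk))
            return PTR_ERR(clk);

        ret = clk_prepare_enable(clk);         /* 2) enable the clk */
        if (ret)
            return ret;

        ret = clk_set_rate(clk, 48000000);     /* 3) set the required rate */
        if (ret)
            return ret;

        /* 4)/5) lock the rate by constraining min == max == rate */
        return clk_set_rate_range(clk, 48000000, 48000000);
    }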


Who is usually the component with knowledge of the clock rate to
set? If
it's a device driver, then neither the Xen hypervisor, nor the Xen
core
drivers in Linux would know anything about it. (Unless the clock
rate is
specified on device tree via assigned-clock-rates of course.)



Is it correct that you would prefer some sort of
never_touch_this_clk()
api?



From my understading, yes, never_touch_this_clk() would make things
easier.



Would it be somehow worth to wait for anything like this
never_touch_this_clk() api? Or should we try to proceed with
clk_prepare_enable() like done in this patch for the moment?


I am not sure who will write the new api never_touch_this_clk(). Could
you suggest an implementation based on the discussion?



As this was a proposal from Michael, I'm hoping for Michael here, 
somehow ;) At least for a hint whether anything like never_touch_this_clk() 
would realistically be accepted, and if so, what it could look like.


If this is unrealistic, I think we should go with the proposed 
clk_prepare_enable() approach, as it seems to be the best we can do at 
the moment without never_touch_this_clk().
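
For what it's worth, the API being asked for would presumably be little
more than a pair of consumer-side calls along these lines (purely
hypothetical, nothing like this exists in the clk framework today):

    /* Hypothetical consumer API: forbid any rate, parent or gate
     * change on this clock until it is released again. */
    int clk_protect(struct clk *clk);
    void clk_unprotect(struct clk *clk);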


Best regards

Dirk

___
Xen-devel 

Re: [Xen-devel] Xen 4.7.0 boot PANIC on kernel 4.7.0-4 + UEFI ?

2016-07-28 Thread lists


On Thu, Jul 28, 2016, at 07:09 AM, Vitaly Kuznetsov wrote:
> While I see that you're running linux-4.7 could you please double-check
> that it has the following:
> 
> commit 55f1ea15216a5a14c96738bd5284100a00ffa9dc
> Author: Vitaly Kuznetsov 
> Date:   Tue May 31 11:23:43 2016 +0100
> 
> efi: Fix for_each_efi_memory_desc_in_map() for empty memmaps

Checking here

  rpm -q --changelog kernel-default | egrep -i 
"55f1ea15|for_each_efi_memory_desc_in_map|kuznets|memmaps"

returns nothing.  Doesn't look like it's in there.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 05/15] xen/arm: p2m: Remove unnecessary locking

2016-07-28 Thread Julien Grall
The p2m is not yet in use when p2m_init and p2m_alloc_table are called,
and it is no longer in use when p2m_teardown is called. So taking the
p2m lock is not necessary.

Signed-off-by: Julien Grall 
Reviewed-by: Stefano Stabellini 

---
Changes in v2:
- Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c | 14 +-
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 08f3f17..bcccaa4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1266,8 +1266,6 @@ int p2m_alloc_table(struct domain *d)
 if ( page == NULL )
 return -ENOMEM;
 
-spin_lock(>lock);
-
 /* Clear both first level pages */
 for ( i = 0; i < P2M_ROOT_PAGES; i++ )
 clear_and_clean_page(page + i);
@@ -1283,8 +1281,6 @@ int p2m_alloc_table(struct domain *d)
  */
 flush_tlb_domain(d);
 
-spin_unlock(>lock);
-
 return 0;
 }
 
@@ -1349,8 +1345,6 @@ void p2m_teardown(struct domain *d)
 struct p2m_domain *p2m = >arch.p2m;
 struct page_info *pg;
 
-spin_lock(>lock);
-
 while ( (pg = page_list_remove_head(>pages)) )
 free_domheap_page(pg);
 
@@ -1362,8 +1356,6 @@ void p2m_teardown(struct domain *d)
 p2m_free_vmid(d);
 
 radix_tree_destroy(>mem_access_settings, NULL);
-
-spin_unlock(>lock);
 }
 
 int p2m_init(struct domain *d)
@@ -1374,12 +1366,11 @@ int p2m_init(struct domain *d)
 spin_lock_init(>lock);
 INIT_PAGE_LIST_HEAD(>pages);
 
-spin_lock(>lock);
 p2m->vmid = INVALID_VMID;
 
 rc = p2m_alloc_vmid(d);
 if ( rc != 0 )
-goto err;
+return rc;
 
 d->arch.vttbr = 0;
 
@@ -1392,9 +1383,6 @@ int p2m_init(struct domain *d)
 p2m->mem_access_enabled = false;
 radix_tree_init(>mem_access_settings);
 
-err:
-spin_unlock(>lock);
-
 return rc;
 }
 
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 03/15] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO

2016-07-28 Thread Julien Grall
Currently, the p2m type p2m_mmio_direct is used to map both cacheable
MMIO (via map_regions_rw_cache) and non-cacheable MMIO (via
map_mmio_regions) in stage-2. The p2m code relies on the caller to
provide the correct memory attribute.

In a follow-up patch, the p2m code will rely on the p2m type to find the
correct memory attribute. In preparation for this, introduce
p2m_mmio_direct_nc and p2m_mmio_direct_c to differentiate the
cacheability of the MMIO.
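
As an illustration of what the follow-up change amounts to, the memory
attribute could be derived from the type with a small helper along these
lines (names are indicative only, this is not part of the patch):

    static unsigned int p2m_mattr(p2m_type_t t)
    {
        switch ( t )
        {
        case p2m_mmio_direct_nc:
            return MATTR_DEV;   /* Device memory, non-cacheable */
        case p2m_mmio_direct_c:
            return MATTR_MEM;   /* Normal, cacheable memory */
        default:
            return MATTR_MEM;
        }
    }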

Signed-off-by: Julien Grall 
Reviewed-by: Stefano Stabellini 

---
Changes in v2:
- Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c| 7 ---
 xen/include/asm-arm/p2m.h | 3 ++-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 851b110..cffb12e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -272,7 +272,8 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, 
p2m_access_t a)
 case p2m_iommu_map_rw:
 case p2m_map_foreign:
 case p2m_grant_map_rw:
-case p2m_mmio_direct:
+case p2m_mmio_direct_nc:
+case p2m_mmio_direct_c:
 e->p2m.xn = 1;
 e->p2m.write = 1;
 break;
@@ -1194,7 +1195,7 @@ int map_regions_rw_cache(struct domain *d,
  mfn_t mfn)
 {
 return p2m_insert_mapping(d, gfn, nr, mfn,
-  MATTR_MEM, p2m_mmio_direct);
+  MATTR_MEM, p2m_mmio_direct_c);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1211,7 +1212,7 @@ int map_mmio_regions(struct domain *d,
  mfn_t mfn)
 {
 return p2m_insert_mapping(d, start_gfn, nr, mfn,
-  MATTR_DEV, p2m_mmio_direct);
+  MATTR_DEV, p2m_mmio_direct_nc);
 }
 
 int unmap_mmio_regions(struct domain *d,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 78d37ab..20a220ea 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -87,7 +87,8 @@ typedef enum {
 p2m_invalid = 0,/* Nothing mapped here */
 p2m_ram_rw, /* Normal read/write guest RAM */
 p2m_ram_ro, /* Read-only; writes are silently dropped */
-p2m_mmio_direct,/* Read/write mapping of genuine MMIO area */
+p2m_mmio_direct_nc, /* Read/write mapping of genuine MMIO area 
non-cacheable */
+p2m_mmio_direct_c,  /* Read/write mapping of genuine MMIO area cacheable */
 p2m_map_foreign,/* Ram pages from foreign domain */
 p2m_grant_map_rw,   /* Read/write grant mapping */
 p2m_grant_map_ro,   /* Read-only grant mapping */
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 13/15] xen/arm: Don't export flush_tlb_domain

2016-07-28 Thread Julien Grall
The function flush_tlb_domain is not used outside of the file where it
is defined.

Signed-off-by: Julien Grall 
Reviewed-by: Stefano Stabellini 

---
Changes in v2:
- Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c | 2 +-
 xen/include/asm-arm/flushtlb.h | 3 ---
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a9767c..bda9b97 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -137,7 +137,7 @@ void p2m_restore_state(struct vcpu *n)
 isb();
 }
 
-void flush_tlb_domain(struct domain *d)
+static void flush_tlb_domain(struct domain *d)
 {
 struct p2m_domain *p2m = >arch.p2m;
 unsigned long flags = 0;
diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
index c986b3f..329fbb4 100644
--- a/xen/include/asm-arm/flushtlb.h
+++ b/xen/include/asm-arm/flushtlb.h
@@ -25,9 +25,6 @@ do {  
  \
 /* Flush specified CPUs' TLBs */
 void flush_tlb_mask(const cpumask_t *mask);
 
-/* Flush CPU's TLBs for the specified domain */
-void flush_tlb_domain(struct domain *d);
-
 #endif /* __ASM_ARM_FLUSHTLB_H__ */
 /*
  * Local variables:
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 14/15] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb

2016-07-28 Thread Julien Grall
The function that flushes the TLBs for a given p2m does not need to
know about the domain, so pass the p2m directly as a parameter.

At the same time, rename the function to p2m_flush_tlb to match the
parameter change.

Signed-off-by: Julien Grall 
Reviewed-by: Stefano Stabellini 

---
Changes in v2:
- Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index bda9b97..97a3a2b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -137,9 +137,8 @@ void p2m_restore_state(struct vcpu *n)
 isb();
 }
 
-static void flush_tlb_domain(struct domain *d)
+static void p2m_flush_tlb(struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = >arch.p2m;
 unsigned long flags = 0;
 uint64_t ovttbr;
 
@@ -1157,7 +1156,7 @@ static int apply_p2m_changes(struct domain *d,
 out:
 if ( flush )
 {
-flush_tlb_domain(d);
+p2m_flush_tlb(>arch.p2m);
 ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
 if ( !rc )
 rc = ret;
@@ -1302,7 +1301,7 @@ static int p2m_alloc_table(struct domain *d)
  * Make sure that all TLBs corresponding to the new VMID are flushed
  * before using it
  */
-flush_tlb_domain(d);
+p2m_flush_tlb(p2m);
 
 return 0;
 }
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

