[Xen-devel] [qemu-upstream-4.8-testing test] 105695: tolerable FAIL - PUSHED

2017-02-10 Thread osstest service owner
flight 105695 qemu-upstream-4.8-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105695/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2  3 host-install(3) broken in 105678 pass in 105695
 test-amd64-i386-freebsd10-i386 3 host-install(3) broken in 105678 pass in 105695
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken in 105678 pass in 105695
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken in 105678 pass in 105695
 test-armhf-armhf-libvirt-raw 9 debian-di-install fail in 105678 pass in 105695
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat  fail pass in 105678

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64   5 xen-build fail   never pass
 build-arm64-xsm   5 xen-build fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail   never pass

version targeted for testing:
 qemuu                46e1db013347a3356ac05b83c0243313d74d2193
baseline version:
 qemuu                4220231eb22235e757d269722b9f6a594fbcb70f

Last test of basis   102941  2016-12-05 12:51:08 Z   67 days
Testing same since   105678  2017-02-09 23:14:16 Z    1 days    2 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm

[Xen-devel] [qemu-upstream-unstable baseline-only test] 68546: regressions - trouble: blocked/broken/fail/pass

2017-02-10 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 68546 qemu-upstream-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/68546/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvh-amd   6 xen-boot  fail REGR. vs. 68472
 test-amd64-amd64-pygrub   6 xen-boot  fail REGR. vs. 68472

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-check fail   like 68472
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail   like 68472
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail   like 68472
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 68472
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 68472
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail like 68472

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64-xsm   2 hosts-allocate   broken never pass
 build-arm64   2 hosts-allocate   broken never pass
 build-arm64-pvops 2 hosts-allocate   broken never pass
 build-arm64-xsm   3 capture-logs broken never pass
 build-arm64   3 capture-logs broken never pass
 build-arm64-pvops 3 capture-logs broken never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 qemuu                728e90b41d46c1c1c210ac496204efd51936db75
baseline version:
 qemuu                5cd2e1739763915e6b4c247eef71f948dc808bd5

Last test of basis    68472  2017-01-25 03:25:27 Z   17 days
Testing same since    68546  2017-02-10 21:44:12 Z    0 days    1 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  broken  
 build-armhf-xsm  

[Xen-devel] [ovmf test] 105696: all pass - PUSHED

2017-02-10 Thread osstest service owner
flight 105696 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105696/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 35a461cb502877670062560e1edd231aeb35f738
baseline version:
 ovmf 8d127a5a3a23d960644d1bd78891ae7d55b66544

Last test of basis   105679  2017-02-10 02:15:38 Z    1 days
Testing same since   105696  2017-02-10 12:12:30 Z    0 days    1 attempts


People who touched revisions under test:
  Liming Gao 
  Ruiyu Ni 
  Star Zeng 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=35a461cb502877670062560e1edd231aeb35f738
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 35a461cb502877670062560e1edd231aeb35f738
+ branch=ovmf
+ revision=35a461cb502877670062560e1edd231aeb35f738
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.8-testing
+ '[' x35a461cb502877670062560e1edd231aeb35f738 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : 

[Xen-devel] [xen-4.6-testing baseline-only test] 68545: regressions - FAIL

2017-02-10 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 68545 xen-4.6-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/68545/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm   5 xen-build fail REGR. vs. 68469

Regressions which are regarded as allowable (not blocking):
 test-xtf-amd64-amd64-1   20 xtf/test-hvm32-invlpg~shadow fail   like 68469
 test-xtf-amd64-amd64-1  32 xtf/test-hvm32pae-invlpg~shadow fail like 68469
 test-xtf-amd64-amd64-1   43 xtf/test-hvm64-invlpg~shadow fail   like 68469
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail   like 68469
 test-armhf-armhf-libvirt 13 saverestore-support-check fail   like 68469
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail   like 68469
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 68469
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 68469
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 68469
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail like 68469
 test-amd64-amd64-xl-qemut-winxpsp3  9 windows-install  fail like 68469

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-5   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-3   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-4   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-1   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-2   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass

version targeted for testing:
 xen  576f319a804bce8c9a7fb70a042f873f5eaf0151
baseline version:
 xen  09f521a077024d5955d766eef7a040d2af928ec2

Last test of basis    68469  2017-01-25 03:22:22 Z   17 days
Testing same since    68545  2017-02-10 20:13:19 Z    0 days    1 attempts


People who touched 

[Xen-devel] [qemu-upstream-4.6-testing test] 105693: tolerable FAIL - PUSHED

2017-02-10 Thread osstest service owner
flight 105693 qemu-upstream-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105693/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail  like 102708
 test-armhf-armhf-libvirt 13 saverestore-support-check fail  like 102708
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 102708
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 102708
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail  like 102708

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass

version targeted for testing:
 qemuu                15c0f1500fc078b6411d2c86842cb2f3fd7393c0
baseline version:
 qemuu                ba9175c5bde6796851d3b9d888ee488fd0257d05

Last test of basis   102708  2016-11-29 06:57:36 Z   73 days
Testing same since   105677  2017-02-09 23:14:01 Z    1 days    2 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm

[Xen-devel] [qemu-mainline test] 105697: regressions - FAIL

2017-02-10 Thread osstest service owner
flight 105697 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105697/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm    5 xen-build fail REGR. vs. 105279
 build-amd64   5 xen-build fail REGR. vs. 105279
 build-amd64-xsm   5 xen-build fail REGR. vs. 105279
 build-i386    5 xen-build fail REGR. vs. 105279
 build-armhf   5 xen-build fail REGR. vs. 105279
 build-armhf-xsm   5 xen-build fail REGR. vs. 105279

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 

[Xen-devel] [linux-3.10 test] 105694: tolerable FAIL - PUSHED

2017-02-10 Thread osstest service owner
flight 105694 linux-3.10 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105694/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 102077
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 102077
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 102077

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                ec55e7c2bf49a426b6f8204505bd267c77554d37
baseline version:
 linux                7828a9658951301a3fd83daa4ed0a607d370399e

Last test of basis   102077  2016-11-09 21:18:49 Z   93 days
Testing same since   105694  2017-02-10 11:14:58 Z    0 days    1 attempts


People who touched revisions under test:
  Al Viro 
  Alan Stern 
  Alex Deucher 
  Alex Vesker 
  Alex Williamson 
  Alexander Usyskin 
  Alexey Khoroshilov 
  Alexey Klimov 
  Amitkumar Karwar 
  Andrew Bresticker 
  Andrew Morton 
  Andrey Grodzovsky 
  Andrey Konovalov 
  Andrey Ryabinin 
  Andy Lutomirski 
  Anna Schumaker 
  Anoob Soman 
  Anton Blanchard 
  Ard Biesheuvel 
  Arend van Spriel 
  Arend van Spriel 
  Arnaldo Carvalho de Melo 
  Arnd Bergmann 
  Artem Bityutskiy 
  Ashish Samant 
  Balbir Singh 
  Baoquan He 
  Bart Van Assche 
  Ben Hutchings 
  Benjamin Herrenschmidt 
  Bjorn Helgaas 
  Boris Brezillon 
  Borislav Petkov 
  Brian King 
  Brian Norris 
  Brian Norris 
  Bruno Wolff III 
  Catalin Marinas 
  Ching Huang 
  Chris Mason 
  Chris Metcalf 
  Christian König 
  Christoph Lameter 
  Christoph Lechleitner 
  Chuck Lever 
  Cong Wang 
  Cyrille Pitchen 
  Daeho Jeong 
  Dan Carpenter 
  Daniel Glöckner 
  Daniel Jurgens 
  Daniel Mentz 
  Daniel Vetter 
  Daniel Vetter 
  Darrick J. Wong 
  Dave Airlie 
  Dave Chinner 
  Dave Chinner 
  Dave Gerlach 
  David Howells 
  David S. Miller 
  David Vrabel 
  Denys Vlasenko 
  Ding Tianhong 
  Dmitry Torokhov 
  Dmitry Vyukov 
  Doug Ledford 
  Douglas Caetano dos Santos 
  Eli Cooper 
  Emmanouil Maroudas 
  Emrah Demir 
  Enric Balletbo Serra 
  Enrico Mioso 
  Erez Shitrit 
  Eric Dumazet 
  Ewan D. Milne 
  Fabio Estevam 
  Felipe Balbi 
  Felipe Balbi 
  Felix Fietkau 

[Xen-devel] [PATCH v4 2/2] arm: proper ordering for correct execution of gic_update_one_lr and vgic_store_itargetsr

2017-02-10 Thread Stefano Stabellini
Concurrent execution of gic_update_one_lr and vgic_store_itargetsr can
result in the wrong pcpu being set as irq target, see
http://marc.info/?l=xen-devel&m=148218667104072.

To solve the issue, add barriers and reorder the operations: remove an
irq from the inflight queue only after its affinity has been set. On the
other end, write the new vcpu target before checking
GIC_IRQ_GUEST_MIGRATING and inflight.

Signed-off-by: Stefano Stabellini 
---
 xen/arch/arm/gic.c | 3 ++-
 xen/arch/arm/vgic-v2.c | 4 ++--
 xen/arch/arm/vgic-v3.c | 4 +++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a5348f2..bb52959 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -503,12 +503,13 @@ static void gic_update_one_lr(struct vcpu *v, int i)
  !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
 gic_raise_guest_irq(v, irq, p->priority);
 else {
-list_del_init(&p->inflight);
 if ( test_and_clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
 {
 struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
 irq_set_affinity(p->desc, cpumask_of(v_target->processor));
 }
+smp_mb();
+list_del_init(&p->inflight);
 }
 }
 }
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index b30379e..f47286e 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -153,6 +153,8 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
 new_target--;
 
 old_target = read_atomic(&rank->vcpu[offset]);
+write_atomic(&rank->vcpu[offset], new_target);
+smp_mb();
 
 /* Only migrate the vIRQ if the target vCPU has changed */
 if ( new_target != old_target )
@@ -161,8 +163,6 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
  d->vcpu[new_target],
  virq);
 }
-
-write_atomic(&rank->vcpu[offset], new_target);
 }
 }
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 7dc9b6f..e82 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -150,11 +150,13 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
 if ( !new_vcpu )
 return;
 
+write_atomic(&rank->vcpu[offset], new_vcpu->vcpu_id);
+smp_mb();
+
 /* Only migrate the IRQ if the target vCPU has changed */
 if ( new_vcpu != old_vcpu )
 vgic_migrate_irq(old_vcpu, new_vcpu, virq);
 
-write_atomic(&rank->vcpu[offset], new_vcpu->vcpu_id);
 }
 
 static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v4 1/2] arm: read/write rank->vcpu atomically

2017-02-10 Thread Stefano Stabellini
We don't need a lock in vgic_get_target_vcpu anymore, solving the
following lock inversion bug: the rank lock should be taken first, then
the vgic lock. However, gic_update_one_lr is called with the vgic lock
held, and it calls vgic_get_target_vcpu, which tries to obtain the rank
lock.

Coverity-ID: 1381855
Coverity-ID: 1381853

Signed-off-by: Stefano Stabellini 
---
 xen/arch/arm/vgic-v2.c |  6 +++---
 xen/arch/arm/vgic-v3.c |  6 +++---
 xen/arch/arm/vgic.c| 27 +--
 3 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 3dbcfe8..b30379e 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -79,7 +79,7 @@ static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
 offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
 
 for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
-reg |= (1 << rank->vcpu[offset]) << (i * NR_BITS_PER_TARGET);
+reg |= (1 << read_atomic(&rank->vcpu[offset])) << (i * NR_BITS_PER_TARGET);
 
 return reg;
 }
@@ -152,7 +152,7 @@ static void vgic_store_itargetsr(struct domain *d, struct 
vgic_irq_rank *rank,
 /* The vCPU ID always starts from 0 */
 new_target--;
 
-old_target = rank->vcpu[offset];
+old_target = read_atomic(&rank->vcpu[offset]);
 
 /* Only migrate the vIRQ if the target vCPU has changed */
 if ( new_target != old_target )
@@ -162,7 +162,7 @@ static void vgic_store_itargetsr(struct domain *d, struct 
vgic_irq_rank *rank,
  virq);
 }
 
-rank->vcpu[offset] = new_target;
+write_atomic(&rank->vcpu[offset], new_target);
 }
 }
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d61479d..7dc9b6f 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -108,7 +108,7 @@ static uint64_t vgic_fetch_irouter(struct vgic_irq_rank 
*rank,
 /* Get the index in the rank */
 offset &= INTERRUPT_RANK_MASK;
 
-return vcpuid_to_vaffinity(rank->vcpu[offset]);
+return vcpuid_to_vaffinity(read_atomic(&rank->vcpu[offset]));
 }
 
 /*
@@ -136,7 +136,7 @@ static void vgic_store_irouter(struct domain *d, struct 
vgic_irq_rank *rank,
 offset &= virq & INTERRUPT_RANK_MASK;
 
 new_vcpu = vgic_v3_irouter_to_vcpu(d, irouter);
-old_vcpu = d->vcpu[rank->vcpu[offset]];
+old_vcpu = d->vcpu[read_atomic(&rank->vcpu[offset])];
 
 /*
  * From the spec (see 8.9.13 in IHI 0069A), any write with an
@@ -154,7 +154,7 @@ static void vgic_store_irouter(struct domain *d, struct 
vgic_irq_rank *rank,
 if ( new_vcpu != old_vcpu )
 vgic_migrate_irq(old_vcpu, new_vcpu, virq);
 
-rank->vcpu[offset] = new_vcpu->vcpu_id;
+write_atomic(&rank->vcpu[offset], new_vcpu->vcpu_id);
 }
 
 static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 364d5f0..3dd9044 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -85,7 +85,7 @@ static void vgic_rank_init(struct vgic_irq_rank *rank, 
uint8_t index,
 rank->index = index;
 
 for ( i = 0; i < NR_INTERRUPT_PER_RANK; i++ )
-rank->vcpu[i] = vcpu;
+write_atomic(&rank->vcpu[i], vcpu);
 }
 
 int domain_vgic_register(struct domain *d, int *mmio_count)
@@ -218,28 +218,11 @@ int vcpu_vgic_free(struct vcpu *v)
 return 0;
 }
 
-/* The function should be called by rank lock taken. */
-static struct vcpu *__vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
-{
-struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
-
-ASSERT(spin_is_locked(&rank->lock));
-
-return v->domain->vcpu[rank->vcpu[virq & INTERRUPT_RANK_MASK]];
-}
-
-/* takes the rank lock */
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 {
-struct vcpu *v_target;
 struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
-unsigned long flags;
-
-vgic_lock_rank(v, rank, flags);
-v_target = __vgic_get_target_vcpu(v, virq);
-vgic_unlock_rank(v, rank, flags);
-
-return v_target;
+int target = read_atomic(&rank->vcpu[virq & INTERRUPT_RANK_MASK]);
+return v->domain->vcpu[target];
 }
 
 static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
@@ -326,7 +309,7 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 
 while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
 irq = i + (32 * n);
-v_target = __vgic_get_target_vcpu(v, irq);
+v_target = vgic_get_target_vcpu(v, irq);
 p = irq_to_pending(v_target, irq);
 clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
 gic_remove_from_queues(v_target, irq);
@@ -368,7 +351,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 
 while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
 irq = i + (32 * n);
-v_target = __vgic_get_target_vcpu(v, irq);
+v_target = vgic_get_target_vcpu(v, irq);
 p = irq_to_pending(v_target, irq);
 

[Xen-devel] [qemu-upstream-4.7-testing test] 105691: trouble: blocked/broken/fail/pass

2017-02-10 Thread osstest service owner
flight 105691 qemu-upstream-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105691/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm3 host-install(3)broken REGR. vs. 102709

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 102709
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 102709
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 102709
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 102709

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64-xsm   5 xen-buildfail   never pass
 build-arm64   5 xen-buildfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu7eaaf4ba68fab40f1945d761438bdaa44fbf37d7
baseline version:
 qemuue27a2f17bc2d9d7f8afce2c5918f4f23937b268e

Last test of basis   102709  2016-11-29 07:53:18 Z   73 days
Testing same since   105676  2017-02-09 23:13:26 Z1 days2 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   broken  
 build-amd64   

Re: [Xen-devel] [PATCH v3] xen/arm: fix rank/vgic lock inversion bug

2017-02-10 Thread Stefano Stabellini
On Wed, 8 Feb 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 02/02/17 22:56, Stefano Stabellini wrote:
> > On Thu, 2 Feb 2017, Julien Grall wrote:
> > > On 01/02/17 23:23, Stefano Stabellini wrote:
> > > > On Wed, 1 Feb 2017, Julien Grall wrote:
> > > > > On 31/01/2017 23:49, Stefano Stabellini wrote:
> > > > > > On Fri, 27 Jan 2017, Julien Grall wrote:
> > > > > > > On 03/01/17 23:29, Stefano Stabellini wrote:
> > > > > For LPIs, there is no active state. So as soon as they are EOIed,
> > > > > they
> > > > > might
> > > > > come up again. Depending on how we will handle irq migration, your
> > > > > scenario
> > > > > will become true. I am not sure if we should take into account LPIs
> > > > > right
> > > > > now.
> > > > > 
> > > > > To be honest, I don't much like the idea of kicking the other vCPU.
> > > > > But I
> > > > > don't have a better idea in order to clear the LRs.
> > 
> > What if we skip the interrupt if it's an LPI, and we kick the other vcpu
> > and wait if it's an SPI? Software should be more tolerant of lost
> > interrupts in case of LPIs. We are also considering rate-limiting them
> > anyway, which implies the possibility of skipping some LPIs at times.
> 
> I will skip the answer here, as your suggestion to solve the inversion lock
> sounds better.

OK


> > > > Me neither, that's why I was proposing a different solution instead. We
> > > > still have the option to take the right lock in vgic_migrate_irq:
> > > > 
> > > > http://marc.info/?l=xen-devel=148237289620471
> > > > 
> > > > The code is more complex, but I think it's safe in all cases.
> > > 
> > > It is not only complex but also really confusing, as we would have a
> > > variable protected by two locks, where the two locks do not need to be
> > > taken at the same time.
> > 
> > Yes, but there is a large in-code comment about it :-)
> > 
> > 
> > > I may have an idea to avoid completely the lock in vgic_get_target_vcpu.
> > > The
> > > lock is only here to read the target vcpu in the rank, the rest does not
> > > need
> > > a lock, right? So could not we read the target vcpu atomically instead?
> > 
> > Yes, I think that could solve the lock inversion bug:
> > 
> > - remove the spin_lock in vgic_get_target_vcpu and replace it with an atomic
> > read
> > - replace rank->vcpu writes with atomic writes
> > 
> > However, it would not solve the other issue affecting the current code:
> > http://marc.info/?l=xen-devel=148218667104072, which is related to the
> > problem you mentioned about irq_set_affinity and list_del_init in
> > gic_update_one_lr not being separated by a barrier. Arguably, that bug
> > could be solved separately.  It would be easier to solve that bug by one
> > of these approaches:
> > 
> > 1) use the same (vgic) lock in gic_update_one_lr and vgic_migrate_irq
> > 2) remove the irq_set_affinity call from gic_update_one_lr
> > 
> > where 1) is the approach taken by v2 of this series and 2) is the
> > approach taken by this patch.
> 
> I think there is an easier solution. You want to make sure that any writes
> (particularly list_del_init) before routing the IRQ are visible to the other
> processors. So a simple barrier on both sides should be enough here.

You are right
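The paired-barrier idea Julien describes can be sketched with C11 fences. All names below are illustrative stand-ins for the real Xen state (the list entry plays the role of what list_del_init manipulates); this shows only the ordering argument, not the actual code:

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Sketch of the paired barriers: the migrating CPU publishes its list
 * manipulation (list_del_init in the real code) before routing the IRQ;
 * the observing CPU pairs a barrier after seeing the route change.
 */
static int inflight_entry = 1;          /* stands in for the list state */
static _Atomic bool irq_rerouted;

static void migrate_side(void)
{
    inflight_entry = 0;                         /* list_del_init() analogue */
    atomic_thread_fence(memory_order_seq_cst);  /* smp_mb(): writes first */
    atomic_store_explicit(&irq_rerouted, true, memory_order_relaxed);
}

static bool observer_side(void)
{
    if ( !atomic_load_explicit(&irq_rerouted, memory_order_relaxed) )
        return false;
    atomic_thread_fence(memory_order_seq_cst);  /* smp_mb(): pairs with above */
    return inflight_entry == 0;                 /* now guaranteed visible */
}
```

The two fences pair up: once the observer sees the routing change, the earlier list write is guaranteed visible, which is all the scenario in this thread requires.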



[Xen-devel] [linux-linus test] 105687: regressions - FAIL

2017-02-10 Thread osstest service owner
flight 105687 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105687/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-multivcpu 15 guest-localmigrate   fail REGR. vs. 59254
 test-amd64-amd64-xl-xsm  14 guest-saverestore fail REGR. vs. 59254
 test-amd64-amd64-xl-credit2  14 guest-saverestore fail REGR. vs. 59254
 test-amd64-i386-xl   14 guest-saverestore fail REGR. vs. 59254
 test-amd64-amd64-xl  17 guest-localmigrate/x10fail REGR. vs. 59254
 test-armhf-armhf-libvirt  6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-xl   6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-xl-credit2   6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-libvirt-xsm  6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-xl-arndale   6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail REGR. vs. 59254
 test-armhf-armhf-xl-xsm   6 xen-boot  fail REGR. vs. 59254

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 13 guest-localmigrate fail REGR. vs. 
59254
 test-armhf-armhf-xl-rtds  6 xen-boot  fail REGR. vs. 59254
 test-amd64-amd64-xl-rtds  9 debian-installfail REGR. vs. 59254
 test-amd64-i386-libvirt-pair 21 guest-migrate/src_host/dst_host fail baseline 
untested
 test-armhf-armhf-libvirt-raw  6 xen-bootfail baseline untested
 test-armhf-armhf-xl-vhd   6 xen-bootfail baseline untested
 test-amd64-i386-libvirt-xsm  14 guest-saverestorefail blocked in 59254
 test-amd64-amd64-libvirt-xsm 14 guest-saverestorefail blocked in 59254
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 59254
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 59254
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 59254

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64   5 xen-buildfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestorefail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 build-arm64-xsm   5 xen-buildfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass

version targeted for testing:
 linux3d88460dbd285e7f32437b530d5bb7cb916142fa
baseline version:
 linux45820c294fe1b1a9df495d57f40585ef2d069a39

Last test of basis59254  2015-07-09 04:20:48 Z  582 days
Failing since 59348  2015-07-10 04:24:05 Z  581 days  263 attempts
Testing same since   105687  2017-02-10 07:30:28 Z0 days1 attempts


7572 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  fail
 build-armhf 

[Xen-devel] [ARM] SMC (and HVC) handling in hypervisor

2017-02-10 Thread Volodymyr Babchuk
Hello,

This e-mail is a sort of follow-up to two threads: [1] (my thread
about TEE interaction) and [2] (Edgar's thread regarding handling SMC
calls in platform_hvc). I want to discuss a broader topic here.

Obviously, there is a growing number of SMC users, and the current state
of SMC handling in Xen satisfies nobody. My team wants to handle SMCs in
a secure way, Xilinx wants to forward some calls directly to the Secure
Monitor while allowing others to be handled in userspace, and so on.

My proposal is to gather all requirements for SMC (and HVC) handling
in one place (e.g. in this mail thread). Once we have a clear
picture of what we want, we will be able to develop a solution
that satisfies us all. At least, I hope so :)

I also want to point out that there is an ARM document called "SMC Calling
Convention" [3]. According to it, any aarch64 hypervisor "must
implement the Standard Secure and Hypervisor Service calls". At the
moment Xen does not conform to this.

So, lets get started with the requirements:
0. There is not much difference between SMC and HVC handling (at least
according to the SMCCC).
1. The hypervisor should at least provide its own UUID and version when
called via SMC/HVC.
2. The hypervisor should forward some calls from dom0 directly to the Secure
Monitor (Xilinx use case).
3. The hypervisor should virtualize PSCI calls, CPU service calls, ARM
architecture service calls, etc.
4. The hypervisor should handle TEE calls in a secure way (e.g. no
untrusted handlers in Dom0 userspace).
5. The hypervisor should support multiple TEEs (at least at compile time).
6. The hypervisor should do all this as fast as possible (DRM playback use case).
7. All domains (including dom0) should be handled in the same way.
8. Not all domains will have the right to issue certain SMCs.
9. The hypervisor will issue its own SMCs in some cases.

These are high-level requirements. Feel free to expand this list.
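As a concrete anchor for the virtualize-vs-forward decision, the SMCCC [3] encodes the owning entity of a call in bits [29:24] of the function ID; a hypervisor dispatcher would switch on that field to decide, e.g., to virtualize PSCI (Standard Secure range) but forward SiP calls. A minimal decode sketch follows — the enum values come from the SMCCC, while the helper names are mine:

```c
#include <stdint.h>

/* Owning-entity numbers from the SMC Calling Convention (DEN0028). */
enum smccc_owner {
    SMCCC_OWNER_ARCH     = 0,
    SMCCC_OWNER_CPU      = 1,
    SMCCC_OWNER_SIP      = 2,
    SMCCC_OWNER_OEM      = 3,
    SMCCC_OWNER_STANDARD = 4,   /* PSCI lives here */
    SMCCC_OWNER_HYP      = 5,   /* Standard Hypervisor Service */
};

/* Bits [29:24] of the function ID name the owning entity. */
static unsigned int smccc_owner_of(uint32_t fid)
{
    return (fid >> 24) & 0x3fu;
}

/* Bit [31] distinguishes fast calls from yielding calls. */
static int smccc_is_fast(uint32_t fid)
{
    return (int)((fid >> 31) & 1u);
}
```

For example, PSCI_VERSION (0x84000000) decodes to the Standard Secure owner, so a conforming hypervisor would handle it itself rather than forward it.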

The current SMC handling code does not even handle PSCI calls; only the
HVC trap handler has a branch to handle them. SMCs are forwarded to the
VM monitor subsystem. There is not even an advance_pc() call, so the monitor
needs to advance the PC by itself. Also, dom0 can't have a monitor, so there
is no way to handle SMCs that originate from dom0. So, basically, the
current code does not meet any requirement from the above list. This
means that we can start from scratch and develop any solution.

But at this moment I only want to gather requirements, so feel free to
point out what I have missed.

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-11/msg02220.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg00635.html
[3] 
http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf
-- 
WBR Volodymyr Babchuk aka lorc [+380976646013]
mailto: vlad.babc...@gmail.com



Re: [Xen-devel] Xen Security Advisory 208 (CVE-2017-2615) - oob access in cirrus bitblt copy

2017-02-10 Thread Michael Young

On Fri, 10 Feb 2017, Xen.org security team wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

   Xen Security Advisory CVE-2017-2615 / XSA-208

  oob access in cirrus bitblt copy


The qemu-xen-traditional patch is malformed, as the file it tries to patch 
is at the xen-qemu location and the before and after line counts are 
wrong, so


--- a/hw/display/cirrus_vga.c
+++ b/hw/display/cirrus_vga.c
@@ -307,11 +307,9 @@ static bool blit_region_is_unsafe(struct CirrusVGAState *s,

should be (if I have got the offset right)

--- a/hw/cirrus_vga.c
+++ b/hw/cirrus_vga.c
@@ -308,10 +308,9 @@ static bool blit_region_is_unsafe(struct CirrusVGAState *s,

Michael Young



[Xen-devel] [PATCH v1] Make demu.git compile under Xen 4.7 (and later)

2017-02-10 Thread Konrad Rzeszutek Wilk
Hey!

This patch lets me compile this emulator under Xen 4.7.

It probably can be done better (#ifdef magic?) but for right
now this gets me past the compile errors.

BTW, are there any other outstanding patches against this tree?


 demu.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Konrad Rzeszutek Wilk (1):
  Make it compile under Xen 4.7.




[Xen-devel] [PATCH] Make it compile under Xen 4.7.

2017-02-10 Thread Konrad Rzeszutek Wilk
With b7f76a699dcfadc0a52ab45b33cc72dbf3a69e7b
Author: Ian Campbell 
Date:   Mon Jun 1 16:20:09 2015 +0100

tools: Refactor /dev/xen/evtchn wrappers into libxenevtchn.

commit 32486916793fd78a41fc25e53d2b53a5aa0b1bd5
Author: Ian Campbell 
Date:   Thu Jun 18 16:30:19 2015 +0100

tools: Refactor foreign memory mapping into libxenforeignmemory

We need to use the compat layer.

Signed-off-by: Konrad Rzeszutek Wilk 

---
CC: paul.durr...@citrix.com

v1: First version
---
 demu.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/demu.c b/demu.c
index 2933efb..7d73a69 100644
--- a/demu.c
+++ b/demu.c
@@ -56,7 +56,12 @@
 
 #include 
 
+#define XC_WANT_COMPAT_MAP_FOREIGN_API 1
+#define XC_WANT_COMPAT_EVTCHN_API 1
+
 #include 
+#include 
+
 #include 
 
 #include "debug.h"
@@ -126,7 +131,7 @@ typedef enum {
 typedef struct demu_state {
 demu_seq_t  seq;
 xc_interface*xch;
-xc_interface*xceh;
+xc_evtchn  *xceh;
 domid_t domid;
 unsigned intvcpus;
 ioservid_t  ioservid;
-- 
2.9.3




Re: [Xen-devel] [PATCH v6 07/24] x86: refactor psr: implement get value flow.

2017-02-10 Thread Konrad Rzeszutek Wilk
On Wed, Feb 08, 2017 at 04:15:59PM +0800, Yi Sun wrote:
> This patch implements get value flow including L3 CAT callback
> function.
> 
> It also changes domctl interface to make it more general.
> 
> With this patch, 'psr-cat-show' can work for L3 CAT but not for
> L3 code/data which is implemented in patch "x86: refactor psr:
> implement get value flow for CDP.".
> 
> Signed-off-by: Yi Sun 

Nice thinking with:
> +if ( d )
> +{
> +cos = d->arch.psr_cos_ids[socket];
> +if ( feat->ops.get_val(feat, cos, type, val) )
> +return 0;
> +else
> +break;
> +}
> +

.. snip..
Reviewed-by: Konrad Rzeszutek Wilk 

And sorry for recommending the __ in the function name - I forgot that
the C standard reserves __ for compiler/etc. needs.



[Xen-devel] [qemu-upstream-unstable test] 105689: tolerable trouble: blocked/broken/fail/pass - PUSHED

2017-02-10 Thread osstest service owner
flight 105689 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105689/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu  3 host-install(3) broken pass in 105675
 test-armhf-armhf-xl-credit2  18 leak-check/check fail in 105675 pass in 105689
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail in 105675 pass 
in 105689

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 104067
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 104067
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 104067
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 104067

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail in 105675 never 
pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail in 105675 
never pass
 build-arm64   5 xen-buildfail   never pass
 build-arm64-xsm   5 xen-buildfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass

version targeted for testing:
 qemuu728e90b41d46c1c1c210ac496204efd51936db75
baseline version:
 qemuu5cd2e1739763915e6b4c247eef71f948dc808bd5

Last test of basis   104067  2017-01-06 23:12:11 Z   34 days
Testing same since   105675  2017-02-09 23:12:05 Z0 days2 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64 

Re: [Xen-devel] [PATCH v6 06/24] x86: refactor psr: implement get hw info flow.

2017-02-10 Thread Konrad Rzeszutek Wilk
On Wed, Feb 08, 2017 at 04:15:58PM +0800, Yi Sun wrote:
> This patch implements get HW info flow including L3 CAT callback
> function.
> 
> It also changes sysctl interface to make it more general.
> 
> With this patch, 'psr-hwinfo' can work for L3 CAT.
> 
> Signed-off-by: Yi Sun 

Reviewed-by: Konrad Rzeszutek Wilk 



Re: [Xen-devel] [DOC v8] PV Calls protocol design

2017-02-10 Thread Konrad Rzeszutek Wilk
On Fri, Feb 10, 2017 at 12:09:36PM -0800, Stefano Stabellini wrote:
> On Fri, 10 Feb 2017, Konrad Rzeszutek Wilk wrote:
> > .snip..
> > > > > Request fields:
> > > > > 
> > > > > - **cmd** value: 0
> > > > > - additional fields:
> > > > >   - **id**: identifies the socket
> > > > >   - **addr**: address to connect to, see [Socket families and address 
> > > > > format]
> > > > 
> > > > 
> > > > Hm, so what do we do if we want to support AF_UNIX which has an addr of
> > > > 108 bytes?
> > > 
> > > We write a protocol extension and bump the protocol version. However, we
> > 
> > Right. How would you change the protocol for this?
> > 
> > I'm not asking to have this in this protocol, but I just want us to think
> > about what we could do so that, if somebody were to implement this, we
> > could make it easier for them.
> > 
> > My initial thought was to spread the request over two "old" structures.
> > And if so .. would it make sense to include an extra flag or such?
> 
> That's a possibility, but I don't think we need an extra flag. It would
> be easier to introduce a new command, such as PVCALLS_CONNECT_EXTENDED
> or PVCALLS_CONNECT_V2, with the appropriate flags to say that it will
> make use of two request slots instead of one.

Fair enough. Perhaps include a section in the document about how one
could expand the protocol, and include this? That would make it easier
for folks to follow a 'paved' way.


> 
> 
> > > could make the addr array size larger now to be more future proof, but
> > > it takes up memory and I have no use for it, given that we can use
> > > loopback for the same purpose.
> > > 
> > 
> > ..snip..
> > > > >  Indexes Page Structure
> > > > > 
> > > > > typedef uint32_t PVCALLS_RING_IDX;
> > > > > 
> > > > > struct pvcalls_data_intf {
> > > > >   PVCALLS_RING_IDX in_cons, in_prod;
> > > > >   int32_t in_error;
> > > > 
> > > > You don't want to perhaps include in_event?
> > > > > 
> > > > >   uint8_t pad[52];
> > > > > 
> > > > >   PVCALLS_RING_IDX out_cons, out_prod;
> > > > >   int32_t out_error;
> > > > 
> > > > And out_event as way to do some form of interrupt mitigation
> > > > (similar to what you had proposed?)
> > > 
> > > Yes, the in_event / out_event optimization that I wrote for the 9pfs
> > > protocol could work here too. However, I thought you preferred to remove
> > > it for now as it is not required and increases complexity?
> > 
> > I did. But I am coming to it by looking at the ring.h header.
> > 
> > My recollection was that your optimization was a bit different than
> > what ring.h has.
> 
> Right. They are similar, but different because in this protocol we have
> two rings: the `in' ring and the `out' ring. Each ring is
> mono-directional and there is no static request size: the producer
> writes opaque data to the ring. In ring.h they are combined together and
> the request size is static and well-known. In PVCalls:
> 
> in -> backend to frontend only
> out-> frontend to backend only
> 
> Let's talk about the `in' ring, where the frontend is the consumer
> and the backend is the producer. Everything is the same, but mirrored, for
> the `out' ring.
> 
> The producer doesn't need any notifications unless the ring is full.
> The producer, the backend in this case, never reads from the `in' ring.
> Thus, I disabled notifications to the producer by default and added an
> in_event field for the producer to ask for notifications only when
> necessary, that is when the ring is full.
> 
> On the other end, the consumer always requires notifications, unless the
> consumer is already actively reading from the ring. The
> producer could figure it out without any additional fields in the
> protocol. It can simply compare the indexes at the beginning and at the
> end of the function, that's similar to what the ring protocol does.

I like your description! Could you include this in a section
titled 'Why ring.h macros are not needed.' please?
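The notification rule described for the `in' ring — the producer only needs a kick when it has found the ring full — reduces to unsigned index arithmetic. A sketch under the assumption of a power-of-two ring size (only in_cons/in_prod and the index type come from the draft; the size and helper names are mine):

```c
#include <stdint.h>

typedef uint32_t PVCALLS_RING_IDX;

#define IN_RING_SIZE 4096u   /* assumed power of two, not from the draft */

static PVCALLS_RING_IDX in_cons, in_prod;

/* Free space as seen by the producer (the backend, for the `in' ring);
 * unsigned subtraction makes this wrap-safe. */
static uint32_t in_ring_free(void)
{
    return IN_RING_SIZE - (in_prod - in_cons);
}

/* The producer asks for a consumer notification (the proposed in_event)
 * only when it cannot make progress, i.e. the ring cannot take `len'
 * more bytes; otherwise it keeps writing without any event. */
static int producer_needs_event(uint32_t len)
{
    return in_ring_free() < len;
}
```

Because the data is opaque and the ring mono-directional, this is all the producer-side bookkeeping needed — which is the crux of why the fixed-request-size ring.h macros do not fit here.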



Re: [Xen-devel] [PATCH v3] displif: add ABI for para-virtual display

2017-02-10 Thread Konrad Rzeszutek Wilk
On Fri, Feb 10, 2017 at 09:29:58AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko 
> 
> This is the ABI for the two halves of a para-virtualized
> display driver.
> 
> This protocol aims to provide a unified protocol which fits more
> sophisticated use-cases than a framebuffer device can handle. At the
> moment basic functionality is supported with the intention to extend:
>   o multiple dynamically allocated/destroyed framebuffers
>   o buffers of arbitrary sizes
>   o better configuration options including multiple display support
> 
> Note: existing fbif can be used together with displif running at the
> same time, e.g. on Linux one provides framebuffer and another DRM/KMS
> 
> Future extensions to the existing protocol may include:
>   o allow display/connector cloning
>   o allow allocating objects other than display buffers
>   o add planes/overlays support
>   o support scaling
>   o support rotation
> 
> ==
> Rationale for introducing this protocol instead of
> using the existing fbif:
> ==
> 
> 1. In/out event sizes
>   o fbif - 40 octets
>   o displif - 40 octets
> This is only the initial version of the displif protocol
> which means that there could be requests which will not fit
> (WRT introducing some GPU related functionality
> later on). In that case we cannot alter fbif sizes as we need to
> be backward compatible and will be forced to handle those
> apart from fbif.
> 
> 2. Shared page
> Displif doesn't use anything like struct xenfb_page, but
> DEFINE_RING_TYPES(xen_displif, struct xendispl_req, struct
> xendispl_resp) which is a better and more common way.
> Output events use a shared page which only has in_cons and in_prod
> and all the rest is used for incoming events. Here struct xenfb_page
> could probably be used as is, despite the fact that it only has half
> a page for incoming events, which is only 50 events (consider
> something like a 60Hz display).
> 
> 3. Amount of changes.
> fbif only provides XENFB_TYPE_UPDATE and XENFB_TYPE_RESIZE
> events, so it looks like it is easier to get fb support into displif

.. would it make sense to reserve some of those values (2, 3)
in the XENDISPL_OP_ values? So that if this happens there is a nice
fit in there? Though looking at the structure there is no easy
way to 'overlay' the xenfb_out_event structure as it is missing the 'id'.

I guess one can get creative.

Or you could swap positions of 'id' and 'type'? And then it would fit much
nicer?
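For illustration, reserving the values could look like the sketch below. The fbif values are the ones Konrad refers to (2 and 3); the XENDISPL_OP_ names and values are hypothetical, not taken from the posted patch.

```c
/* fbif out-event types (values 2 and 3, as referenced above). */
#define XENFB_TYPE_UPDATE 2
#define XENFB_TYPE_RESIZE 3

/* Hypothetical displif operation codes, leaving 2 and 3 unused so that
 * fbif-compatible requests could later slot in without renumbering. */
#define XENDISPL_OP_DBUF_CREATE  0x00
#define XENDISPL_OP_DBUF_DESTROY 0x01
/* 0x02, 0x03 reserved: would mirror XENFB_TYPE_UPDATE / XENFB_TYPE_RESIZE */
#define XENDISPL_OP_FB_ATTACH    0x04
```

The reservation only helps if the request layouts can also be made to line up, which is the `id`/`type` ordering question raised above.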

> than vice versa. displif at the moment has 6 requests and 1 event,
> multiple connector support, etc.
> 
> Changes since v2:
>  * updated XenStore configuration template/pattern
>  * added "Recovery flow" to state diagram description
>  * renamed gref_directory_start to gref_directory
>  * added missing "versions" and "version" string constants
> 
> Changes since v1:
>  * fixed xendispl_event_page padding size
>  * added versioning support
>  * explicitly define value types for XenStore fields
>  * text decoration re-work
>  * added offsets to ASCII box notation
> 
> Changes since initial:
>  * DRM changed to DISPL, protocol made generic
>  * major re-work addressing issues raised for sndif
> 
> Signed-off-by: Oleksandr Grytsov 
> Signed-off-by: Oleksandr Andrushchenko 
> ---
>  xen/include/public/io/displif.h | 778 
> 
>  1 file changed, 778 insertions(+)
>  create mode 100644 xen/include/public/io/displif.h
> 
> diff --git a/xen/include/public/io/displif.h b/xen/include/public/io/displif.h
> new file mode 100644
> index ..849f27fe5f1d
> --- /dev/null
> +++ b/xen/include/public/io/displif.h
> @@ -0,0 +1,778 @@
> +/**
> + * displif.h
> + *
> + * Unified display device I/O interface for Xen guest OSes.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a 
> copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, 
> and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 
> THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER 

Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document

2017-02-10 Thread Stefano Stabellini
On Fri, 10 Feb 2017, Paul Durrant wrote:
> > -Original Message-
> [snip]
> > > Neither NVIDIA vGPU nor Intel GVT-g are pass-through. They both use
> > emulation to synthesize GPU devices for guests and then use the actual GPU
> > to service the commands sent by the guest driver to the virtual GPU. So, I
> > think they fall outside the discussion here.
> > 
> > So in this case those devices would simply be assigned to Dom0, and
> > everything
> > would be trapped/emulated there? (by QEMU or whatever dm we are using)
> > 
> 
> Basically, yes. (Actually QEMU isn't the dm in either case).
> 
> > > AMD MxGPU is somewhat different in that it is an almost-SRIOV solution. I
> > say 'almost' because the VF's are not truly independent and so some
> > interception of accesses to certain registers is required, so that 
> > arbitration
> > can be applied, or they can be blocked. In this case a dedicated driver in
> > dom0 is required, and I believe it needs access to both the PF and all the 
> > VFs
> > to function correctly. However, once initial set-up is done, I think the VFs
> > could then be hidden from dom0. The PF is never passed-through and so
> > there should be no issue in leaving it visible to dom0.
> > 
> > The approach we were thinking of is hiding everything from Dom0 when it
> > boots, so that Dom0 would never really see those devices. This would be
> > done by Xen scanning the PCI bus and any ECAM areas. Devices that first
> > need to be assigned to Dom0 and then hidden were not part of the approach
> > here.
> 
> That won't work for MxGPU then.
> 
> > 
> > > There is a further complication with GVT-d (Intel's term for GPU pass-
> > through) also because I believe there is also some initial set-up required 
> > and
> > some supporting emulation (e.g. Intel's guest driver expects there to be an
> > ISA bridge along with the GPU) which may need access to the real GPU. It is
> > also possible that, once this set-up is done, the GPU can then be hidden 
> > from
> > dom0 but I'm not sure because I was not involved with that code.
> > 
> > And then I guess some MMIO regions are assigned to the guest, and some
> > dm
> > performs the trapping of the accesses to the configuration space?
> > 
> 
> Well, that's how passthrough to HVM guests works in general at the moment. My 
> point was that there's still some need to see the device in the tools domain 
> before it gets passed through.

I understand and I think it is OK. Pretty much like you wrote, these are
not passthrough scenarios, they are a sort of hardware supported
emulated/PV graphics (for lack of a better term), so it's natural for
these devices to be assigned to dom0 (or another backend domain).


> > > Full pass-through of NVIDIA and AMD GPUs does not involve access from
> > dom0 at all though, so I don't think there should be any complication there.
> > 
> > Yes, in that case they would be treated as regular PCI devices, no
> > involvement
> > from Dom0 would be needed. I'm more worried about this mixed cases,
> > where some
> > Dom0 interaction is needed in order to perform the passthrough.
> > 
> > > Does that all make sense?
> > 
> > I guess, could you please keep an eye on further design documents? Just to
> > make sure that what's described here would work for the more complex
> > passthrough scenarios that XenServer supports.
> 
> Ok, I will watch the list more closely for pass-through discussions, but 
> please keep me cc-ed on anything you think may be relevant.

Thank you, Paul



[Xen-devel] [xen-4.6-testing test] 105685: tolerable FAIL - PUSHED

2017-02-10 Thread osstest service owner
flight 105685 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105685/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair  20 guest-start/debian fail in 105673 pass in 105685
 test-armhf-armhf-xl-credit2 15 guest-start/debian.repeat fail in 105673 pass 
in 105685
 test-amd64-amd64-rumprun-amd64 16 rumprun-demo-xenstorels/xenstorels.repeat 
fail pass in 105673

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail in 105673 blocked 
in 104585
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 104585
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 104585
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 104585
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 104585
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 104585
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 104585
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 104585

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-4   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-2   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-1   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-xtf-amd64-amd64-3   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-xtf-amd64-amd64-5   62 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass

version targeted for testing:
 xen  576f319a804bce8c9a7fb70a042f873f5eaf0151
baseline version:
 xen  09f521a077024d5955d766eef7a040d2af928ec2

Last test of basis   104585  2017-01-22 08:19:51 Z   19 days
Testing same since   105664  2017-02-09 10:14:26 Z1 days3 attempts


People who touched revisions under test:
  George Dunlap 
  Jan Beulich 
  Joao Martins 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   

Re: [Xen-devel] [DOC v8] PV Calls protocol design

2017-02-10 Thread Stefano Stabellini
On Fri, 10 Feb 2017, Konrad Rzeszutek Wilk wrote:
> .snip..
> > > > Request fields:
> > > > 
> > > > - **cmd** value: 0
> > > > - additional fields:
> > > >   - **id**: identifies the socket
> > > >   - **addr**: address to connect to, see [Socket families and address 
> > > > format]
> > > 
> > > 
> > > Hm, so what do we do if we want to support AF_UNIX which has an addr of
> > > 108 bytes?
> > 
> > We write a protocol extension and bump the protocol version. However, we
> 
> Right. How would you change the protocol for this?
> 
I'm not asking to have this in this protocol, but I just want us to think
about what we could do so that if somebody was to implement this - how
could we make it easier for them?
> 
> My initial thought was to spread the request over two "old" structures.
> And if so .. would it make sense to include an extra flag or such?

That's a possibility, but I don't think we need an extra flag. It would
be easier to introduce a new command, such as PVCALLS_CONNECT_EXTENDED
or PVCALLS_CONNECT_V2, with the appropriate flags to say that it will
make use of two request slots instead of one.
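Such a two-slot command could be sketched as follows. Everything here is hypothetical — the spec defines neither the command value nor these sizes; the sketch only illustrates spreading one logical request (e.g. a 108-byte AF_UNIX address) over two fixed-size request slots.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical command value and per-slot payload size (made up). */
#define PVCALLS_CONNECT_V2 13
#define SLOT_PAYLOAD       56

struct slot {
    uint32_t cmd;
    uint8_t payload[SLOT_PAYLOAD];
};

/* Split an address of up to 2*SLOT_PAYLOAD bytes over two consecutive
 * request slots; the second slot acts as a continuation of the first. */
int connect_v2(struct slot s[2], const uint8_t *addr, size_t len)
{
    if (len > 2 * SLOT_PAYLOAD)
        return -1;
    s[0].cmd = PVCALLS_CONNECT_V2;
    memcpy(s[0].payload, addr, len < SLOT_PAYLOAD ? len : SLOT_PAYLOAD);
    if (len > SLOT_PAYLOAD) {
        s[1].cmd = PVCALLS_CONNECT_V2; /* continuation slot */
        memcpy(s[1].payload, addr + SLOT_PAYLOAD, len - SLOT_PAYLOAD);
    }
    return 0;
}
```

With SLOT_PAYLOAD at 56 bytes, two slots comfortably fit the 108-byte AF_UNIX address mentioned earlier in the thread.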


> > could make the addr array size larger now to be more future proof, but
> > it takes up memory and I have no use for it, given that we can use
> > loopback for the same purpose.
> > 
> 
> ..snip..
> > > >  Indexes Page Structure
> > > > 
> > > > typedef uint32_t PVCALLS_RING_IDX;
> > > > 
> > > > struct pvcalls_data_intf {
> > > > PVCALLS_RING_IDX in_cons, in_prod;
> > > > int32_t in_error;
> > > 
> > > You don't want to perhaps include in_event?
> > > > 
> > > > uint8_t pad[52];
> > > > 
> > > > PVCALLS_RING_IDX out_cons, out_prod;
> > > > int32_t out_error;
> > > 
> > > And out_event as way to do some form of interrupt mitigation
> > > (similar to what you had proposed?)
> > 
> > Yes, the in_event / out_event optimization that I wrote for the 9pfs
> > protocol could work here too. However, I thought you preferred to remove
> > it for now as it is not required and increases complexity?
> 
> I did. But I am coming to it by looking at the ring.h header.
> 
> My recollection was that your optimization was a bit different than
> what ring.h has.

Right. They are similar, but different because in this protocol we have
two rings: the `in' ring and the `out' ring. Each ring is
mono-directional and there is no static request size: the producer
writes opaque data to the ring. In ring.h they are combined together and
the request size is static and well-known. In PVCalls:

in -> backend to frontend only
out-> frontend to backend only

Let's talk about the `in' ring, where the frontend is the consumer
and the backend is the producer. Everything is the same but mirrored for
the `out' ring.

The producer doesn't need any notifications unless the ring is full.
The producer, the backend in this case, never reads from the `in' ring.
Thus, I disabled notifications to the producer by default and added an
in_event field for the producer to ask for notifications only when
necessary, that is when the ring is full.

On the other end, the consumer always requires notifications, unless the
consumer is already actively reading from the ring. The
producer could figure it out without any additional fields in the
protocol. It can simply compare the indexes at the beginning and at the
end of the function, that's similar to what the ring protocol does.
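The scheme above can be sketched in C. This is a minimal single-threaded illustration, not the PV Calls implementation: the struct is cut down to the `in' ring fields, the data copy and memory barriers are elided into comments, and `producer_write` is a made-up helper whose return value stands in for the notify/suppress decision.

```c
#include <stdint.h>

typedef uint32_t PVCALLS_RING_IDX;

struct intf {
    PVCALLS_RING_IDX in_cons, in_prod;
    uint32_t in_event; /* producer sets this to request a wake-up */
};

/*
 * Producer side of the `in' ring (the backend).  Returns 1 if the
 * consumer should be notified, 0 if the notification can be suppressed,
 * -1 if the ring is full.  Real code adds barriers and the data copy.
 */
int producer_write(struct intf *intf, uint32_t ring_size, uint32_t len)
{
    PVCALLS_RING_IDX cons_at_start = intf->in_cons;

    if (len > ring_size - (intf->in_prod - cons_at_start)) {
        intf->in_event = 1; /* ring full: ask consumer to notify us */
        return -1;
    }
    /* ... copy len bytes into the ring at in_prod, wmb() ... */
    intf->in_prod += len;

    /*
     * If in_cons moved while we were writing, the consumer is actively
     * draining the ring and will pick up the new data anyway, so the
     * event can be suppressed.
     */
    return intf->in_cons == cons_at_start;
}
```

Symmetrically, the consumer clears `in_event` and notifies the producer only when the flag was set, i.e. only when the producer is stuck on a full ring.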

 
> > We could always add it later, if we reserved some padding here for it.
> > Something like:
> > 
> >struct pvcalls_data_intf {
> > PVCALLS_RING_IDX in_cons, in_prod;
> > int32_t in_error;
> > 
> > uint8_t pad[52];
> > 
> > PVCALLS_RING_IDX out_cons, out_prod;
> > int32_t out_error;
> > 
> > uint8_t pad2[52]; <--- this is new
> > 
> > uint32_t ring_order;
> > grant_ref_t ref[];
> >};
> > 
> > We have plenty of space for the grant refs anyway. This way, we can
> > introduce in_event and out_event by eating up 4 bytes from each pad
> > array.
> 
> That is true.

I think it makes sense to start simple. The optimization could be a
decent first feature flag :-)


> > > > 
> > > > uint32_t ring_order;
> > > > grant_ref_t ref[];
> > > > };
> > > > 
> > > > /* not actually C compliant (ring_order changes from socket to 
> > > > socket) */
> > > > struct pvcalls_data {
> > > > char in[((1 << ring_order) << PAGE_SHIFT) / 2];
> > > > char out[((1 << ring_order) << PAGE_SHIFT) / 2];
> > > > };
> > > > 
> > > > - **ring_order**
> > > >   It represents the order of the data ring. The following list of grant
> > > >   references is of `(1 << ring_order)` elements. It cannot be greater 
> > > > than
> > > >   **max-page-order**, as specified by the backend on XenBus. It has to
> > > >   be one at minimum.
> > > 
> > > Oh? Why not zero? (4KB) as the 'max-page-order' has an example of zero 
> > > order?
> > > Perhaps if it MUST be one or more then the 'max-page-order' 

Re: [Xen-devel] [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP

2017-02-10 Thread kbuild test robot
Hi Paul,

[auto build test ERROR on xen-tip/linux-next]
[also build test ERROR on v4.10-rc7 next-20170210]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/Paul-Durrant/xen-privcmd-support-for-dm_op-and-restriction/20170211-001520
base:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
config: arm64-defconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget 
https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross
 -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

   drivers/xen/privcmd.c: In function 'privcmd_ioctl_dm_op':
>> drivers/xen/privcmd.c:673:7: error: implicit declaration of function 
>> 'HYPERVISOR_dm_op' [-Werror=implicit-function-declaration]
 rc = HYPERVISOR_dm_op(kdata.dom, kdata.num, xbufs);
  ^~~~
   cc1: some warnings being treated as errors

vim +/HYPERVISOR_dm_op +673 drivers/xen/privcmd.c

   667  for (i = 0; i < kdata.num; i++) {
   668  set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
   669  xbufs[i].size = kbufs[i].size;
   670  }
   671  
   672  xen_preemptible_hcall_begin();
 > 673  rc = HYPERVISOR_dm_op(kdata.dom, kdata.num, xbufs);
   674  xen_preemptible_hcall_end();
   675  
   676  out:

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




[Xen-devel] [xen-unstable test] 105683: trouble: blocked/broken/fail/pass

2017-02-10 Thread osstest service owner
flight 105683 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105683/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken 
REGR. vs. 105629
 test-amd64-i386-libvirt-xsm   3 host-install(3)broken REGR. vs. 105629
 test-amd64-i386-xl-qemuu-winxpsp3  3 host-install(3)   broken REGR. vs. 105629

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 105629
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 105629
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 105629
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 105629
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 105629
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 105629
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 105629
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 105629

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 build-arm64-xsm   5 xen-buildfail   never pass
 build-arm64   5 xen-buildfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen  ac6e7fd7a4826c14b85b9da59fc800a3a1bd3fd0
baseline version:
 xen  63e1d01b8fd948b3e0fa3beea494e407668aa43b

Last test of basis   105629  2017-02-08 06:54:04 Z2 days
Failing since105640  2017-02-08 14:19:37 Z2 days5 attempts
Testing same since   105683  2017-02-10 04:21:46 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Baptiste Daroussin 
  Fatih Acar 
  George Dunlap 
  Ian Jackson 

Re: [Xen-devel] [PATCH] xen-netfront: Delete rx_refill_timer in xennet_disconnect_backend()

2017-02-10 Thread David Miller
From: Boris Ostrovsky 
Date: Thu, 9 Feb 2017 08:42:59 -0500

> Are you going to take this to your tree or would you rather it goes
> via Xen tree?

Ok, I just did.

> And the same question for
> 
> https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg00625.html

As I stated in the thread, I applied this one.

> https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg00754.html

Likewise.

In the future, if you use netdev patchwork URLs, two things will
happen.  You will see immediately in the discussion log and the patch
state whether I applied it or not.  And second, I will be able to
reference and do something with the patch that much more quickly
and easily.

Thank you.



Re: [Xen-devel] [DOC v4] Xen transport for 9pfs

2017-02-10 Thread Konrad Rzeszutek Wilk
On Thu, Feb 09, 2017 at 05:31:46PM -0800, Stefano Stabellini wrote:
> On Wed, 8 Feb 2017, Konrad Rzeszutek Wilk wrote:
> > > ## Ring Setup
> > > 
> > > The shared page has the following layout:
> > > 
> > > typedef uint32_t XEN_9PFS_RING_IDX;
> > > 
> > > struct xen_9pfs_intf {
> > >   XEN_9PFS_RING_IDX in_cons, in_prod;
> > >   uint8_t pad[56];
> > >   XEN_9PFS_RING_IDX out_cons, out_prod;
> > > 
> > >   uint32_t ring_order;
> > > /* this is an array of (1 << ring_order) elements */
> > >   grant_ref_t ref[1];
> > > };
> > > 
> > > /* not actually C compliant (ring_order changes from ring to ring) */
> > > struct ring_data {
> > > char in[((1 << ring_order) << PAGE_SHIFT) / 2];
> > > char out[((1 << ring_order) << PAGE_SHIFT) / 2];
> > > };
> > > 
> > 
> > This is the same comment about the the PV Calls structure.
> > 
> > Would it make sense to add the 'in_events' and 'out_events'
> > as a notification mechanism?
> 
> As I wrote in the case of PV Calls, given that it's just an optimization
> and increases complexity, what if we add some padding right after
> 
>   XEN_9PFS_RING_IDX out_cons, out_prod;
> 
> so that if we want to add it in the future, we can just place there,
> instead of the first 4 bytes of the padding array?

Yeah. Padding makes me sleep easy at night :-)

> 
> struct xen_9pfs_intf {
>   XEN_9PFS_RING_IDX in_cons, in_prod;
>   uint8_t pad[56];
>   XEN_9PFS_RING_IDX out_cons, out_prod;
>   uint8_t pad2[56];
> 
>   uint32_t ring_order;
> /* this is an array of (1 << ring_order) elements */
>   grant_ref_t ref[1];
> };
> 



Re: [Xen-devel] [DOC v8] PV Calls protocol design

2017-02-10 Thread Konrad Rzeszutek Wilk
.snip..
> > > Request fields:
> > > 
> > > - **cmd** value: 0
> > > - additional fields:
> > >   - **id**: identifies the socket
> > >   - **addr**: address to connect to, see [Socket families and address 
> > > format]
> > 
> > 
> > Hm, so what do we do if we want to support AF_UNIX which has an addr of
> > 108 bytes?
> 
> We write a protocol extension and bump the protocol version. However, we

Right. How would you change the protocol for this?

I'm not asking to have this in this protocol, but I just want us to think
about what we could do so that if somebody was to implement this - how
could we make it easier for them?

My initial thought was to spread the request over two "old" structures.
And if so .. would it make sense to include an extra flag or such?

> could make the addr array size larger now to be more future proof, but
> it takes up memory and I have no use for it, given that we can use
> loopback for the same purpose.
> 

..snip..
> > >  Indexes Page Structure
> > > 
> > > typedef uint32_t PVCALLS_RING_IDX;
> > > 
> > > struct pvcalls_data_intf {
> > >   PVCALLS_RING_IDX in_cons, in_prod;
> > >   int32_t in_error;
> > 
> > You don't want to perhaps include in_event?
> > > 
> > >   uint8_t pad[52];
> > > 
> > >   PVCALLS_RING_IDX out_cons, out_prod;
> > >   int32_t out_error;
> > 
> > And out_event as way to do some form of interrupt mitigation
> > (similar to what you had proposed?)
> 
> Yes, the in_event / out_event optimization that I wrote for the 9pfs
> protocol could work here too. However, I thought you preferred to remove
> it for now as it is not required and increases complexity?

I did. But I am coming to it by looking at the ring.h header.

My recollection was that your optimization was a bit different than
what ring.h has.

> 
> We could always add it later, if we reserved some padding here for it.
> Something like:
> 
>struct pvcalls_data_intf {
>   PVCALLS_RING_IDX in_cons, in_prod;
>   int32_t in_error;
> 
>   uint8_t pad[52];
> 
>   PVCALLS_RING_IDX out_cons, out_prod;
>   int32_t out_error;
> 
>   uint8_t pad2[52]; <--- this is new
> 
>   uint32_t ring_order;
>   grant_ref_t ref[];
>};
> 
> We have plenty of space for the grant refs anyway. This way, we can
> introduce in_event and out_event by eating up 4 bytes from each pad
> array.

That is true.
> 
> 
> > > 
> > >   uint32_t ring_order;
> > >   grant_ref_t ref[];
> > > };
> > > 
> > > /* not actually C compliant (ring_order changes from socket to 
> > > socket) */
> > > struct pvcalls_data {
> > > char in[((1 << ring_order) << PAGE_SHIFT) / 2];
> > > char out[((1 << ring_order) << PAGE_SHIFT) / 2];
> > > };
> > > 
> > > - **ring_order**
> > >   It represents the order of the data ring. The following list of grant
> > >   references is of `(1 << ring_order)` elements. It cannot be greater than
> > >   **max-page-order**, as specified by the backend on XenBus. It has to
> > >   be one at minimum.
> > 
> > Oh? Why not zero? (4KB) as the 'max-page-order' has an example of zero 
> > order?
> > Perhaps if it MUST be one or more then the 'max-page-order' should say
> > that at least it MUST be one?
> 
> So that each in and out array gets to have its own dedicated page,
> although I don't think it's strictly necessary. With zero, they would
> get half a page each.

That is fine. Just pls document 'max-page-order' to make it clear it MUST
be 1 or higher.
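The sizing constraint can be sketched numerically. This is a toy calculation assuming 4 KiB pages; `PAGE_SHIFT` and the helper name are illustrative, not from the spec.

```c
#include <stddef.h>

#define PAGE_SHIFT 12 /* assumes 4 KiB pages */

/* Bytes available per direction (in or out) for a given ring_order:
 * the (1 << ring_order) granted pages are split evenly between the
 * two mono-directional data rings. */
size_t bytes_per_direction(unsigned int ring_order)
{
    return (((size_t)1 << ring_order) << PAGE_SHIFT) / 2;
}
```

With ring_order = 1 each direction gets a full 4 KiB page to itself; with ring_order = 0 the two directions would share one page, 2 KiB each, which is the case the minimum of one rules out.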



Re: [Xen-devel] Xen on ARM IRQ latency and scheduler overhead

2017-02-10 Thread Stefano Stabellini
On Fri, 10 Feb 2017, Dario Faggioli wrote:
> On Thu, 2017-02-09 at 16:54 -0800, Stefano Stabellini wrote:
> > Hi all,
> > 
> Hi,
> 
> > I have run some IRQ latency measurements on Xen on ARM on a Xilinx
> > ZynqMP board (four Cortex A53 cores, GICv2).
> > 
> > Dom0 has 1 vcpu pinned to cpu0, DomU has 1 vcpu pinned to cpu2.
> > Dom0 is Ubuntu. DomU is an ad-hoc baremetal app to measure interrupt
> > latency: https://github.com/edgarigl/tbm
> > 
> Right, interesting use case. I'm glad to see there's some interest in
> it, and am happy to help investigating, and trying to make things
> better.

Thank you!


> > I modified the app to use the phys_timer instead of the virt_timer. 
> > You
> > can build it with:
> > 
> > make CFG=configs/xen-guest-irq-latency.cfg 
> > 
> Ok, do you (or anyone) mind explaining in a little bit more details
> what the app tries to measure and how it does that.

Give a look at app/xen/guest_irq_latency/apu.c:

https://github.com/edgarigl/tbm/blob/master/app/xen/guest_irq_latency/apu.c

This is my version which uses the phys_timer (instead of the virt_timer):

https://github.com/sstabellini/tbm/blob/phys-timer/app/xen/guest_irq_latency/apu.c

Edgar can jump in to add more info if needed (he is the author of the
app), but as you can see from the code, the app is very simple. It sets
a timer event in the future, then, after receiving the event, it checks
the current time and compares it with the deadline.


> As a matter of fact, I'm quite familiar with the scenario (I've spent a
> lot of time playing with cyclictest https://rt.wiki.kernel.org/index.ph
> p/Cyclictest ) but I don't immediately understand the meaning of way
> the timer is programmed, what is supposed to be in the various
> variables/register, what actually is 'freq', etc.

The timer is programmed by writing the compare value to the cntp_cval
system register, see a64_write_timer_cval. The counter is read by
reading the cntpct system register, see
arch-aarch64/aarch64-excp.c:aarch64_irq. freq is the frequency of the
timer (which is lower than the cpu frequency). freq_k is the
multiplication factor to convert timer counter numbers into nanosec, on
my platform it's 10.

If you want more info on the timer, give a look at "Generic Timer" in
the ARM Architecture Reference Manual.
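Put together, the measurement amounts to something like the sketch below. The register accessors are stand-ins (the real code uses mrs/msr on cntpct/cntp_cval, see the linked apu.c), the function names are made up, and a fake counter replaces the hardware so the logic can run anywhere.

```c
#include <stdint.h>

/* Stand-ins for the system-register accessors: the real code reads the
 * cntpct counter and writes the cntp_cval compare value with mrs/msr.
 * A fake counter replaces the hardware here. */
static uint64_t fake_counter;
static uint64_t read_cntpct(void)           { return fake_counter; }
static void     write_cntp_cval(uint64_t v) { (void)v; /* would arm HW */ }

static const uint64_t freq_k = 10; /* ns per timer tick; platform specific */
static uint64_t deadline;

/* Arm the physical timer to fire `ticks' counter ticks from now. */
void arm_timer(uint64_t ticks)
{
    deadline = read_cntpct() + ticks;
    write_cntp_cval(deadline);
    /* ... enable the timer and unmask its interrupt ... */
}

/* Called on IRQ entry: how far past the deadline we are, in nanoseconds. */
uint64_t measure_latency_ns(void)
{
    return (read_cntpct() - deadline) * freq_k;
}
```

`freq_k = 10` matches the platform described above (a timer tick every 10 ns, i.e. a 100 MHz generic timer); other boards will have a different factor.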


> > These are the results, in nanosec:
> > 
> >     AVG MIN MAX WARM MAX
> > 
> > NODEBUG no WFI  1890    1800    3170    2070
> > NODEBUG WFI 4850    4810    7030    4980
> > NODEBUG no WFI credit2  2217    2090    3420    2650
> > NODEBUG WFI credit2 8080    7890    10320   8300
> > 
> > DEBUG no WFI    2252    2080    3320    2650
> > DEBUG WFI   6500    6140    8520    8130
> > DEBUG WFI, credit2  8050    7870    10680   8450
> > 
> > DEBUG means Xen DEBUG build.
> >
> Mmm, and Credit2 (with WFI) behave almost the same (and even a bit
> better in some cases) with debug enabled. While in Credit1, debug yes
> or no makes quite a few difference, AFAICT, especially in the WFI case.
> 
> That looks a bit strange, as I'd have expected the effect to be similar
> (there's actually quite a bit of debug checks in Credit2, maybe even
> more than in Credit1).
> 
> > WARM MAX is the maximum latency, taking out the first few interrupts
> > to
> > warm the caches.
> > WFI is the ARM and ARM64 sleeping instruction, trapped and emulated
> > by
> > Xen by calling vcpu_block.
> > 
> > As you can see, depending on whether the guest issues a WFI or not
> > while
> > waiting for interrupts, the results change significantly.
> > Interestingly,
> > credit2 does worse than credit1 in this area.
> > 
> This is with current staging, right?

That's right.


> If yes: in Credit1 on ARM you never stop the scheduler tick, as we do
> on x86. This means the system is, in general, "more awake" than under
> Credit2, which does not have a periodic tick (and FWIW, also "more
> awake" than Credit1 on x86, as far as the scheduler is concerned, at
> least).
> 
> Whether or not this significantly impacts your measurements, I don't
> know, as it depends on a bunch of factors. What we know is that this
> has enough impact to trigger the RCU bug Julien discovered (in a
> different scenario, I know), so I would not rule it out.
> 
> I can try sending a quick patch for disabling the tick when a CPU is
> idle, but I'd need your help in testing it.

That might be useful; however, if I understand this right, we don't
actually want a periodic timer in Xen just to make the system more
responsive, do we?


> > Trying to figure out where those 3000-4000ns of difference between
> > the
> > WFI and non-WFI cases come from, I wrote a patch to zero the latency
> > introduced by xen/arch/arm/domain.c:schedule_tail. That saves about
> > 1000ns. There are no other arch specific context switch functions
> > worth
> > optimizing.
> > 
> Yeah. It would be interesting to see a trace, but we still don't have
> that for ARM. :-(

Indeed.



Re: [Xen-devel] [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP

2017-02-10 Thread Boris Ostrovsky
On 02/10/2017 11:28 AM, Paul Durrant wrote:
>> -Original Message-
>> From: Boris Ostrovsky [mailto:boris.ostrov...@oracle.com]
>> Sent: 10 February 2017 16:18
>> To: Paul Durrant ; xen-de...@lists.xenproject.org;
>> linux-ker...@vger.kernel.org
>> Cc: Juergen Gross 
>> Subject: Re: [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP
>>
>> On 02/10/2017 09:24 AM, Paul Durrant wrote:
>>> +static long privcmd_ioctl_dm_op(void __user *udata)
>>> +{
>>> +   struct privcmd_dm_op kdata;
>>> +   struct privcmd_dm_op_buf *kbufs;
>>> +   unsigned int nr_pages = 0;
>>> +   struct page **pages = NULL;
>>> +   struct xen_dm_op_buf *xbufs = NULL;
>>> +   unsigned int i;
>>> +   long rc;
>>> +
>>> +   if (copy_from_user(&kdata, udata, sizeof(kdata)))
>>> +   return -EFAULT;
>>> +
>>> +   if (kdata.num == 0)
>>> +   return 0;
>>> +
>>> +   /*
>>> +* Set a tolerable upper limit on the number of buffers
>>> +* without being overly restrictive, since we can't easily
>>> +* predict what future dm_ops may require.
>>> +*/
>> I think this deserves its own macro since it really has nothing to do
>> with page size, has it? Especially since you are referencing it again
>> below too.
>>
>>
>>> +   if (kdata.num * sizeof(*kbufs) > PAGE_SIZE)
>>> +   return -E2BIG;
>>> +
>>> +   kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
>>> +   if (!kbufs)
>>> +   return -ENOMEM;
>>> +
>>> +   if (copy_from_user(kbufs, kdata.ubufs,
>>> +  sizeof(*kbufs) * kdata.num)) {
>>> +   rc = -EFAULT;
>>> +   goto out;
>>> +   }
>>> +
>>> +   for (i = 0; i < kdata.num; i++) {
>>> +   if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
>>> +  kbufs[i].size)) {
>>> +   rc = -EFAULT;
>>> +   goto out;
>>> +   }
>>> +
>>> +   nr_pages += DIV_ROUND_UP(
>>> +   offset_in_page(kbufs[i].uptr) + kbufs[i].size,
>>> +   PAGE_SIZE);
>>> +   }
>>> +
>>> +   /*
>>> +* Again, set a tolerable upper limit on the number of pages
>>> +* needed to lock all the buffers without being overly
>>> +* restrictive, since we can't easily predict the size of
>>> +* buffers future dm_ops may use.
>>> +*/
>> OTOH, these two cases describe different types of copying (the first one
>> is for buffer descriptors and the second is for buffers themselves). And
>> so should they be limited by the same value?
>>
> I think there needs to be some limit and limiting the allocation to a page 
> was the best I came up with. Can you think of a better one?

How about something like (with rather arbitrary values)

#define PRIVCMD_DMOP_MAX_NUM_BUFFERS   16
#define PRIVCMD_DMOP_MAX_TOT_BUFFER_SZ 4096

and make them part of the interface (i.e. put them into privcmd.h)?

-boris


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH v1 17/21] ARM: NUMA: Extract memory proximity from SRAT table

2017-02-10 Thread Konrad Rzeszutek Wilk
On Fri, Feb 10, 2017 at 12:33:33PM -0500, Konrad Rzeszutek Wilk wrote:
> On Thu, Feb 09, 2017 at 09:27:09PM +0530, vijay.kil...@gmail.com wrote:
> > From: Vijaya Kumar K 
> > 
> > Register SRAT entry handler for type
> > ACPI_SRAT_TYPE_MEMORY_AFFINITY to parse SRAT table
> > and extract proximity for all memory mappings.
> 
> Why can't you use the arch/x86/srat.c code? Or move parts of that code
> into common code?

And to be clear - I meant the 'acpi_numa_memory_affinity_init' function?



Re: [Xen-devel] [RFC PATCH v1 17/21] ARM: NUMA: Extract memory proximity from SRAT table

2017-02-10 Thread Konrad Rzeszutek Wilk
On Thu, Feb 09, 2017 at 09:27:09PM +0530, vijay.kil...@gmail.com wrote:
> From: Vijaya Kumar K 
> 
> Register SRAT entry handler for type
> ACPI_SRAT_TYPE_MEMORY_AFFINITY to parse SRAT table
> and extract proximity for all memory mappings.

Why can't you use the arch/x86/srat.c code? Or move parts of that code
into common code?



Re: [Xen-devel] [RFC PATCH v1 00/21] ARM: Add Xen NUMA support

2017-02-10 Thread Konrad Rzeszutek Wilk
On Thu, Feb 09, 2017 at 09:26:52PM +0530, vijay.kil...@gmail.com wrote:
> From: Vijaya Kumar K 
> 
> With this RFC patch series, NUMA support is added for arm platform.
> Both DT and ACPI based NUMA support is added.
> Only Xen is made aware of the NUMA platform. Dom0 awareness is not
> added.
> 
> As part of this series, the code under x86 architecture is
> reused by moving into common files.
> New files xen/common/numa.c and xen/common/srat.c are added,
> which are common to both x86 and arm.
> 
> Patches 1 - 12 & 20 are for DT NUMA and 13 - 19 & 21 are for
> ACPI NUMA.
> 
> DT NUMA: The following major changes are performed
>  - Dropped numa-node-id information from Dom0 DT,
>    so that Dom0 devices allocate from node 0 for
>    devmalloc requests.
>  - Memory DT is not deleted by EFI. It is exposed to Xen
>to extract numa information.
>  - On NUMA failure, fall back to non-NUMA booting,
>    assuming all the memory and CPUs are under node 0.
>  - CONFIG_NUMA is introduced.
> 
> ACPI NUMA:
>  - MADT is parsed before parsing SRAT table to extract
>    CPU_ID to MPIDR mapping info. In Linux, while parsing the SRAT
>    table, the MADT table is opened to extract the MPIDR. However,
>    this approach does not work on Xen, which allows only one table
>    to be open at a time: when an ACPI table is opened, Xen maps it
>    to a single region, so opening ACPI tables recursively overwrites
>    the contents.

Huh? Why can't you use vmap APIs to map them?



Re: [Xen-devel] [PATCH v2 02/11] x86emul: flatten twobyte_table[]

2017-02-10 Thread Andrew Cooper
On 01/02/17 11:13, Jan Beulich wrote:
> +static const struct {
> +opcode_desc_t desc;
> +} twobyte_table[256] = {
> +[0x00] = { ModRM },

This is definitely an improvement in readability, so Acked-by: Andrew
Cooper  (I have briefly checked that
everything appears to be the same, but not checked thoroughly)

I had a plan to do this anyway, including the onebyte table, and adding
instruction/group comments like the case statements for emulation.  Is
that something you can introduce in your series, or shall I wait and
retrofit a patch later?

~Andrew



Re: [Xen-devel] [PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function

2017-02-10 Thread Waiman Long
On 02/10/2017 11:35 AM, Waiman Long wrote:
> On 02/10/2017 11:19 AM, Peter Zijlstra wrote:
>> On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
>>> It was found when running fio sequential write test with a XFS ramdisk
>>> on a VM running on a 2-socket x86-64 system, the %CPU times as reported
>>> by perf were as follows:
>>>
>>>  69.75%  0.59%  fio  [k] down_write
>>>  69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
>>>  67.12%  1.12%  fio  [k] rwsem_down_write_failed
>>>  63.48% 52.77%  fio  [k] osq_lock
>>>   9.46%  7.88%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempt
>>>   3.93%  3.93%  fio  [k] __kvm_vcpu_is_preempted
>>>
>> Thinking about this again, wouldn't something like the below also work?
>>
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 099fcba4981d..6aa33702c15c 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -589,6 +589,7 @@ static void kvm_wait(u8 *ptr, u8 val)
>>  local_irq_restore(flags);
>>  }
>>  
>> +#ifdef CONFIG_X86_32
>>  __visible bool __kvm_vcpu_is_preempted(int cpu)
>>  {
>>  struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
>> @@ -597,6 +598,31 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
>>  }
>>  PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
>>  
>> +#else
>> +
>> +extern bool __raw_callee_save___kvm_vcpu_is_preempted(int);
>> +
>> +asm(
>> +".pushsection .text;"
>> +".global __raw_callee_save___kvm_vcpu_is_preempted;"
>> +".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
>> +"__raw_callee_save___kvm_vcpu_is_preempted:"
>> +FRAME_BEGIN
>> +"push %rdi;"
>> +"push %rdx;"
>> +"movslq  %edi, %rdi;"
>> +"movq    $steal_time+16, %rax;"
>> +"movq    __per_cpu_offset(,%rdi,8), %rdx;"
>> +"cmpb    $0, (%rdx,%rax);"
>> +"setne   %al;"
>> +"pop %rdx;"
>> +"pop %rdi;"
>> +FRAME_END
>> +"ret;"
>> +".popsection");
>> +
>> +#endif
>> +
>>  /*
>>   * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
>>   */
> That should work for now. I have done something similar for
> __pv_queued_spin_unlock. However, this has the problem of creating a
> dependency on the exact layout of the steal_time structure. Maybe the
> constant 16 can be passed in as offsetof(struct kvm_steal_time,
> preempted) to the asm call.
>
> Cheers,
> Longman

One more thing: this will improve KVM performance, but it won't help Xen.

I looked into the assembly code for rwsem_spin_on_owner. It needs to
save and restore 2 additional registers with my patch. Doing it your
way will transfer the save and restore overhead to the assembly code.
However, __kvm_vcpu_is_preempted() is called multiple times per
invocation of rwsem_spin_on_owner. That function is simple enough that
making __kvm_vcpu_is_preempted() callee-save won't produce much compiler
optimization opportunity. The outer function rwsem_down_write_failed()
does appear to be a bit bigger (from 866 bytes to 884 bytes) though.

Cheers,
Longman





Re: [Xen-devel] [PATCH v2 01/11] x86emul: catch exceptions occurring in stubs

2017-02-10 Thread Andrew Cooper
On 01/02/17 11:12, Jan Beulich wrote:
> Before adding more use of stubs cloned from decoded guest insns, guard
> ourselves against mistakes there: Should an exception (with the
> noteworthy exception of #PF) occur inside the stub, forward it to the
> guest.

Why exclude #PF ? Nothing in a stub should be hitting a pagefault in the
first place.

>
> Since the exception fixup table entry can't encode the address of the
> faulting insn itself, attach it to the return address instead. This at
> once provides a convenient place to hand the exception information
> back: The return address is being overwritten by it before branching to
> the recovery code.
>
> Take the opportunity and (finally!) add symbol resolution to the
> respective log messages (the new one is intentionally not being coded
> that way, as it covers stub addresses only, which don't have symbols
> associated).
>
> Also take the opportunity and make search_one_extable() static again.
>
> Suggested-by: Andrew Cooper 
> Signed-off-by: Jan Beulich 
> ---
> There's one possible caveat here: A stub invocation immediately
> followed by another instruction having fault recovery attached to it
> would not work properly, as the table lookup can only ever find one of
> the two entries. Such CALL instructions would then need to be followed
> by a NOP for disambiguation (even if only a slim chance exists for the
> compiler to emit things that way).

Why key on return address at all?  %rip being in the stubs should be
good enough.

>
> TBD: Instead of adding a 2nd search_exception_table() invocation to
>  do_trap(), we may want to consider moving the existing one down:
>  Xen code (except when executing stubs) shouldn't be raising #MF
>  or #XM, and hence fixups attached to instructions shouldn't care
>  about getting invoked for those. With that, doing the HVM special
>  case for them before running search_exception_table() would be
>  fine.
>
> Note that the two SIMD related stub invocations in the insn emulator
> intentionally don't get adjusted here, as subsequent patches will
> replace them anyway.
>
> --- a/xen/arch/x86/extable.c
> +++ b/xen/arch/x86/extable.c
> @@ -6,6 +6,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  
> @@ -62,7 +63,7 @@ void __init sort_exception_tables(void)
>  sort_exception_table(__start___pre_ex_table, __stop___pre_ex_table);
>  }
>  
> -unsigned long
> +static unsigned long
>  search_one_extable(const struct exception_table_entry *first,
> const struct exception_table_entry *last,
> unsigned long value)
> @@ -85,15 +86,88 @@ search_one_extable(const struct exceptio
>  }
>  
>  unsigned long
> -search_exception_table(unsigned long addr)
> +search_exception_table(const struct cpu_user_regs *regs, bool check_stub)
>  {
> -const struct virtual_region *region = find_text_region(addr);
> +const struct virtual_region *region = find_text_region(regs->rip);
> +unsigned long stub = this_cpu(stubs.addr);
>  
>  if ( region && region->ex )
> -return search_one_extable(region->ex, region->ex_end - 1, addr);
> +return search_one_extable(region->ex, region->ex_end - 1, regs->rip);
> +
> +if ( check_stub &&
> + regs->rip >= stub + STUB_BUF_SIZE / 2 &&
> + regs->rip < stub + STUB_BUF_SIZE &&
> + regs->rsp > (unsigned long)_stub &&
> + regs->rsp < (unsigned long)get_cpu_info() )

How much do we care about accidentally clobbering %rsp in a stub?

If we encounter a fault with %rip in the stubs, we should obviously
terminate if %rsp is outside of the main stack. Nothing good can come
from continuing.

> +{
> +unsigned long retptr = *(unsigned long *)regs->rsp;
> +
> +region = find_text_region(retptr);
> +retptr = region && region->ex
> + ? search_one_extable(region->ex, region->ex_end - 1, retptr)
> + : 0;
> +if ( retptr )
> +{
> +/*
> + * Put trap number and error code on the stack (in place of the
> + * original return address) for recovery code to pick up.
> + */
> +*(unsigned long *)regs->rsp = regs->error_code |
> +((uint64_t)(uint8_t)regs->entry_vector << 32);
> +return retptr;

I have found an alternative which has proved very neat in XTF.

By calling the stub like this:

asm volatile ("call *%[stub]" : "=a" (exn) : "a" (0));

and having this fixup write straight into %rax, the stub ends up
behaving as having an unsigned long return value.  This avoids the need
for any out-of-line code recovering the exception information and
redirecting back as if the call had completed normally.

http://xenbits.xen.org/gitweb/?p=xtf.git;a=blob;f=include/arch/x86/exinfo.h;hb=master

One subtle trap I fell over is that you also need a valid bit to help
distinguish #DE, which always 

Re: [Xen-devel] [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP

2017-02-10 Thread Paul Durrant
> -Original Message-
> From: Boris Ostrovsky [mailto:boris.ostrov...@oracle.com]
> Sent: 10 February 2017 16:18
> To: Paul Durrant ; xen-de...@lists.xenproject.org;
> linux-ker...@vger.kernel.org
> Cc: Juergen Gross 
> Subject: Re: [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP
> 
> On 02/10/2017 09:24 AM, Paul Durrant wrote:
> > +static long privcmd_ioctl_dm_op(void __user *udata)
> > +{
> > +   struct privcmd_dm_op kdata;
> > +   struct privcmd_dm_op_buf *kbufs;
> > +   unsigned int nr_pages = 0;
> > +   struct page **pages = NULL;
> > +   struct xen_dm_op_buf *xbufs = NULL;
> > +   unsigned int i;
> > +   long rc;
> > +
> > +   if (copy_from_user(&kdata, udata, sizeof(kdata)))
> > +   return -EFAULT;
> > +
> > +   if (kdata.num == 0)
> > +   return 0;
> > +
> > +   /*
> > +* Set a tolerable upper limit on the number of buffers
> > +* without being overly restrictive, since we can't easily
> > +* predict what future dm_ops may require.
> > +*/
> 
> I think this deserves its own macro since it really has nothing to do
> with page size, has it? Especially since you are referencing it again
> below too.
> 
> 
> > +   if (kdata.num * sizeof(*kbufs) > PAGE_SIZE)
> > +   return -E2BIG;
> > +
> > +   kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
> > +   if (!kbufs)
> > +   return -ENOMEM;
> > +
> > +   if (copy_from_user(kbufs, kdata.ubufs,
> > +  sizeof(*kbufs) * kdata.num)) {
> > +   rc = -EFAULT;
> > +   goto out;
> > +   }
> > +
> > +   for (i = 0; i < kdata.num; i++) {
> > +   if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
> > +  kbufs[i].size)) {
> > +   rc = -EFAULT;
> > +   goto out;
> > +   }
> > +
> > +   nr_pages += DIV_ROUND_UP(
> > +   offset_in_page(kbufs[i].uptr) + kbufs[i].size,
> > +   PAGE_SIZE);
> > +   }
> > +
> > +   /*
> > +* Again, set a tolerable upper limit on the number of pages
> > +* needed to lock all the buffers without being overly
> > +* restrictive, since we can't easily predict the size of
> > +* buffers future dm_ops may use.
> > +*/
> 
> OTOH, these two cases describe different types of copying (the first one
> is for buffer descriptors and the second is for buffers themselves). And
> so should they be limited by the same value?
> 

I think there needs to be some limit and limiting the allocation to a page was 
the best I came up with. Can you think of a better one?

> > +   if (nr_pages * sizeof(*pages) > PAGE_SIZE) {
> > +   rc = -E2BIG;
> > +   goto out;
> > +   }
> > +
> > +   pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
> > +   if (!pages) {
> > +   rc = -ENOMEM;
> > +   goto out;
> > +   }
> > +
> > +   xbufs = kcalloc(kdata.num, sizeof(*xbufs), GFP_KERNEL);
> > +   if (!xbufs) {
> > +   rc = -ENOMEM;
> > +   goto out;
> > +   }
> > +
> > +   rc = lock_pages(kbufs, kdata.num, pages, nr_pages);
> 
> 
> Aren't those buffers already locked (as Andrew mentioned)? They are
> mmapped with MAP_LOCKED.

No, they're not. The new libxendevicemodel code I have does not make any use of 
xencall or guest handles when privcmd supports the DM_OP ioctl, so the caller 
buffers will not be locked.

> 
> And I also wonder whether we need to take rlimit(RLIMIT_MEMLOCK) into
> account.
> 

Maybe. I'll look at that.

  Paul

> -boris
> 
> 




Re: [Xen-devel] [PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function

2017-02-10 Thread Waiman Long
On 02/10/2017 11:19 AM, Peter Zijlstra wrote:
> On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
>> It was found when running fio sequential write test with a XFS ramdisk
>> on a VM running on a 2-socket x86-64 system, the %CPU times as reported
>> by perf were as follows:
>>
>>  69.75%  0.59%  fio  [k] down_write
>>  69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
>>  67.12%  1.12%  fio  [k] rwsem_down_write_failed
>>  63.48% 52.77%  fio  [k] osq_lock
>>   9.46%  7.88%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempt
>>   3.93%  3.93%  fio  [k] __kvm_vcpu_is_preempted
>>
> Thinking about this again, wouldn't something like the below also work?
>
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 099fcba4981d..6aa33702c15c 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -589,6 +589,7 @@ static void kvm_wait(u8 *ptr, u8 val)
>   local_irq_restore(flags);
>  }
>  
> +#ifdef CONFIG_X86_32
>  __visible bool __kvm_vcpu_is_preempted(int cpu)
>  {
>   struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
> @@ -597,6 +598,31 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
>  }
>  PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
>  
> +#else
> +
> +extern bool __raw_callee_save___kvm_vcpu_is_preempted(int);
> +
> +asm(
> +".pushsection .text;"
> +".global __raw_callee_save___kvm_vcpu_is_preempted;"
> +".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
> +"__raw_callee_save___kvm_vcpu_is_preempted:"
> +FRAME_BEGIN
> +"push %rdi;"
> +"push %rdx;"
> +"movslq  %edi, %rdi;"
> +"movq    $steal_time+16, %rax;"
> +"movq    __per_cpu_offset(,%rdi,8), %rdx;"
> +"cmpb    $0, (%rdx,%rax);"
> +"setne   %al;"
> +"pop %rdx;"
> +"pop %rdi;"
> +FRAME_END
> +"ret;"
> +".popsection");
> +
> +#endif
> +
>  /*
>   * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
>   */

That should work for now. I have done something similar for
__pv_queued_spin_unlock. However, this has the problem of creating a
dependency on the exact layout of the steal_time structure. Maybe the
constant 16 can be passed in as offsetof(struct kvm_steal_time,
preempted) to the asm call.

Cheers,
Longman





Re: [Xen-devel] [PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function

2017-02-10 Thread Paolo Bonzini


On 10/02/2017 16:43, Waiman Long wrote:
> It was found when running fio sequential write test with a XFS ramdisk
> on a VM running on a 2-socket x86-64 system, the %CPU times as reported
> by perf were as follows:
> 
>  69.75%  0.59%  fio  [k] down_write
>  69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
>  67.12%  1.12%  fio  [k] rwsem_down_write_failed
>  63.48% 52.77%  fio  [k] osq_lock
>   9.46%  7.88%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempt
>   3.93%  3.93%  fio  [k] __kvm_vcpu_is_preempted
> 
> Making vcpu_is_preempted() a callee-save function has a relatively
> high cost on x86-64 primarily due to at least one more cacheline of
> data access from the saving and restoring of registers (8 of them)
> to and from stack as well as one more level of function call. As
> vcpu_is_preempted() is called within the spinlock, mutex and rwsem
> slowpaths, there isn't much to gain by making it callee-save. So it
> is now changed to a normal function call instead.
> 
> With this patch applied on both bare-metal & KVM guest on a 2-socket
> 16-core 32-thread system with 16 parallel jobs (8 on each socket), the
> aggregate bandwidth of the fio test on an XFS ramdisk was as follows:
> 
>Bare MetalKVM Guest
>I/O Type  w/o patchwith patch   w/o patchwith patch
>  ---   ---
>random read   8650.5 MB/s  8560.9 MB/s  7602.9 MB/s  8196.1 MB/s  
>seq read  9104.8 MB/s  9397.2 MB/s  8293.7 MB/s  8566.9 MB/s
>random write  1623.8 MB/s  1626.7 MB/s  1590.6 MB/s  1700.7 MB/s
>seq write 1626.4 MB/s  1624.9 MB/s  1604.8 MB/s  1726.3 MB/s
> 
> The perf data (on KVM guest) now became:
> 
>  70.78%  0.58%  fio  [k] down_write
>  70.20%  0.01%  fio  [k] call_rwsem_down_write_failed
>  69.70%  1.17%  fio  [k] rwsem_down_write_failed
>  59.91% 55.42%  fio  [k] osq_lock
>  10.14% 10.14%  fio  [k] __kvm_vcpu_is_preempted
> 
> On bare metal, the patch doesn't introduce any performance
> regression. On KVM guest, it produces noticeable performance
> improvement (up to 7%).
> 
> Signed-off-by: Waiman Long 
> ---
>  v1->v2:
>   - Rerun the fio test on a different system on both bare-metal and a
> KVM guest. Both sockets were utilized in this test.
>   - The commit log was updated with new performance numbers, but the
> patch wasn't changed.
>   - Drop patch 2.
> 
>  arch/x86/include/asm/paravirt.h   | 2 +-
>  arch/x86/include/asm/paravirt_types.h | 2 +-
>  arch/x86/kernel/kvm.c | 7 ++-
>  arch/x86/kernel/paravirt-spinlocks.c  | 6 ++
>  arch/x86/xen/spinlock.c   | 4 +---
>  5 files changed, 7 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 864f57b..2515885 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -676,7 +676,7 @@ static __always_inline void pv_kick(int cpu)
>  
>  static __always_inline bool pv_vcpu_is_preempted(int cpu)
>  {
> - return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
> + return PVOP_CALL1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
>  }
>  
>  #endif /* SMP && PARAVIRT_SPINLOCKS */
> diff --git a/arch/x86/include/asm/paravirt_types.h 
> b/arch/x86/include/asm/paravirt_types.h
> index bb2de45..88dc852 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -309,7 +309,7 @@ struct pv_lock_ops {
>   void (*wait)(u8 *ptr, u8 val);
>   void (*kick)(int cpu);
>  
> - struct paravirt_callee_save vcpu_is_preempted;
> + bool (*vcpu_is_preempted)(int cpu);
>  };
>  
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 099fcba..eb3753d 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -595,7 +595,6 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
>  
>   return !!src->preempted;
>  }
> -PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
>  
>  /*
>   * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
> @@ -614,10 +613,8 @@ void __init kvm_spinlock_init(void)
>   pv_lock_ops.wait = kvm_wait;
>   pv_lock_ops.kick = kvm_kick_cpu;
>  
> - if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
> - pv_lock_ops.vcpu_is_preempted =
> - PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
> - }
> + if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME))
> + pv_lock_ops.vcpu_is_preempted = __kvm_vcpu_is_preempted;
>  }
>  
>  #endif   /* CONFIG_PARAVIRT_SPINLOCKS */
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
> b/arch/x86/kernel/paravirt-spinlocks.c
> index 6259327..da050bc 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -24,12 +24,10 @@ __visible bool __native_vcpu_is_preempted(int cpu)
>  {
>  

Re: [Xen-devel] [PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function

2017-02-10 Thread Peter Zijlstra
On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
> It was found when running fio sequential write test with a XFS ramdisk
> on a VM running on a 2-socket x86-64 system, the %CPU times as reported
> by perf were as follows:
> 
>  69.75%  0.59%  fio  [k] down_write
>  69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
>  67.12%  1.12%  fio  [k] rwsem_down_write_failed
>  63.48% 52.77%  fio  [k] osq_lock
>   9.46%  7.88%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempt
>   3.93%  3.93%  fio  [k] __kvm_vcpu_is_preempted
> 

Thinking about this again, wouldn't something like the below also work?


diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba4981d..6aa33702c15c 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -589,6 +589,7 @@ static void kvm_wait(u8 *ptr, u8 val)
local_irq_restore(flags);
 }
 
+#ifdef CONFIG_X86_32
 __visible bool __kvm_vcpu_is_preempted(int cpu)
 {
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
@@ -597,6 +598,31 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
 }
 PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
 
+#else
+
+extern bool __raw_callee_save___kvm_vcpu_is_preempted(int);
+
+asm(
+".pushsection .text;"
+".global __raw_callee_save___kvm_vcpu_is_preempted;"
+".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
+"__raw_callee_save___kvm_vcpu_is_preempted:"
+FRAME_BEGIN
+"push %rdi;"
+"push %rdx;"
+"movslq  %edi, %rdi;"
+"movq    $steal_time+16, %rax;"
+"movq    __per_cpu_offset(,%rdi,8), %rdx;"
+"cmpb    $0, (%rdx,%rax);"
+"setne   %al;"
+"pop %rdx;"
+"pop %rdi;"
+FRAME_END
+"ret;"
+".popsection");
+
+#endif
+
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
  */



Re: [Xen-devel] [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP

2017-02-10 Thread Boris Ostrovsky
On 02/10/2017 09:24 AM, Paul Durrant wrote:
> +static long privcmd_ioctl_dm_op(void __user *udata)
> +{
> + struct privcmd_dm_op kdata;
> + struct privcmd_dm_op_buf *kbufs;
> + unsigned int nr_pages = 0;
> + struct page **pages = NULL;
> + struct xen_dm_op_buf *xbufs = NULL;
> + unsigned int i;
> + long rc;
> +
> + if (copy_from_user(&kdata, udata, sizeof(kdata)))
> + return -EFAULT;
> +
> + if (kdata.num == 0)
> + return 0;
> +
> + /*
> +  * Set a tolerable upper limit on the number of buffers
> +  * without being overly restrictive, since we can't easily
> +  * predict what future dm_ops may require.
> +  */

I think this deserves its own macro since it really has nothing to do
with page size, has it? Especially since you are referencing it again
below too.


> + if (kdata.num * sizeof(*kbufs) > PAGE_SIZE)
> + return -E2BIG;
> +
> + kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
> + if (!kbufs)
> + return -ENOMEM;
> +
> + if (copy_from_user(kbufs, kdata.ubufs,
> +sizeof(*kbufs) * kdata.num)) {
> + rc = -EFAULT;
> + goto out;
> + }
> +
> + for (i = 0; i < kdata.num; i++) {
> + if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
> +kbufs[i].size)) {
> + rc = -EFAULT;
> + goto out;
> + }
> +
> + nr_pages += DIV_ROUND_UP(
> + offset_in_page(kbufs[i].uptr) + kbufs[i].size,
> + PAGE_SIZE);
> + }
> +
> + /*
> +  * Again, set a tolerable upper limit on the number of pages
> +  * needed to lock all the buffers without being overly
> +  * restrictive, since we can't easily predict the size of
> +  * buffers future dm_ops may use.
> +  */

OTOH, these two cases describe different types of copying (the first one
is for buffer descriptors and the second is for buffers themselves). And
so should they be limited by the same value?

> + if (nr_pages * sizeof(*pages) > PAGE_SIZE) {
> + rc = -E2BIG;
> + goto out;
> + }
> +
> + pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
> + if (!pages) {
> + rc = -ENOMEM;
> + goto out;
> + }
> +
> + xbufs = kcalloc(kdata.num, sizeof(*xbufs), GFP_KERNEL);
> + if (!xbufs) {
> + rc = -ENOMEM;
> + goto out;
> + }
> +
> + rc = lock_pages(kbufs, kdata.num, pages, nr_pages);


Aren't those buffers already locked (as Andrew mentioned)? They are
mmapped with MAP_LOCKED.

And I also wonder whether we need to take rlimit(RLIMIT_MEMLOCK) into
account.

-boris






[Xen-devel] [PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function

2017-02-10 Thread Waiman Long
It was found when running fio sequential write test with a XFS ramdisk
on a VM running on a 2-socket x86-64 system, the %CPU times as reported
by perf were as follows:

 69.75%  0.59%  fio  [k] down_write
 69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
 67.12%  1.12%  fio  [k] rwsem_down_write_failed
 63.48% 52.77%  fio  [k] osq_lock
  9.46%  7.88%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempt
  3.93%  3.93%  fio  [k] __kvm_vcpu_is_preempted

Making vcpu_is_preempted() a callee-save function has a relatively
high cost on x86-64 primarily due to at least one more cacheline of
data access from the saving and restoring of registers (8 of them)
to and from stack as well as one more level of function call. As
vcpu_is_preempted() is called within the spinlock, mutex and rwsem
slowpaths, there isn't much to gain by making it callee-save. So it
is now changed to a normal function call instead.

With this patch applied on both bare-metal & KVM guest on a 2-socket
16-core 32-thread system with 16 parallel jobs (8 on each socket), the
aggregate bandwidth of the fio test on an XFS ramdisk was as follows:

   Bare MetalKVM Guest
   I/O Type  w/o patchwith patch   w/o patchwith patch
     ---   ---
   random read   8650.5 MB/s  8560.9 MB/s  7602.9 MB/s  8196.1 MB/s  
   seq read  9104.8 MB/s  9397.2 MB/s  8293.7 MB/s  8566.9 MB/s
   random write  1623.8 MB/s  1626.7 MB/s  1590.6 MB/s  1700.7 MB/s
   seq write 1626.4 MB/s  1624.9 MB/s  1604.8 MB/s  1726.3 MB/s

The perf data (on KVM guest) now became:

 70.78%  0.58%  fio  [k] down_write
 70.20%  0.01%  fio  [k] call_rwsem_down_write_failed
 69.70%  1.17%  fio  [k] rwsem_down_write_failed
 59.91% 55.42%  fio  [k] osq_lock
 10.14% 10.14%  fio  [k] __kvm_vcpu_is_preempted

On bare metal, the patch doesn't introduce any performance
regression. On KVM guest, it produces a noticeable performance
improvement (up to 7%).

Signed-off-by: Waiman Long 
---
 v1->v2:
  - Rerun the fio test on a different system on both bare-metal and a
KVM guest. Both sockets were utilized in this test.
  - The commit log was updated with new performance numbers, but the
patch wasn't changed.
  - Drop patch 2.

 arch/x86/include/asm/paravirt.h   | 2 +-
 arch/x86/include/asm/paravirt_types.h | 2 +-
 arch/x86/kernel/kvm.c | 7 ++-
 arch/x86/kernel/paravirt-spinlocks.c  | 6 ++
 arch/x86/xen/spinlock.c   | 4 +---
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 864f57b..2515885 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -676,7 +676,7 @@ static __always_inline void pv_kick(int cpu)
 
 static __always_inline bool pv_vcpu_is_preempted(int cpu)
 {
-   return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
+   return PVOP_CALL1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
 }
 
 #endif /* SMP && PARAVIRT_SPINLOCKS */
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index bb2de45..88dc852 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -309,7 +309,7 @@ struct pv_lock_ops {
void (*wait)(u8 *ptr, u8 val);
void (*kick)(int cpu);
 
-   struct paravirt_callee_save vcpu_is_preempted;
+   bool (*vcpu_is_preempted)(int cpu);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba..eb3753d 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -595,7 +595,6 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
 
return !!src->preempted;
 }
-PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -614,10 +613,8 @@ void __init kvm_spinlock_init(void)
pv_lock_ops.wait = kvm_wait;
pv_lock_ops.kick = kvm_kick_cpu;
 
-   if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
-   pv_lock_ops.vcpu_is_preempted =
-   PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
-   }
+   if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME))
+   pv_lock_ops.vcpu_is_preempted = __kvm_vcpu_is_preempted;
 }
 
 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index 6259327..da050bc 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -24,12 +24,10 @@ __visible bool __native_vcpu_is_preempted(int cpu)
 {
return false;
 }
-PV_CALLEE_SAVE_REGS_THUNK(__native_vcpu_is_preempted);
 
 bool pv_is_native_vcpu_is_preempted(void)
 {
-   return pv_lock_ops.vcpu_is_preempted.func ==
-   __raw_callee_save___native_vcpu_is_preempted;
+   

[Xen-devel] [libvirt test] 105684: regressions - FAIL

2017-02-10 Thread osstest service owner
flight 105684 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105684/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw  6 xen-boot fail REGR. vs. 105657

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 105657
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 105657

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 build-arm64   5 xen-buildfail   never pass
 build-arm64-xsm   5 xen-buildfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  09a91f0528bbe58ba0d8f9620fb978ad19f89052
baseline version:
 libvirt  c89a6e7878e630718cce0af940e9c070c132ce30

Last test of basis   105657  2017-02-09 04:20:10 Z1 days
Testing same since   105684  2017-02-10 04:21:28 Z0 days1 attempts


People who touched revisions under test:
  Boris Fiuczynski 
  David Dai 
  Jaroslav Safka 
  Jim Fehlig 
  Jiri Denemark 
  Marc Hartmayer 
  Maxim Nestratov 
  Michal Privoznik 
  Nitesh Konkar 
  Nitesh Konkar 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  fail
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  blocked 
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopsfail
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm blocked 
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt blocked 
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   blocked 
 test-armhf-armhf-libvirt-raw fail
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on 

Re: [Xen-devel] [PATCH v6 4/7] xen/x86: parse Dom0 kernel for PVHv2

2017-02-10 Thread Ian Jackson
Roger Pau Monne writes ("[PATCH v6 4/7] xen/x86: parse Dom0 kernel for PVHv2"):
> Introduce a helper to parse the Dom0 kernel.
> 
> A new helper is also introduced to libelf, that's used to store the
> destination vcpu of the domain. This parameter is needed when
> loading the kernel on a HVM domain (PVHv2), since
> hvm_copy_to_guest_phys requires passing the destination vcpu.

The new helper and variable seem fine to me.

> While there also fix image_base and image_start to be of type "void
> *", and do the necessary fixup of related functions.

IMO this should be separate patch(es).

> +static int __init pvh_load_kernel(struct domain *d, const module_t *image,
> +  unsigned long image_headroom,
> +  module_t *initrd, void *image_base,
> +  char *cmdline, paddr_t *entry,
> +  paddr_t *start_info_addr)
> +{

FAOD this is used for dom0 only, right ?  In which case I don't feel
the need to review it.

> diff --git a/xen/common/libelf/libelf-loader.c 
> b/xen/common/libelf/libelf-loader.c
> index 1644f16..de140ed 100644
> --- a/xen/common/libelf/libelf-loader.c
> +++ b/xen/common/libelf/libelf-loader.c
> @@ -153,10 +153,19 @@ static elf_errorstatus elf_load_image(struct elf_binary 
> *elf, elf_ptrval dst, el
>  return -1;
>  /* We trust the dom0 kernel image completely, so we don't care
>   * about overruns etc. here. */
> -rc = raw_copy_to_guest(ELF_UNSAFE_PTR(dst), ELF_UNSAFE_PTR(src), filesz);
> +if ( is_hvm_vcpu(elf->vcpu) )
> +rc = hvm_copy_to_guest_phys((paddr_t)ELF_UNSAFE_PTR(dst),
> +ELF_UNSAFE_PTR(src), filesz, elf->vcpu);
> +else
> +rc = raw_copy_to_guest(ELF_UNSAFE_PTR(dst), ELF_UNSAFE_PTR(src),
> +   filesz);
>  if ( rc != 0 )
>  return -1;
> -rc = raw_clear_guest(ELF_UNSAFE_PTR(dst + filesz), memsz - filesz);
> +if ( is_hvm_vcpu(elf->vcpu) )
> +rc = hvm_copy_to_guest_phys((paddr_t)ELF_UNSAFE_PTR(dst + filesz),
> +NULL, filesz, elf->vcpu);
> +else
> +rc = raw_clear_guest(ELF_UNSAFE_PTR(dst + filesz), memsz - filesz);

This seems to involve open coding all four elements of a 2x2 matrix.
Couldn't you provide a helper function that:
 * Checks is_hvm_vcpu
 * Has the "NULL means clear" behaviour which I infer
   hvm_copy_to_guest_phys has
 * Calls hvm_copy_to_guest_phys or raw_{copy_to,clear}_guest
(Does raw_copy_to_guest have the "NULL means clear" feature ?  Maybe
that feature should be added, further lifting that into more general
code.)

Then the source and destination calculations would be done once for
each part, rather than twice, and the is_hvm_vcpu condition would be
done once rather than twice.

Thanks,
Ian.



[Xen-devel] [ovmf baseline-only test] 68544: all pass

2017-02-10 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 68544 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/68544/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 8d127a5a3a23d960644d1bd78891ae7d55b66544
baseline version:
 ovmf 41ccec58e07376fe3086d3fb4cf6290c53ca2303

Last test of basis68542  2017-02-09 08:16:39 Z1 days
Testing same since68544  2017-02-10 12:19:14 Z0 days1 attempts


People who touched revisions under test:
  Dandan Bi 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 8d127a5a3a23d960644d1bd78891ae7d55b66544
Author: Dandan Bi 
Date:   Wed Feb 8 13:39:40 2017 +0800

OvmfPkg/QemuBootOrderLib: Fix NOOPT build failure

This patch is to fix the IA32/NOOPT/VS Toolchain build failure.
The VS2015 failure log as below:
QemuBootOrderLib.lib(ExtraRootBusMap.obj) :
error LNK2001: unresolved external symbol __allmul
s:\..\Build\OvmfIa32\NOOPT_VS2015\IA32\MdeModulePkg\
Universal\BdsDxe\BdsDxe\DEBUG\BdsDxe.dll :
fatal error LNK1120: 1 unresolved externals
NMAKE : fatal error U1077:
'"C:\Program Files\Microsoft Visual Studio 14.0\Vc\bin\link.exe"' :
return code '0x460'
Stop.

Cc: Jordan Justen 
Cc: Laszlo Ersek 
Cc: Liming Gao 
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Dandan Bi 
Reviewed-by: Laszlo Ersek 



[Xen-devel] [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP

2017-02-10 Thread Paul Durrant
Recently a new dm_op[1] hypercall was added to Xen to provide a mechanism
for restricting device emulators (such as QEMU) to a limited set of
hypervisor operations, and for auditing those operations in the kernel of
the domain in which they run.

This patch adds IOCTL_PRIVCMD_DM_OP as a gateway for __HYPERVISOR_dm_op,
bouncing the caller's buffers through kernel memory to allow the address
ranges to be audited (and negating the need to bounce through locked
memory in user-space).

[1] http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=524a98c2

Signed-off-by: Paul Durrant 
---
Cc: Boris Ostrovsky 
Cc: Juergen Gross 

v2:
- Lock the user pages rather than bouncing through kernel memory
---
 arch/x86/include/asm/xen/hypercall.h |   7 ++
 drivers/xen/privcmd.c| 138 +++
 include/uapi/xen/privcmd.h   |  13 
 include/xen/interface/hvm/dm_op.h|  32 
 include/xen/interface/xen.h  |   1 +
 5 files changed, 191 insertions(+)
 create mode 100644 include/xen/interface/hvm/dm_op.h

diff --git a/arch/x86/include/asm/xen/hypercall.h 
b/arch/x86/include/asm/xen/hypercall.h
index a12a047..f6d20f6 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -472,6 +472,13 @@ HYPERVISOR_xenpmu_op(unsigned int op, void *arg)
return _hypercall2(int, xenpmu_op, op, arg);
 }
 
+static inline int
+HYPERVISOR_dm_op(
+   domid_t dom, unsigned int nr_bufs, void *bufs)
+{
+   return _hypercall3(int, dm_op, dom, nr_bufs, bufs);
+}
+
 static inline void
 MULTI_fpu_taskswitch(struct multicall_entry *mcl, int set)
 {
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 5e5c7ae..d5cf042 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -548,6 +549,139 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, 
int version)
goto out;
 }
 
+static int lock_pages(
+   struct privcmd_dm_op_buf kbufs[], unsigned int num,
+   struct page *pages[], unsigned int nr_pages)
+{
+   unsigned int i;
+
+   for (i = 0; i < num; i++) {
+   unsigned int requested;
+   int pinned;
+
+   requested = DIV_ROUND_UP(
+   offset_in_page(kbufs[i].uptr) + kbufs[i].size,
+   PAGE_SIZE);
+   if (requested > nr_pages)
+   return -ENOSPC;
+
+   pinned = get_user_pages_fast(
+   (unsigned long) kbufs[i].uptr,
+   requested, FOLL_WRITE, pages);
+   if (pinned < 0)
+   return pinned;
+
+   nr_pages -= pinned;
+   pages += pinned;
+   }
+
+   return 0;
+}
+
+static void unlock_pages(struct page *pages[], unsigned int nr_pages)
+{
+   unsigned int i;
+
+   if (!pages)
+   return;
+
+   for (i = 0; i < nr_pages; i++) {
+   if (pages[i])
+   put_page(pages[i]);
+   }
+}
+
+static long privcmd_ioctl_dm_op(void __user *udata)
+{
+   struct privcmd_dm_op kdata;
+   struct privcmd_dm_op_buf *kbufs;
+   unsigned int nr_pages = 0;
+   struct page **pages = NULL;
+   struct xen_dm_op_buf *xbufs = NULL;
+   unsigned int i;
+   long rc;
+
+   if (copy_from_user(&kdata, udata, sizeof(kdata)))
+   return -EFAULT;
+
+   if (kdata.num == 0)
+   return 0;
+
+   /*
+* Set a tolerable upper limit on the number of buffers
+* without being overly restrictive, since we can't easily
+* predict what future dm_ops may require.
+*/
+   if (kdata.num * sizeof(*kbufs) > PAGE_SIZE)
+   return -E2BIG;
+
+   kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
+   if (!kbufs)
+   return -ENOMEM;
+
+   if (copy_from_user(kbufs, kdata.ubufs,
+  sizeof(*kbufs) * kdata.num)) {
+   rc = -EFAULT;
+   goto out;
+   }
+
+   for (i = 0; i < kdata.num; i++) {
+   if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
+  kbufs[i].size)) {
+   rc = -EFAULT;
+   goto out;
+   }
+
+   nr_pages += DIV_ROUND_UP(
+   offset_in_page(kbufs[i].uptr) + kbufs[i].size,
+   PAGE_SIZE);
+   }
+
+   /*
+* Again, set a tolerable upper limit on the number of pages
+* needed to lock all the buffers without being overly
+* restrictive, since we can't easily predict the size of
+* buffers future dm_ops may use.
+*/
+   if (nr_pages * sizeof(*pages) > PAGE_SIZE) {
+   rc = -E2BIG;
+   goto out;
+   }
+
+   

[Xen-devel] [PATCH v2 1/3] xen/privcmd: return -ENOTTY for unimplemented IOCTLs

2017-02-10 Thread Paul Durrant
The code sets the default return code to -ENOSYS but then overrides this
to -EINVAL in the switch() statement's default case, which is clearly
silly.

This patch removes the override and sets the default return code to
-ENOTTY, which is the conventional return for an unimplemented ioctl.

Signed-off-by: Paul Durrant 
---
Cc: Boris Ostrovsky 
Cc: Juergen Gross 

v2:
- Use -ENOTTY rather than -ENOSYS
---
 drivers/xen/privcmd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 6e3306f..5e5c7ae 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -551,7 +551,7 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, 
int version)
 static long privcmd_ioctl(struct file *file,
  unsigned int cmd, unsigned long data)
 {
-   int ret = -ENOSYS;
+   int ret = -ENOTTY;
void __user *udata = (void __user *) data;
 
switch (cmd) {
@@ -572,7 +572,6 @@ static long privcmd_ioctl(struct file *file,
break;
 
default:
-   ret = -EINVAL;
break;
}
 
-- 
2.1.4




[Xen-devel] [PATCH v2 3/3] xen/privcmd: add IOCTL_PRIVCMD_RESTRICT

2017-02-10 Thread Paul Durrant
The purpose of this ioctl is to allow a user of privcmd to restrict its
operation such that it will no longer service arbitrary hypercalls via
IOCTL_PRIVCMD_HYPERCALL, and will check for a matching domid when
servicing IOCTL_PRIVCMD_DM_OP. The aim of this is to limit the attack
surface for a compromised device model.

Signed-off-by: Paul Durrant 
---
Cc: Boris Ostrovsky 
Cc: Juergen Gross 

v2:
- Make sure that a restriction cannot be cleared
---
 drivers/xen/privcmd.c  | 67 +++---
 include/uapi/xen/privcmd.h |  2 ++
 2 files changed, 65 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index d5cf042..e372aae 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -44,16 +44,25 @@ MODULE_LICENSE("GPL");
 
 #define PRIV_VMA_LOCKED ((void *)1)
 
+struct privcmd_data {
+   domid_t domid;
+};
+
 static int privcmd_vma_range_is_mapped(
struct vm_area_struct *vma,
unsigned long addr,
unsigned long nr_pages);
 
-static long privcmd_ioctl_hypercall(void __user *udata)
+static long privcmd_ioctl_hypercall(struct file *file, void __user *udata)
 {
+   struct privcmd_data *data = file->private_data;
struct privcmd_hypercall hypercall;
long ret;
 
+   /* Disallow arbitrary hypercalls if restricted */
+   if (data->domid != DOMID_INVALID)
+   return -EPERM;
+
+   if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
return -EFAULT;
 
@@ -591,8 +600,9 @@ static void unlock_pages(struct page *pages[], unsigned int 
nr_pages)
}
 }
 
-static long privcmd_ioctl_dm_op(void __user *udata)
+static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
 {
+   struct privcmd_data *data = file->private_data;
struct privcmd_dm_op kdata;
struct privcmd_dm_op_buf *kbufs;
unsigned int nr_pages = 0;
@@ -604,6 +614,10 @@ static long privcmd_ioctl_dm_op(void __user *udata)
+   if (copy_from_user(&kdata, udata, sizeof(kdata)))
return -EFAULT;
 
+   /* If restriction is in place, check the domid matches */
+   if (data->domid != DOMID_INVALID && data->domid != kdata.dom)
+   return -EPERM;
+
if (kdata.num == 0)
return 0;
 
@@ -682,6 +696,23 @@ static long privcmd_ioctl_dm_op(void __user *udata)
return rc;
 }
 
+static long privcmd_ioctl_restrict(struct file *file, void __user *udata)
+{
+   struct privcmd_data *data = file->private_data;
+   domid_t dom;
+
+   if (copy_from_user(&dom, udata, sizeof(dom)))
+   return -EFAULT;
+
+   /* Set restriction to the specified domain, or check it matches */
+   if (data->domid == DOMID_INVALID)
+   data->domid = dom;
+   else if (data->domid != dom)
+   return -EINVAL;
+
+   return 0;
+}
+
 static long privcmd_ioctl(struct file *file,
  unsigned int cmd, unsigned long data)
 {
@@ -690,7 +721,7 @@ static long privcmd_ioctl(struct file *file,
 
switch (cmd) {
case IOCTL_PRIVCMD_HYPERCALL:
-   ret = privcmd_ioctl_hypercall(udata);
+   ret = privcmd_ioctl_hypercall(file, udata);
break;
 
case IOCTL_PRIVCMD_MMAP:
@@ -706,7 +737,11 @@ static long privcmd_ioctl(struct file *file,
break;
 
case IOCTL_PRIVCMD_DM_OP:
-   ret = privcmd_ioctl_dm_op(udata);
+   ret = privcmd_ioctl_dm_op(file, udata);
+   break;
+
+   case IOCTL_PRIVCMD_RESTRICT:
+   ret = privcmd_ioctl_restrict(file, udata);
break;
 
default:
@@ -716,6 +751,28 @@ static long privcmd_ioctl(struct file *file,
return ret;
 }
 
+static int privcmd_open(struct inode *ino, struct file *file)
+{
+   struct privcmd_data *data = kzalloc(sizeof(*data), GFP_KERNEL);
+
+   if (!data)
+   return -ENOMEM;
+
+   /* DOMID_INVALID implies no restriction */
+   data->domid = DOMID_INVALID;
+
+   file->private_data = data;
+   return 0;
+}
+
+static int privcmd_release(struct inode *ino, struct file *file)
+{
+   struct privcmd_data *data = file->private_data;
+
+   kfree(data);
+   return 0;
+}
+
 static void privcmd_close(struct vm_area_struct *vma)
 {
struct page **pages = vma->vm_private_data;
@@ -784,6 +841,8 @@ static int privcmd_vma_range_is_mapped(
 const struct file_operations xen_privcmd_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = privcmd_ioctl,
+   .open = privcmd_open,
+   .release = privcmd_release,
.mmap = privcmd_mmap,
 };
 EXPORT_SYMBOL_GPL(xen_privcmd_fops);
diff --git a/include/uapi/xen/privcmd.h b/include/uapi/xen/privcmd.h
index f8c5d75..63ee95c 100644
--- a/include/uapi/xen/privcmd.h
+++ b/include/uapi/xen/privcmd.h
@@ -111,5 

[Xen-devel] [PATCH v2 0/3] xen/privcmd: support for dm_op and restriction

2017-02-10 Thread Paul Durrant
This patch series follows on from my recent Xen series [1], to provide
support in privcmd for de-privileging of device emulators.

[1] https://lists.xen.org/archives/html/xen-devel/2017-01/msg02558.html

Paul Durrant (3):
  xen/privcmd: return -ENOTTY for unimplemented IOCTLs
  xen/privcmd: Add IOCTL_PRIVCMD_DM_OP
  xen/privcmd: add IOCTL_PRIVCMD_RESTRICT

 arch/x86/include/asm/xen/hypercall.h |   7 ++
 drivers/xen/privcmd.c| 204 ++-
 include/uapi/xen/privcmd.h   |  15 +++
 include/xen/interface/hvm/dm_op.h|  32 ++
 include/xen/interface/xen.h  |   1 +
 5 files changed, 255 insertions(+), 4 deletions(-)
 create mode 100644 include/xen/interface/hvm/dm_op.h

-- 
2.1.4




Re: [Xen-devel] [PATCH v4 0/3] xen: optimize xenbus performance

2017-02-10 Thread Boris Ostrovsky
On 02/09/2017 08:39 AM, Juergen Gross wrote:
> The xenbus driver used for communication with Xenstore (all kernel
> accesses to Xenstore and in case of Xenstore living in another domain
> all accesses of the local domain to Xenstore) is rather simple
> especially regarding multiple concurrent accesses: they are just being
> serialized in spite of Xenstore being capable to handle multiple
> parallel accesses.
>
> Clean up the external interface(s) of xenbus and optimize its
> performance by allowing multiple concurrent accesses to Xenstore.
>

Applied to for-linus-4.11.

-boris




Re: [Xen-devel] [PATCH v6 2/7] xen/x86: split Dom0 build into PV and PVHv2

2017-02-10 Thread Andrew Cooper
On 10/02/17 12:33, Roger Pau Monne wrote:
> Split the Dom0 builder into two different functions, one for PV (and classic
> PVH), and another one for PVHv2. Introduce a new command line parameter called
> 'dom0' that can be used to request the creation of a PVHv2 Dom0 by setting the
> 'hvm' sub-option. A panic has also been added if a user tries to use dom0=hvm;
> it will be removed once all the code is in place.
>
> While there mark the dom0_shadow option that was used by PV Dom0 as 
> deprecated,
> it was lacking documentation and was not functional. Point users towards
> dom0=shadow instead.
>
> Signed-off-by: Roger Pau Monné 

Reviewed-by: Andrew Cooper 



Re: [Xen-devel] [PATCH] x86/bitops: Force __scanbit() to be always inline

2017-02-10 Thread Jan Beulich
>>> On 10.02.17 at 12:44,  wrote:
> It turns out that GCCs 4.9.2 and 6.3.0 instantiate __scanbit() in three
> translation units, but never references the result.  All real uses of
> __scanbit() are already suitably inline.

While I'm not opposed to this at all, we should set ourselves a
reasonably clear rule of thumb of when to use always_inline. As
was noted the other day, mixing (apparently arbitrarily) with
normal inline functions is at least confusing to the reader. For
that it may be necessary to understand what exactly it is that
makes gcc create (even unreferenced) static function instances.

Jan

> Signed-off-by: Andrew Cooper 

Acked-by: Jan Beulich 





Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document

2017-02-10 Thread Paul Durrant
> -Original Message-
[snip]
> > Neither NVIDIA vGPU nor Intel GVT-g is pass-through. They both use
> emulation to synthesize GPU devices for guests and then use the actual GPU
> to service the commands sent by the guest driver to the virtual GPU. So, I
> think they fall outside the discussion here.
> 
> So in this case those devices would simply be assigned to Dom0, and
> everything
> would be trapped/emulated there? (by QEMU or whatever dm we are using)
> 

Basically, yes. (Actually QEMU isn't the dm in either case).

> > AMD MxGPU is somewhat different in that it is an almost-SRIOV solution. I
> say 'almost' because the VFs are not truly independent and so some
> interception of accesses to certain registers is required, so that arbitration
> can be applied, or they can be blocked. In this case a dedicated driver in
> dom0 is required, and I believe it needs access to both the PF and all the VFs
> to function correctly. However, once initial set-up is done, I think the VFs
> could then be hidden from dom0. The PF is never passed-through and so
> there should be no issue in leaving it visible to dom0.
> 
> The approach we were thinking of is hiding everything from Dom0 when it
> boots, so that Dom0 would never really see those devices. This would be
> done by
> Xen scanning the PCI bus and any ECAM areas. DEvices that first need to be
> assigned to Dom0 and then hidden where not part of the approach here.

That won't work for MxGPU then.

> 
> > There is a further complication with GVT-d (Intel's term for GPU pass-
> through) also because I believe there is also some initial set-up required and
> some supporting emulation (e.g. Intel's guest driver expects there to be an
> ISA bridge along with the GPU) which may need access to the real GPU. It is
> also possible that, once this set-up is done, the GPU can then be hidden from
> dom0 but I'm not sure because I was not involved with that code.
> 
> And then I guess some MMIO regions are assigned to the guest, and some
> dm
> performs the trapping of the accesses to the configuration space?
> 

Well, that's how passthrough to HVM guests works in general at the moment. My 
point was that there's still some need to see the device in the tools domain 
before it gets passed through.

> > Full pass-through of NVIDIA and AMD GPUs does not involve access from
> dom0 at all though, so I don't think there should be any complication there.
> 
> Yes, in that case they would be treated as regular PCI devices, no
> involvement
> from Dom0 would be needed. I'm more worried about this mixed cases,
> where some
> Dom0 interaction is needed in order to perform the passthrough.
> 
> > Does that all make sense?
> 
> I guess, could you please keep an eye on further design documents? Just to
> make sure that what's described here would work for the more complex
> passthrough scenarios that XenServer supports.

Ok, I will watch the list more closely for pass-through discussions, but please 
keep me cc-ed on anything you think may be relevant.

Thanks,

  Paul

> 
> Thanks, Roger.



Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document

2017-02-10 Thread Roger Pau Monne
On Fri, Feb 10, 2017 at 10:11:53AM +, Paul Durrant wrote:
> > -Original Message-
> > From: Roger Pau Monne
> > Sent: 10 February 2017 09:49
> > To: Stefano Stabellini 
> > Cc: Julien Grall ; xen-devel  > de...@lists.xenproject.org>; Edgar Iglesias (edgar.igles...@xilinx.com)
> > ; Steve Capper ; Punit
> > Agrawal ; Wei Chen ;
> > Campbell Sean ; Shanker Donthineni
> > ; Jiandi An ;
> > manish.ja...@caviumnetworks.com; alistair.fran...@xilinx.com; Andrew
> > Cooper ; Anshul Makkar
> > ; Paul Durrant 
> > Subject: Re: [early RFC] ARM PCI Passthrough design document
> > 
> > On Wed, Feb 01, 2017 at 10:50:49AM -0800, Stefano Stabellini wrote:
> > > On Wed, 1 Feb 2017, Roger Pau Monné wrote:
> > > > On Wed, Jan 25, 2017 at 06:53:20PM +, Julien Grall wrote:
> > > > > Hi Stefano,
> > > > >
> > > > > On 24/01/17 20:07, Stefano Stabellini wrote:
> > > > > > On Tue, 24 Jan 2017, Julien Grall wrote:
> > > > > When using an ECAM-like host bridge, I don't think it will be an issue
> > > > > to have both DOM0 and Xen accessing configuration space at the same
> > > > > time. Although, we need to define who is doing what. In the general
> > > > > case, DOM0 should not touch an assigned PCI device. The only possible
> > > > > interaction would be resetting a device (see my answer below).
> > > >
> > > > Iff Xen is really going to perform the reset of passthrough devices,
> > > > then I don't see any reason to expose those devices to Dom0 at all.
> > > > IMHO you should hide them from ACPI and ideally prevent Dom0 from
> > > > interacting with them using the PCI configuration space (although that
> > > > would require trapping on accesses to the PCI config space, which
> > > > AFAIK you would like to avoid).
> > >
> > > Right! A much cleaner solution! If we are going to have Xen handle ECAM
> > > and emulating PCI host bridges, then we should go all the way and have
> > > Xen do everything about PCI.
> > 
> > Replying here because this thread has become so long that it's hard to
> > find a good place to put this information.
> > 
> > I've recently been told (f2f) that more complex passthrough (like Nvidia
> > vGPU or Intel XenGT) works in a slightly different way, which seems to be
> > a bit incompatible with what we are proposing. I've been told that Nvidia
> > vGPU passthrough requires a driver in Dom0 (closed-source Nvidia code
> > AFAIK), and that upon loading this driver a bunch of virtual functions
> > appear out of the blue on the PCI bus.
> > 
> > Now, if we completely hide passed-through devices from Dom0, it would be
> > impossible to load this driver, and thus to make the virtual functions 
> > appear.
> > I would like someone that's more familiar with this to comment, so I'm
> > adding
> > Paul and Anshul to the conversation.
> > 
> > To give them some context: we are currently discussing completely hiding
> > passthrough PCI devices from Dom0 and having Xen perform the reset of the
> > device. This would apply to PVH and ARM. Can you comment on whether such
> > an approach would work with things like vGPU passthrough?
> 
> Neither NVIDIA vGPU nor Intel GVT-g is pass-through. They both use emulation 
> to synthesize GPU devices for guests and then use the actual GPU to service 
> the commands sent by the guest driver to the virtual GPU. So, I think they 
> fall outside the discussion here.

So in this case those devices would simply be assigned to Dom0, and everything
would be trapped/emulated there? (by QEMU or whatever dm we are using)

> AMD MxGPU is somewhat different in that it is an almost-SRIOV solution. I say 
> 'almost' because the VFs are not truly independent and so some interception 
> of accesses to certain registers is required, so that arbitration can be 
> applied, or they can be blocked. In this case a dedicated driver in dom0 is 
> required, and I believe it needs access to both the PF and all the VFs to 
> function correctly. However, once initial set-up is done, I think the VFs 
> could then be hidden from dom0. The PF is never passed-through and so there 
> should be no issue in leaving it visible to dom0.

The approach we were thinking of is hiding everything from Dom0 when it
boots, so that Dom0 would never really see those devices. This would be done
by Xen scanning the PCI bus and any ECAM areas. Devices that first need to
be assigned to Dom0 and then hidden were not part of the approach here.

> There is a further complication with GVT-d (Intel's term for GPU 
> pass-through) also because I believe there is also some initial set-up 
> required and some supporting emulation (e.g. Intel's guest driver expects 
> there to be an ISA bridge along 

Re: [Xen-devel] [PATCH] xen-netback: vif counters from int/long to u64

2017-02-10 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Mart van Santen
> Sent: 10 February 2017 12:02
> To: Wei Liu ; Paul Durrant ;
> xen-de...@lists.xenproject.org; net...@vger.kernel.org
> Cc: Mart van Santen 
> Subject: [Xen-devel] [PATCH] xen-netback: vif counters from int/long to u64
> 
> This patch fixes an issue where the type of counters in the queue(s)
> and interface are not in sync (queue counters are int, interface
> counters are long), causing incorrect reporting of tx/rx values
> of the vif interface and unclear counter overflows.
> This patch sets both counters to the u64 type.
> 
> Signed-off-by: Mart van Santen 

Looks sensible to me.

Reviewed-by: Paul Durrant 

> ---
>  drivers/net/xen-netback/common.h| 8 
>  drivers/net/xen-netback/interface.c | 8 
>  2 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> netback/common.h
> index 3ce1f7d..530586b 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -113,10 +113,10 @@ struct xenvif_stats {
>* A subset of struct net_device_stats that contains only the
>* fields that are updated in netback.c for each queue.
>*/
> - unsigned int rx_bytes;
> - unsigned int rx_packets;
> - unsigned int tx_bytes;
> - unsigned int tx_packets;
> + u64 rx_bytes;
> + u64 rx_packets;
> + u64 tx_bytes;
> + u64 tx_packets;
> 
>   /* Additional stats used by xenvif */
>   unsigned long rx_gso_checksum_fixup;
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
> netback/interface.c
> index 5795213..50fa169 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -221,10 +221,10 @@ static struct net_device_stats
> *xenvif_get_stats(struct net_device *dev)
>  {
>   struct xenvif *vif = netdev_priv(dev);
>   struct xenvif_queue *queue = NULL;
> - unsigned long rx_bytes = 0;
> - unsigned long rx_packets = 0;
> - unsigned long tx_bytes = 0;
> - unsigned long tx_packets = 0;
> + u64 rx_bytes = 0;
> + u64 rx_packets = 0;
> + u64 tx_bytes = 0;
> + u64 tx_packets = 0;
>   unsigned int index;
> 
>   spin_lock(&vif->lock);
> --
> 2.1.4
> 
> 
> ___
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 105680: regressions - FAIL

2017-02-10 Thread osstest service owner
flight 105680 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105680/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm5 xen-buildfail REGR. vs. 105279
 build-amd64   5 xen-buildfail REGR. vs. 105279
 build-amd64-xsm   5 xen-buildfail REGR. vs. 105279
 build-i3865 xen-buildfail REGR. vs. 105279
 build-armhf   5 xen-buildfail REGR. vs. 105279
 build-armhf-xsm   5 xen-buildfail REGR. vs. 105279

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 

[Xen-devel] Xen Security Advisory 208 (CVE-2017-2615) - oob access in cirrus bitblt copy

2017-02-10 Thread Xen . org security team
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Xen Security Advisory CVE-2017-2615 / XSA-208

   oob access in cirrus bitblt copy

ISSUE DESCRIPTION
=

When doing bitblt copy backwards, qemu should negate the blit width.
This avoids an oob access before the start of video memory.

IMPACT
==

A malicious guest administrator can cause an out of bounds memory
access, possibly leading to information disclosure or privilege
escalation.

VULNERABLE SYSTEMS
==

Versions of qemu shipped with all Xen versions are vulnerable.

Xen systems running on x86 with HVM guests, with the qemu process
running in dom0 are vulnerable.

Only guests provided with the "cirrus" emulated video card can exploit
the vulnerability.  The non-default "stdvga" emulated video card is
not vulnerable.  (With xl the emulated video card is controlled by the
"stdvga=" and "vga=" domain configuration options.)

ARM systems are not vulnerable.  Systems using only PV guests are not
vulnerable.

For VMs whose qemu process is running in a stub domain, a successful
attacker will only gain the privileges of that stubdom, which should
be only over the guest itself.

Both upstream-based versions of qemu (device_model_version="qemu-xen")
and `traditional' qemu (device_model_version="qemu-xen-traditional")
are vulnerable.

MITIGATION
==

Running only PV guests will avoid the issue.

Running HVM guests with the device model in a stubdomain will mitigate
the issue.

Changing the video card emulation to stdvga (stdvga=1, vga="stdvga",
in the xl domain configuration) will avoid the vulnerability.
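For instance, a minimal xl configuration change for an existing HVM guest could look like the following fragment (guest-specific values are illustrative):

```
# xl domain configuration fragment (illustrative)
builder = "hvm"
vga = "stdvga"      # use the non-vulnerable stdvga model instead of cirrus
# or, with toolstacks that predate the vga= option:
# stdvga = 1
```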

RESOLUTION
==

Applying the appropriate attached patch resolves this issue.

xsa208-qemuu.patchqemu-xen, mainline qemu
xsa208-qemut.patchqemu-xen-traditional

$ sha256sum xsa208*
4369cce9b72daf2418a1b9dd7be6529c312b447b814c44d634bab462e80a15f5  
xsa208-qemut.patch
1e516e3df1091415b6ba34aaf54fa67eac91e22daceaad569b11baa2316c78ba  
xsa208-qemuu.patch
$


NOTE REGARDING LACK OF EMBARGO
==

This issue has already been publicly disclosed.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJYnbVQAAoJEIP+FMlX6CvZs2sIAKtkU1ptqojrE6GpgdMegdIS
hMcCcEVdDoYt47z9BxXcNA87kyjGLbIaliACF3GQclhBy8f6Ytm6MLQMvh79YO/l
8AvZELKSo5U/Z1El/HQ/ezzWTV15FHwdG64HvDf7SdlRquVyS0fxWLuiq8gmWXRd
bpGcbAwwdRHvrvguMpajif89ZfTWPSHRq8onS1C96SBJW8aUXxzzyKWoX1EvNWN3
vnKC5eXQ5uhLERmh6meIZo2OwB7PlMTuasgVJan915/CGF8CS+B5wqQmiL0uxfRT
fnTBVTfXHC/TzkkREJtnwgHIEv/E+Vygheeg/2P9bEaNkiN3CG5kK/ZOxgWNYU4=
=eEKh
-END PGP SIGNATURE-


xsa208-qemut.patch
Description: Binary data


xsa208-qemuu.patch
Description: Binary data
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 3/7] xen/x86: populate PVHv2 Dom0 physical memory map

2017-02-10 Thread Roger Pau Monne
Craft the Dom0 e820 memory map and populate it. Introduce a helper to remove
memory pages that are shared between Xen and a domain, and use it to remove
the low 1MB RAM regions from dom_io so they can be assigned to a PVHv2 Dom0.

On hardware lacking support for unrestricted mode also craft the identity page
tables and the TSS used for virtual 8086 mode.

Signed-off-by: Roger Pau Monné 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since v5:
 - Adjust the logic to set need_paging.
 - Remove the usage of the _AC macro.
 - Subtract memory from the end of regions instead of the start.
 - Create the VM86_TSS before the identity page table, so that the page table
   is aligned to a page boundary.
 - Use MB1_PAGES in modify_identity_mmio.
 - Move and simplify the ASSERT in pvh_setup_p2m.
 - Move the creation of the PSE page tables to a separate function, and use it
   in shadow_enable also.
 - Make the map parameter of modify_identity_mmio a constant.
 - Add a comment to HVM_VM86_TSS_SIZE, although it seems this might need
   further fixing.
 - Introduce pvh_add_mem_range in order to mark the regions used by the VM86
   TSS and the identity page tables as reserved in the memory map.
 - Add a parameter to request aligned memory from pvh_steal_ram.

Changes since v4:
 - Move process_pending_softirqs to previous patch.
 - Fix off-by-one errors in some checks.
 - Make unshare_xen_page_with_guest __init.
 - Improve unshare_xen_page_with_guest by making use of already existing
   is_xen_heap_page and put_page.
 - s/hvm/pvh/.
 - Use PAGE_ORDER_4K in pvh_setup_e820 in order to keep consistency with the
   p2m code.

Changes since v3:
 - Drop get_order_from_bytes_floor, it was only used by
   hvm_populate_memory_range.
 - Switch hvm_populate_memory_range to use frame numbers instead of full memory
   addresses.
 - Add a helper to steal the low 1MB RAM areas from dom_io and add them to Dom0
   as normal RAM.
 - Introduce unshare_xen_page_with_guest in order to remove pages from dom_io,
   so they can be assigned to other domains. This is needed in order to remove
   the low 1MB RAM regions from dom_io and assign them to the hardware_domain.
 - Simplify the loop in hvm_steal_ram.
 - Move definition of map_identity_mmio into this patch.

Changes since v2:
 - Introduce get_order_from_bytes_floor as a local function to
   domain_build.c.
 - Remove extra asserts.
 - Make hvm_populate_memory_range return an error code instead of panicking.
 - Fix comments and printks.
 - Use ULL suffix instead of casting to uint64_t.
 - Rename hvm_setup_vmx_unrestricted_guest to
   hvm_setup_vmx_realmode_helpers.
 - Only subtract two pages from the memory calculation, that will be used
   by the MADT replacement.
 - Remove some comments.
 - Remove printing allocation information.
 - Don't stash any pages for the MADT, TSS or ident PT, those will be
   subtracted directly from RAM regions of the memory map.
 - Count the number of iterations before calling process_pending_softirqs
   when populating the memory map.
 - Move the initial call to process_pending_softirqs into construct_dom0,
   and remove the ones from construct_dom0_hvm and construct_dom0_pv.
 - Make memflags global so it can be shared between alloc_chunk and
   hvm_populate_memory_range.

Changes since RFC:
 - Use IS_ALIGNED instead of checking with PAGE_MASK.
 - Use the new %pB specifier in order to print sizes in human readable form.
 - Create a VM86 TSS for hardware that doesn't support unrestricted mode.
 - Subtract guest RAM for the identity page table and the VM86 TSS.
 - Split the creation of the unrestricted mode helper structures to a
   separate function.
 - Use preemption with paging_set_allocation.
 - Use get_order_from_bytes_floor.
---
 xen/arch/x86/domain_build.c | 360 +++-
 xen/arch/x86/mm.c   |  16 ++
 xen/arch/x86/mm/shadow/common.c |   7 +-
 xen/include/asm-x86/mm.h|   2 +
 xen/include/asm-x86/page.h  |  12 ++
 5 files changed, 387 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 7123931..be50b65 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -44,6 +45,12 @@ static long __initdata dom0_min_nrpages;
 static long __initdata dom0_max_nrpages = LONG_MAX;
 
 /*
+ * Size of the VM86 TSS for virtual 8086 mode to use. This value has been
+ * taken from what hvmloader does.
+ */
+#define HVM_VM86_TSS_SIZE   128
+
+/*
  * dom0_mem=[min:,][max:,][]
  * 
  * : The minimum amount of memory which should be allocated for dom0.
@@ -242,11 +249,12 @@ boolean_param("ro-hpet", ro_hpet);
 #define round_pgup(_p)(((_p)+(PAGE_SIZE-1))&PAGE_MASK)
 #define round_pgdown(_p)  ((_p)&PAGE_MASK)
 
+static unsigned int __initdata memflags = MEMF_no_dma|MEMF_exact_node;
+
 static 

[Xen-devel] [PATCH v6 5/7] x86/PVHv2: fix dom0_max_vcpus so it's capped to HVM_MAX_VCPUS for PVHv2 Dom0

2017-02-10 Thread Roger Pau Monne
PVHv2 Dom0 is limited to 128 vCPUs, as are all HVM guests at the moment. Fix
dom0_max_vcpus so it takes this limitation into account.

Signed-off-by: Roger Pau Monné 
Reviewed-by: Andrew Cooper 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since v5:
 - Introduce a new limit local variable and use that to store the guest max
   number of vCPUs, this allows having a single check suitable for both PVH and
   PV.

Changes since v4:
 - Fix coding style to match the rest of the function.

Changes since v3:
 - New in the series.
---
 xen/arch/x86/domain_build.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 01c9348..407e479 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -40,6 +40,7 @@
 
 #include 
 #include 
+#include 
 
 static long __initdata dom0_nrpages;
 static long __initdata dom0_min_nrpages;
@@ -157,7 +158,7 @@ static nodemask_t __initdata dom0_nodes;
 
 unsigned int __init dom0_max_vcpus(void)
 {
-unsigned int i, max_vcpus;
+unsigned int i, max_vcpus, limit;
 nodeid_t node;
 
 for ( i = 0; i < dom0_nr_pxms; ++i )
@@ -177,8 +178,9 @@ unsigned int __init dom0_max_vcpus(void)
 max_vcpus = opt_dom0_max_vcpus_min;
 if ( opt_dom0_max_vcpus_max < max_vcpus )
 max_vcpus = opt_dom0_max_vcpus_max;
-if ( max_vcpus > MAX_VIRT_CPUS )
-max_vcpus = MAX_VIRT_CPUS;
+limit = dom0_pvh ? HVM_MAX_VCPUS : MAX_VIRT_CPUS;
+if ( max_vcpus > limit )
+max_vcpus = limit;
 
 return max_vcpus;
 }
-- 
2.10.1 (Apple Git-78)


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 2/7] xen/x86: split Dom0 build into PV and PVHv2

2017-02-10 Thread Roger Pau Monne
Split the Dom0 builder into two different functions, one for PV (and classic
PVH), and another one for PVHv2. Introduce a new command line parameter called
'dom0' that can be used to request the creation of a PVHv2 Dom0 by setting the
'hvm' sub-option. A panic has also been added if a user tries to use dom0=hvm;
it will be removed once all the code is in place.

While there, mark the dom0_shadow option that was used by PV Dom0 as
deprecated; it was lacking documentation and was not functional. Point users
towards dom0=shadow instead.

Signed-off-by: Roger Pau Monné 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since v5:
 - Remove duplicate define.
 - Also move the sanity check for d->vcpu[0]->is_initialised.
 - Mark dom0_shadow as deprecated, point users to switch to dom0=shadow.
 - Move the temporary panic from setup.c to the end of construct_dom0_pvh.

Changes since v4:
 - Move common sanity BUG_ONs and process_pending_softirqs to construct_dom0.
 - Remove the non-existent documentation about the dom0_shadow option.
 - Fix the define of dom0_shadow to be 'false' instead of 0.
 - Move the parsing of the dom0 command line option to domain_build.c.
 - s/hvm/pvh.

Changes since v3:
 - Correctly declare the parameter list.
 - Add a panic if dom0=hvm is used. This will be removed once all the code is in
   place.

Changes since v2:
 - Fix coding style.
 - Introduce a new dom0 option that allows passing several parameters.
   Currently supported ones are hvm and shadow.

Changes since RFC:
 - Add documentation for the new command line option.
 - Simplify the logic in construct_dom0.
---
 docs/misc/xen-command-line.markdown | 19 ++
 xen/arch/x86/domain_build.c | 71 +++--
 xen/arch/x86/setup.c|  8 +
 xen/include/asm-x86/setup.h |  7 
 4 files changed, 94 insertions(+), 11 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown 
b/docs/misc/xen-command-line.markdown
index a11fdf9..3acbb33 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -649,6 +649,8 @@ affinities to prefer but be not limited to the specified 
node(s).
 ### dom0\_shadow
 > `= `
 
+This option is deprecated, please use `dom0=shadow` instead.
+
 ### dom0\_vcpus\_pin
 > `= `
 
@@ -656,6 +658,23 @@ affinities to prefer but be not limited to the specified 
node(s).
 
 Pin dom0 vcpus to their respective pcpus
 
+### dom0
+> `= List of [ pvh | shadow ]`
+
+> Sub-options:
+
+> `pvh`
+
+> Default: `false`
+
+Flag that makes a dom0 boot in PVHv2 mode.
+
+> `shadow`
+
+> Default: `false`
+
+Flag that makes a dom0 use shadow paging.
+
 ### dom0pvh
 > `= `
 
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 243df96..7123931 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -191,11 +191,38 @@ struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 }
 
 #ifdef CONFIG_SHADOW_PAGING
-static bool_t __initdata opt_dom0_shadow;
+bool __initdata opt_dom0_shadow;
 boolean_param("dom0_shadow", opt_dom0_shadow);
-#else
-#define opt_dom0_shadow 0
 #endif
+bool __initdata dom0_pvh;
+
+/*
+ * List of parameters that affect Dom0 creation:
+ *
+ *  - pvh   Create a PVHv2 Dom0.
+ *  - shadowUse shadow paging for Dom0.
+ */
+static void __init parse_dom0_param(char *s)
+{
+char *ss;
+
+do {
+
+ss = strchr(s, ',');
+if ( ss )
+*ss = '\0';
+
+if ( !strcmp(s, "pvh") )
+dom0_pvh = true;
+#ifdef CONFIG_SHADOW_PAGING
+else if ( !strcmp(s, "shadow") )
+opt_dom0_shadow = true;
+#endif
+
+s = ss + 1;
+} while ( ss );
+}
+custom_param("dom0", parse_dom0_param);
 
 static char __initdata opt_dom0_ioports_disable[200] = "";
 string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
@@ -951,7 +978,7 @@ static int __init setup_permissions(struct domain *d)
 return rc;
 }
 
-int __init construct_dom0(
+static int __init construct_dom0_pv(
 struct domain *d,
 const module_t *image, unsigned long image_headroom,
 module_t *initrd,
@@ -1007,13 +1034,6 @@ int __init construct_dom0(
 /* Machine address of next candidate page-table page. */
 paddr_t mpt_alloc;
 
-/* Sanity! */
-BUG_ON(d->domain_id != 0);
-BUG_ON(d->vcpu[0] == NULL);
-BUG_ON(v->is_initialised);
-
-process_pending_softirqs();
-
 printk("*** LOADING DOMAIN 0 ***\n");
 
 d->max_pages = ~0U;
@@ -1655,6 +1675,35 @@ out:
 return rc;
 }
 
+static int __init construct_dom0_pvh(struct domain *d, const module_t *image,
+ unsigned long image_headroom,
+ module_t *initrd,
+ void *(*bootstrap_map)(const module_t *),
+ char *cmdline)
+{
+
+printk("** Building a 

[Xen-devel] [PATCH v6 4/7] xen/x86: parse Dom0 kernel for PVHv2

2017-02-10 Thread Roger Pau Monne
Introduce a helper to parse the Dom0 kernel.

A new helper is also introduced to libelf, that's used to store the destination
vcpu of the domain. This parameter is needed when loading the kernel on a HVM
domain (PVHv2), since hvm_copy_to_guest_phys requires passing the destination
vcpu.

While there, also change image_base and image_start to be of type "void *", and
do the necessary fixup of related functions.

Signed-off-by: Roger Pau Monné 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Ian Jackson 
---
Changes since v5:
 - s/hvm_copy_to_guest_phys_vcpu/hvm_copy_to_guest_phys/.
 - Use void * for image_base and image_start, make the necessary changes.
 - Introduce elf_set_vcpu in order to store the destination vcpu in
   elf_binary, and use it in elf_load_image. This avoids having to override
   current.
 - Style fixes.
 - Round up the position of the modlist/start_info to an aligned address
   depending on the kernel bitness.

Changes since v4:
 - s/hvm/pvh.
 - Use hvm_copy_to_guest_phys_vcpu.

Changes since v3:
 - Change one error message.
 - Indent "out" label by one space.
 - Introduce hvm_copy_to_phys and slightly simplify the code in hvm_load_kernel.

Changes since v2:
 - Remove debug messages.
 - Don't hardcode the number of modules to 1.
---
 xen/arch/x86/bzimage.c|   3 +-
 xen/arch/x86/domain_build.c   | 136 +-
 xen/common/libelf/libelf-loader.c |  13 +++-
 xen/include/asm-x86/bzimage.h |   2 +-
 xen/include/xen/libelf.h  |   6 ++
 5 files changed, 155 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/bzimage.c b/xen/arch/x86/bzimage.c
index 50ebb84..124c386 100644
--- a/xen/arch/x86/bzimage.c
+++ b/xen/arch/x86/bzimage.c
@@ -104,7 +104,8 @@ unsigned long __init bzimage_headroom(char *image_start,
 return headroom;
 }
 
-int __init bzimage_parse(char *image_base, char **image_start, unsigned long 
*image_len)
+int __init bzimage_parse(void *image_base, void **image_start,
+ unsigned long *image_len)
 {
 struct setup_header *hdr = (struct setup_header *)(*image_start);
 int err = bzimage_check(hdr, *image_len);
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index be50b65..01c9348 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -39,6 +39,7 @@
 #include 
 
 #include 
+#include 
 
 static long __initdata dom0_nrpages;
 static long __initdata dom0_min_nrpages;
@@ -1026,7 +1027,7 @@ static int __init construct_dom0_pv(
 unsigned long long value;
 char *image_base = bootstrap_map(image);
 unsigned long image_len = image->mod_end;
-char *image_start = image_base + image_headroom;
+void *image_start = image_base + image_headroom;
 unsigned long initrd_len = initrd ? initrd->mod_end : 0;
 l4_pgentry_t *l4tab = NULL, *l4start = NULL;
 l3_pgentry_t *l3tab = NULL, *l3start = NULL;
@@ -1457,6 +1458,7 @@ static int __init construct_dom0_pv(
 /* Copy the OS image and free temporary buffer. */
 elf.dest_base = (void*)vkern_start;
 elf.dest_size = vkern_end - vkern_start;
+elf_set_vcpu(&elf, v);
 rc = elf_load_binary(&elf);
 if ( rc < 0 )
 {
@@ -2015,12 +2017,136 @@ static int __init pvh_setup_p2m(struct domain *d)
 #undef MB1_PAGES
 }
 
+static int __init pvh_load_kernel(struct domain *d, const module_t *image,
+  unsigned long image_headroom,
+  module_t *initrd, void *image_base,
+  char *cmdline, paddr_t *entry,
+  paddr_t *start_info_addr)
+{
+void *image_start = image_base + image_headroom;
+unsigned long image_len = image->mod_end;
+struct elf_binary elf;
+struct elf_dom_parms parms;
+paddr_t last_addr;
+struct hvm_start_info start_info = { 0 };
+struct hvm_modlist_entry mod = { 0 };
+struct vcpu *v = d->vcpu[0];
+int rc;
+
+if ( (rc = bzimage_parse(image_base, &image_start, &image_len)) != 0 )
+{
+printk("Error trying to detect bz compressed kernel\n");
+return rc;
+}
+
+if ( (rc = elf_init(&elf, image_start, image_len)) != 0 )
+{
+printk("Unable to init ELF\n");
+return rc;
+}
+#ifdef VERBOSE
+elf_set_verbose(&elf);
+#endif
+elf_parse_binary(&elf);
+if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+{
+printk("Unable to parse kernel for ELFNOTES\n");
+return rc;
+}
+
+if ( parms.phys_entry == UNSET_ADDR32 )
+{
+printk("Unable to find XEN_ELFNOTE_PHYS32_ENTRY address\n");
+return -EINVAL;
+}
+
+printk("OS: %s version: %s loader: %s bitness: %s\n", parms.guest_os,
+   parms.guest_ver, parms.loader,
+   elf_64bit(&elf) ? "64-bit" : "32-bit");
+
+/* Copy the OS image and free temporary buffer. */
+elf.dest_base = (void *)(parms.virt_kstart - 

[Xen-devel] [PATCH v6 6/7] xen/x86: Setup PVHv2 Dom0 CPUs

2017-02-10 Thread Roger Pau Monne
Initialize the Dom0 BSP/APs and set up the memory and IO permissions. This also
sets the initial BSP state in order to match the protocol specified in
docs/misc/hvmlite.markdown.

Signed-off-by: Roger Pau Monné 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since v5:
 - Make cpus and i unsigned ints.
 - Use an initializer for cpu_ctx (and remove the memset).
 - Move the clear_bit of vcpu 0 the end of pvh_setup_cpus.
---
 xen/arch/x86/domain_build.c | 61 +
 1 file changed, 61 insertions(+)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 407e479..1ff2ddb 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -41,6 +41,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static long __initdata dom0_nrpages;
 static long __initdata dom0_min_nrpages;
@@ -2142,6 +2143,59 @@ static int __init pvh_load_kernel(struct domain *d, 
const module_t *image,
 return 0;
 }
 
+static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
+ paddr_t start_info)
+{
+struct vcpu *v = d->vcpu[0];
+unsigned int cpu, i;
+int rc;
+/* 
+ * This sets the vCPU state according to the state described in
+ * docs/misc/hvmlite.markdown.
+ */
+vcpu_hvm_context_t cpu_ctx = {
+.mode = VCPU_HVM_MODE_32B,
+.cpu_regs.x86_32.ebx = start_info,
+.cpu_regs.x86_32.eip = entry,
+.cpu_regs.x86_32.cr0 = X86_CR0_PE | X86_CR0_ET,
+.cpu_regs.x86_32.cs_limit = ~0u,
+.cpu_regs.x86_32.ds_limit = ~0u,
+.cpu_regs.x86_32.ss_limit = ~0u,
+.cpu_regs.x86_32.tr_limit = 0x67,
+.cpu_regs.x86_32.cs_ar = 0xc9b,
+.cpu_regs.x86_32.ds_ar = 0xc93,
+.cpu_regs.x86_32.ss_ar = 0xc93,
+.cpu_regs.x86_32.tr_ar = 0x8b,
+};
+
+cpu = v->processor;
+for ( i = 1; i < d->max_vcpus; i++ )
+{
+cpu = cpumask_cycle(cpu, &dom0_cpus);
+setup_dom0_vcpu(d, i, cpu);
+}
+
+rc = arch_set_info_hvm_guest(v, &cpu_ctx);
+if ( rc )
+{
+printk("Unable to setup Dom0 BSP context: %d\n", rc);
+return rc;
+}
+
+rc = setup_permissions(d);
+if ( rc )
+{
+panic("Unable to setup Dom0 permissions: %d\n", rc);
+return rc;
+}
+
+update_domain_wallclock_time(d);
+
+clear_bit(_VPF_down, &v->pause_flags);
+
+return 0;
+}
+
 static int __init construct_dom0_pvh(struct domain *d, const module_t *image,
  unsigned long image_headroom,
  module_t *initrd,
@@ -2170,6 +2224,13 @@ static int __init construct_dom0_pvh(struct domain *d, 
const module_t *image,
 return rc;
 }
 
+rc = pvh_setup_cpus(d, entry, start_info);
+if ( rc )
+{
+printk("Failed to setup Dom0 CPUs: %d\n", rc);
+return rc;
+}
+
 panic("Building a PVHv2 Dom0 is not yet supported.");
 return 0;
 }
-- 
2.10.1 (Apple Git-78)


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 1/7] xen/x86: remove XENFEAT_hvm_pirqs for PVHv2 guests

2017-02-10 Thread Roger Pau Monne
PVHv2 guests, unlike HVM guests, won't have the option to route interrupts
from physical or emulated devices over event channels using PIRQs. This
applies to both DomU and Dom0 PVHv2 guests.

Introduce a new XEN_X86_EMU_USE_PIRQ to notify Xen whether a HVM guest can
route physical interrupts (even from emulated devices) over event channels,
and is thus allowed to use some of the PHYSDEV ops.

Signed-off-by: Roger Pau Monné 
Reviewed-by: Andrew Cooper 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since v5:
 - Introduce a has_pirq macro to match other XEN_X86_EMU_ options, and simplify
   some of the code.

Changes since v3:
 - Update docs.

Changes since v2:
 - Change local variable name to currd instead of d.
 - Use currd where it makes sense.
---
 docs/misc/hvmlite.markdown| 20 
 xen/arch/x86/hvm/hvm.c| 23 ++-
 xen/arch/x86/physdev.c|  4 ++--
 xen/common/kernel.c   |  2 +-
 xen/include/asm-x86/domain.h  |  2 ++
 xen/include/public/arch-x86/xen.h |  4 +++-
 6 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/docs/misc/hvmlite.markdown b/docs/misc/hvmlite.markdown
index 898b8ee..b2557f7 100644
--- a/docs/misc/hvmlite.markdown
+++ b/docs/misc/hvmlite.markdown
@@ -75,3 +75,23 @@ info structure that's passed at boot time (field rsdp_paddr).
 
 Description of paravirtualized devices will come from XenStore, just as it's
 done for HVM guests.
+
+## Interrupts ##
+
+### Interrupts from physical devices ###
+
+Interrupts from physical devices are delivered using native methods; this is
+done in order to take advantage of new hardware-assisted virtualization
+functions, like posted interrupts. This implies that PVHv2 guests with physical
+devices will also have the necessary interrupt controllers in order to manage
+the delivery of interrupts from those devices, using the same interfaces that
+are available on native hardware.
+
+### Interrupts from paravirtualized devices ###
+
+Interrupts from paravirtualized devices are delivered using event channels, see
+[Event Channel Internals][event_channels] for more detailed information about
+event channels. Delivery of those interrupts can be configured in the same way
+as HVM guests, check xen/include/public/hvm/params.h and
+xen/include/public/hvm/hvm_op.h for more information about available delivery
+methods.
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5f72758..9e40865 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3764,10 +3764,12 @@ static long hvm_memory_op(int cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 
 static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
+struct domain *currd = current->domain;
+
 switch ( cmd )
 {
 default:
-if ( !is_pvh_vcpu(current) || !is_hardware_domain(current->domain) )
+if ( !is_pvh_domain(currd) || !is_hardware_domain(currd) )
 return -ENOSYS;
 /* fall through */
 case PHYSDEVOP_map_pirq:
@@ -3775,7 +3777,8 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 case PHYSDEVOP_eoi:
 case PHYSDEVOP_irq_status_query:
 case PHYSDEVOP_get_free_pirq:
-return do_physdev_op(cmd, arg);
+return (has_pirq(currd) || is_pvh_domain(currd)) ?
+do_physdev_op(cmd, arg) : -ENOSYS;
 }
 }
 
@@ -3808,17 +3811,19 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 static long hvm_physdev_op_compat32(
 int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
+struct domain *d = current->domain;
+
 switch ( cmd )
 {
-case PHYSDEVOP_map_pirq:
-case PHYSDEVOP_unmap_pirq:
-case PHYSDEVOP_eoi:
-case PHYSDEVOP_irq_status_query:
-case PHYSDEVOP_get_free_pirq:
-return compat_physdev_op(cmd, arg);
+case PHYSDEVOP_map_pirq:
+case PHYSDEVOP_unmap_pirq:
+case PHYSDEVOP_eoi:
+case PHYSDEVOP_irq_status_query:
+case PHYSDEVOP_get_free_pirq:
+return has_pirq(d) ? compat_physdev_op(cmd, arg) : -ENOSYS;
 break;
 default:
-return -ENOSYS;
+return -ENOSYS;
 break;
 }
 }
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 5a49796..b4cc6a8 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -94,7 +94,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
 int pirq, irq, ret = 0;
 void *map_data = NULL;
 
-if ( domid == DOMID_SELF && is_hvm_domain(d) )
+if ( domid == DOMID_SELF && is_hvm_domain(d) && has_pirq(d) )
 {
 /*
  * Only makes sense for vector-based callback, else HVM-IRQ logic
@@ -265,7 +265,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 if ( ret )
 goto free_domain;
 
-if ( is_hvm_domain(d) )
+if ( is_hvm_domain(d) 

[Xen-devel] [PATCH v6 0/7] Initial PVHv2 Dom0 support

2017-02-10 Thread Roger Pau Monne
Hello,

This is the first batch of the PVHv2 Dom0 support series, which includes
everything up to the point where the ACPI tables for Dom0 are crafted. I've
decided to leave the last part of the series (the one that contains the PCI
config space handlers and other emulation/trapping related code) separate,
in order to focus and ease the review. This is of course not yet functional:
one might be able to partially boot a Dom0 kernel, provided it doesn't try
to access any physical devices and the panic in setup.c is removed.

The full series can also be found on a git branch in my personal git repo:

git://xenbits.xen.org/people/royger/xen.git dom0_hvm_v6

Each patch contains the changelog between versions.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 7/7] xen/x86: setup PVHv2 Dom0 ACPI tables

2017-02-10 Thread Roger Pau Monne
Create a new MADT table that contains the topology exposed to the guest. A
new XSDT table is also created, in order to filter the tables that we want
to expose to the guest, plus the Xen-crafted MADT. This in turn requires Xen
to create a new RSDP that points to the custom XSDT.

Also, regions marked as E820_ACPI or E820_NVS are identity-mapped into the
Dom0 p2m, plus any top-level ACPI tables that should be accessible to Dom0
and reside in reserved regions. This is needed because some memory maps don't
properly account for all the memory used by ACPI, so it's common to find ACPI
tables in reserved regions.

Signed-off-by: Roger Pau Monné 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since v5:
 - s/hvm_copy_to_guest_phys_vcpu/hvm_copy_to_guest_phys/.
 - Move pvh_add_mem_range to previous patch.
 - Add a comment regarding the current limitation to only 1 emulated IO APIC.
 - s/dom0_max_vcpus()/max_vcpus/ in pvh_setup_acpi_madt.
 - Cast structures to void when assigning.
 - Declare banned_tables with the initconst annotation.
 - Expand some comments and messages.
 - Initialize the RSDP local variable.
 - Only provide x2APIC entries in the MADT.

Changes since v4:
 - s/hvm/pvh.
 - Use hvm_copy_to_guest_phys_vcpu.
 - Don't allocate up to E820MAX entries for the Dom0 memory map and instead
   allow pvh_add_mem_range to dynamically grow the memory map.
 - Add a comment about the lack of x2APIC MADT entries.
 - Change acpi_intr_overrides to unsigned int and the max iterator bound to
   UINT_MAX.
 - Set the MADT version as the minimum version between the hardware value and
   our supported version (4).
 - Set the MADT IO APIC ID to the current value of the domain vioapic->id.
 - Use void * when subtracting two pointers.
 - Fix indentation of nr_pages and use PFN_UP instead of DIV_ROUND_UP.
 - Change wording of the pvh_acpi_table_allowed error message.
 - Make j unsigned in pvh_setup_acpi_xsdt.
 - Move initialization of local variables with declarations in
   pvh_setup_acpi_xsdt.
 - Reword the comment about the allocated size of the xsdt custom table.
 - Fix line splitting.
 - Add a comment regarding the layering violation caused by the usage of
   acpi_tb_checksum.
 - Pass IO APIC NMI sources found in the MADT to Dom0.
 - Create x2APIC entries if the native MADT also contains them.
 - s/acpi_intr_overrrides/acpi_intr_overrides/.
 - Make sure the MADT is properly mapped into Dom0, or else Dom0 might not be
   able to access the output of the _MAT method depending on the
   implementation.
 - Get the first ACPI processor ID and use that as the base processor ID of the
   crafted MADT. This is done so that local/x2 APIC NMI entries match with the
   local/x2 APIC objects.

Changes since v3:
 - Use hvm_copy_to_phys in order to copy the tables to Dom0 memory.
 - Return EEXIST for overlapping ranges in hvm_add_mem_range.
 - s/ov/ovr/ for interrupt override parsing functions.
 - Constify intr local variable in acpi_set_intr_ovr.
 - Use structure assignment for type safety.
 - Perform sizeof using local variables in hvm_setup_acpi_madt.
 - Manually set revision of crafted/modified tables.
 - Only map tables to guest that reside in reserved or ACPI memory regions.
 - Copy the RSDP OEM signature to the crafted RSDP.
 - Pair calls to acpi_os_map_memory/acpi_os_unmap_memory.
 - Add memory regions for allowed ACPI tables to the memory map and then
   perform the identity mappings. This avoids having to call 
modify_identity_mmio
   multiple times.
 - Add a FIXME comment regarding the lack of multiple vIO-APICs.

Changes since v2:
 - Completely reworked.
---
 xen/arch/x86/domain_build.c | 434 
 1 file changed, 434 insertions(+)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 1ff2ddb..85de84f 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -38,6 +39,8 @@
 #include 
 #include 
 
+#include 
+
 #include 
 #include 
 #include 
@@ -53,6 +56,12 @@ static long __initdata dom0_max_nrpages = LONG_MAX;
  * */
 #define HVM_VM86_TSS_SIZE   128
 
+static unsigned int __initdata acpi_intr_overrides;
+static struct acpi_madt_interrupt_override __initdata *intsrcovr;
+
+static unsigned int __initdata acpi_nmi_sources;
+static struct acpi_madt_nmi_source __initdata *nmisrc;
+
 /*
  * dom0_mem=[min:,][max:,][]
  * 
@@ -2196,6 +2205,424 @@ static int __init pvh_setup_cpus(struct domain *d, 
paddr_t entry,
 return 0;
 }
 
+static int __init acpi_count_intr_ovr(struct acpi_subtable_header *header,
+ const unsigned long end)
+{
+
+acpi_intr_overrides++;
+return 0;
+}
+
+static int __init acpi_set_intr_ovr(struct acpi_subtable_header *header,
+const unsigned long end)
+{
+const struct 

[Xen-devel] [PATCH] xen-netback: vif counters from int/long to u64

2017-02-10 Thread Mart van Santen
This patch fixes an issue where the types of the counters in the queue(s)
and the interface are out of sync (queue counters are int, interface
counters are long), causing incorrect reporting of tx/rx values for the
vif interface and unclear counter overflows.
This patch changes both sets of counters to the u64 type.

Signed-off-by: Mart van Santen 
---
 drivers/net/xen-netback/common.h| 8 
 drivers/net/xen-netback/interface.c | 8 
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 3ce1f7d..530586b 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -113,10 +113,10 @@ struct xenvif_stats {
 * A subset of struct net_device_stats that contains only the
 * fields that are updated in netback.c for each queue.
 */
-   unsigned int rx_bytes;
-   unsigned int rx_packets;
-   unsigned int tx_bytes;
-   unsigned int tx_packets;
+   u64 rx_bytes;
+   u64 rx_packets;
+   u64 tx_bytes;
+   u64 tx_packets;
 
/* Additional stats used by xenvif */
unsigned long rx_gso_checksum_fixup;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 5795213..50fa169 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -221,10 +221,10 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
struct xenvif *vif = netdev_priv(dev);
struct xenvif_queue *queue = NULL;
-   unsigned long rx_bytes = 0;
-   unsigned long rx_packets = 0;
-   unsigned long tx_bytes = 0;
-   unsigned long tx_packets = 0;
+   u64 rx_bytes = 0;
+   u64 rx_packets = 0;
+   u64 tx_bytes = 0;
+   u64 tx_packets = 0;
unsigned int index;
 
	spin_lock(&vif->lock);
-- 
2.1.4




Re: [Xen-devel] [PATCH] x86/time: tsc_check_writability() may need to be run a second time

2017-02-10 Thread Joao Martins
On 02/10/2017 11:17 AM, Andrew Cooper wrote:
> On 10/02/17 11:11, Joao Martins wrote:
>> On 02/10/2017 11:03 AM, Jan Beulich wrote:
>>> While we shouldn't remove its current invocation, we need to re-run it
>>> for the case that the X86_FEATURE_TSC_RELIABLE feature flag has been
>>> cleared, in order to avoid using the TSC rendezvous function in case
>>> the TSC can't be written.
>>>
>>> Signed-off-by: Jan Beulich 
>> FWIW,
> 
> Independent reviews are always worth it.  Please continue!

Nice, Thanks!

Joao



[Xen-devel] [ovmf test] 105679: all pass - PUSHED

2017-02-10 Thread osstest service owner
flight 105679 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105679/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 8d127a5a3a23d960644d1bd78891ae7d55b66544
baseline version:
 ovmf 41ccec58e07376fe3086d3fb4cf6290c53ca2303

Last test of basis   105658  2017-02-09 05:46:04 Z1 days
Testing same since   105679  2017-02-10 02:15:38 Z0 days1 attempts


People who touched revisions under test:
  Dandan Bi 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=8d127a5a3a23d960644d1bd78891ae7d55b66544
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 
8d127a5a3a23d960644d1bd78891ae7d55b66544
+ branch=ovmf
+ revision=8d127a5a3a23d960644d1bd78891ae7d55b66544
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.8-testing
+ '[' x8d127a5a3a23d960644d1bd78891ae7d55b66544 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : 

[Xen-devel] [PATCH] x86/bitops: Force __scanbit() to be always inline

2017-02-10 Thread Andrew Cooper
It turns out that GCCs 4.9.2 and 6.3.0 instantiate __scanbit() in three
translation units, but never reference the result.  All real uses of
__scanbit() are already suitably inline.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 

Forcing __scanbit() to be always_inline appears to cause GCC to reorder some
of its basic blocks, so there is a moderately large perturbation to the
affected functions.  As far as I can see, even the register scheduling is the
same, and the delta is just changes in the nops used to align the basic
blocks.
---
 xen/include/asm-x86/bitops.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-x86/bitops.h b/xen/include/asm-x86/bitops.h
index fd494e8..0f18645 100644
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -334,7 +334,7 @@ extern unsigned int __find_first_zero_bit(
 extern unsigned int __find_next_zero_bit(
 const unsigned long *addr, unsigned int size, unsigned int offset);
 
-static inline unsigned int __scanbit(unsigned long val, unsigned int max)
+static always_inline unsigned int __scanbit(unsigned long val, unsigned int max)
 {
 if ( __builtin_constant_p(max) && max == BITS_PER_LONG )
 alternative_io("bsf %[in],%[out]; cmovz %[max],%k[out]",
-- 
2.1.4




[Xen-devel] [xen-unstable-smoke test] 105692: tolerable trouble: broken/fail/pass - PUSHED

2017-02-10 Thread osstest service owner
flight 105692 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105692/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64   5 xen-buildfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  6f6d3b10ec8168e2a78cf385d89803397f116397
baseline version:
 xen  ac6e7fd7a4826c14b85b9da59fc800a3a1bd3fd0

Last test of basis   105670  2017-02-09 16:03:22 Z0 days
Testing same since   105692  2017-02-10 10:01:03 Z0 days1 attempts


People who touched revisions under test:
  Razvan Cojocaru 
  Roger Pau Monné 
  Tamas K Lengyel 

jobs:
 build-amd64  pass
 build-arm64  fail
 build-armhf  pass
 build-amd64-libvirt  pass
 build-arm64-pvopsfail
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass





Pushing revision :

+ branch=xen-unstable-smoke
+ revision=6f6d3b10ec8168e2a78cf385d89803397f116397
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
6f6d3b10ec8168e2a78cf385d89803397f116397
+ branch=xen-unstable-smoke
+ revision=6f6d3b10ec8168e2a78cf385d89803397f116397
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.8-testing
+ '[' x6f6d3b10ec8168e2a78cf385d89803397f116397 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : 

[Xen-devel] [qemu-upstream-4.8-testing test] 105678: regressions - trouble: blocked/broken/fail/pass

2017-02-10 Thread osstest service owner
flight 105678 qemu-upstream-4.8-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105678/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2   3 host-install(3)broken REGR. vs. 102941
 test-amd64-i386-freebsd10-i386  3 host-install(3)  broken REGR. vs. 102941
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken REGR. 
vs. 102941
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken REGR. 
vs. 102941
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail REGR. vs. 102941

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64   5 xen-buildfail   never pass
 build-arm64-xsm   5 xen-buildfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu46e1db013347a3356ac05b83c0243313d74d2193
baseline version:
 qemuu4220231eb22235e757d269722b9f6a594fbcb70f

Last test of basis   102941  2016-12-05 12:51:08 Z   66 days
Testing same since   105678  2017-02-09 23:14:16 Z0 days1 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  fail
 

Re: [Xen-devel] [PATCH] x86/time: tsc_check_writability() may need to be run a second time

2017-02-10 Thread Andrew Cooper
On 10/02/17 11:11, Joao Martins wrote:
> On 02/10/2017 11:03 AM, Jan Beulich wrote:
>> While we shouldn't remove its current invocation, we need to re-run it
>> for the case that the X86_FEATURE_TSC_RELIABLE feature flag has been
>> cleared, in order to avoid using the TSC rendezvous function in case
>> the TSC can't be written.
>>
>> Signed-off-by: Jan Beulich 
> FWIW,

Independent reviews are always worth it.  Please continue!

~Andrew



Re: [Xen-devel] [PATCH] x86/time: tsc_check_writability() may need to be run a second time

2017-02-10 Thread Joao Martins
On 02/10/2017 11:03 AM, Jan Beulich wrote:
> While we shouldn't remove its current invocation, we need to re-run it
> for the case that the X86_FEATURE_TSC_RELIABLE feature flag has been
> cleared, in order to avoid using the TSC rendezvous function in case
> the TSC can't be written.
> 
> Signed-off-by: Jan Beulich 

FWIW,

Reviewed-by: Joao Martins 



Re: [Xen-devel] [PATCH] x86/time: tsc_check_writability() may need to be run a second time

2017-02-10 Thread Andrew Cooper
On 10/02/17 11:03, Jan Beulich wrote:
> While we shouldn't remove its current invocation, we need to re-run it
> for the case that the X86_FEATURE_TSC_RELIABLE feature flag has been
> cleared, in order to avoid using the TSC rendezvous function in case
> the TSC can't be written.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 



[Xen-devel] [PATCH] x86/time: tsc_check_writability() may need to be run a second time

2017-02-10 Thread Jan Beulich
While we shouldn't remove its current invocation, we need to re-run it
for the case that the X86_FEATURE_TSC_RELIABLE feature flag has been
cleared, in order to avoid using the TSC rendezvous function in case
the TSC can't be written.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1643,6 +1643,14 @@ static int __init verify_tsc_reliability
 }
 
 /*
+ * Re-run the TSC writability check if it didn't run to completion, as
+ * X86_FEATURE_TSC_RELIABLE may have been cleared by now. This is needed
+ * for determining which rendezvous function to use (below).
+ */
+if ( !disable_tsc_sync )
+tsc_check_writability();
+
+/*
  * While with constant-rate TSCs the scale factor can be shared, when TSCs
  * are not marked as 'reliable', re-sync during rendezvous.
  */





Re: [Xen-devel] [PATCH] x86emul/test: fix 32-bit build

2017-02-10 Thread Andrew Cooper
On 10/02/17 07:38, Jan Beulich wrote:
> Commit 7603eb256 ("x86emul: use eflags definitions in x86-defns.h")
> removed the EFLG_* definitions without updating the use sites (which
> - oddly enough - happen to all be in 32-bit only code paths).
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 



Re: [Xen-devel] [PATCH] x86emul: always init mmval

2017-02-10 Thread Andrew Cooper
On 10/02/17 07:39, Jan Beulich wrote:
> ... to avoid buggy read/write sizes becoming info leaks.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 



[Xen-devel] [qemu-upstream-4.6-testing test] 105677: trouble: broken/fail/pass

2017-02-10 Thread osstest service owner
flight 105677 qemu-upstream-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105677/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd  3 host-install(3)broken REGR. vs. 102708
 test-amd64-i386-libvirt-xsm   3 host-install(3)broken REGR. vs. 102708
 test-amd64-i386-xl3 host-install(3)broken REGR. vs. 102708

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 102708
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 102708
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 102708
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 102708
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 102708

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu15c0f1500fc078b6411d2c86842cb2f3fd7393c0
baseline version:
 qemuuba9175c5bde6796851d3b9d888ee488fd0257d05

Last test of basis   102708  2016-11-29 06:57:36 Z   73 days
Testing same since   105677  2017-02-09 23:14:01 Z0 days1 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops  pass
 build-armhf-pvops  pass
 build-i386-pvops   pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 

Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document

2017-02-10 Thread Paul Durrant
> -Original Message-
> From: Roger Pau Monne
> Sent: 10 February 2017 09:49
> To: Stefano Stabellini 
> Cc: Julien Grall ; xen-devel  de...@lists.xenproject.org>; Edgar Iglesias (edgar.igles...@xilinx.com)
> ; Steve Capper ; Punit
> Agrawal ; Wei Chen ;
> Campbell Sean ; Shanker Donthineni
> ; Jiandi An ;
> manish.ja...@caviumnetworks.com; alistair.fran...@xilinx.com; Andrew
> Cooper ; Anshul Makkar
> ; Paul Durrant 
> Subject: Re: [early RFC] ARM PCI Passthrough design document
> 
> On Wed, Feb 01, 2017 at 10:50:49AM -0800, Stefano Stabellini wrote:
> > On Wed, 1 Feb 2017, Roger Pau Monné wrote:
> > > On Wed, Jan 25, 2017 at 06:53:20PM +, Julien Grall wrote:
> > > > Hi Stefano,
> > > >
> > > > On 24/01/17 20:07, Stefano Stabellini wrote:
> > > > > On Tue, 24 Jan 2017, Julien Grall wrote:
> > > > When using an ECAM-like host bridge, I don't think it will be an issue
> > > > to have both DOM0 and Xen accessing configuration space at the same
> > > > time. Although, we need to define who is doing what: in the general
> > > > case, DOM0 should not touch an assigned PCI device. The only possible
> > > > interaction would be resetting a device (see my answer below).
> > >
> > > Iff Xen is really going to perform the reset of passthrough devices,
> > > then I don't see any reason to expose those devices to Dom0 at all,
> > > IMHO you should hide them from ACPI and ideally prevent Dom0 from
> > > interacting with them using the PCI configuration space (although that
> > > would require trapping on accesses to the PCI config space, which AFAIK
> > > you would like to avoid).
> >
> > Right! A much cleaner solution! If we are going to have Xen handle ECAM
> > and emulating PCI host bridges, then we should go all the way and have
> > Xen do everything about PCI.
> 
> Replying here because this thread has become so long that it's hard to find
> a good place to put this information.
> 
> I've recently been told (f2f) that more complex passthrough (like Nvidia
> vGPU or Intel XenGT) works in a slightly different way, which seems to be a
> bit incompatible with what we are proposing. I've been told that Nvidia
> vGPU passthrough requires a driver in Dom0 (closed-source Nvidia code
> AFAIK), and that upon loading this driver a bunch of virtual functions
> appear out of the blue on the PCI bus.
> 
> Now, if we completely hide passed-through devices from Dom0, it would be
> impossible to load this driver, and thus to make the virtual functions
> appear. I would like someone who's more familiar with this to comment, so
> I'm adding Paul and Anshul to the conversation.
> 
> To give them some context: we are currently discussing completely hiding
> passthrough PCI devices from Dom0, and having Xen perform the reset of the
> device. This would apply to PVH and ARM. Can you comment on whether such an
> approach would work with things like vGPU passthrough?

Neither NVIDIA vGPU nor Intel GVT-g are pass-through. They both use emulation 
to synthesize GPU devices for guests and then use the actual GPU to service the 
commands sent by the guest driver to the virtual GPU. So, I think they fall 
outside the discussion here.
AMD MxGPU is somewhat different in that it is an almost-SRIOV solution. I say 
'almost' because the VFs are not truly independent and so some interception of 
accesses to certain registers is required, so that arbitration can be applied, 
or they can be blocked. In this case a dedicated driver in dom0 is required, 
and I believe it needs access to both the PF and all the VFs to function 
correctly. However, once initial set-up is done, I think the VFs could then be 
hidden from dom0. The PF is never passed through and so there should be no 
issue in leaving it visible to dom0.

There is a further complication with GVT-d (Intel's term for GPU pass-through) 
because I believe there is also some initial set-up required and some 
supporting emulation (e.g. Intel's guest driver expects there to be an ISA 
bridge along with the GPU) which may need access to the real GPU. It is also 
possible that, once this set-up is done, the GPU can then be hidden from dom0, 
but I'm not sure because I was not involved with that code.

Full pass-through of NVIDIA and AMD GPUs does not involve access from dom0 at 
all though, so I don't think there should be any complication there.

Does that all make sense?

  Paul

> 
> Roger.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] qemu-upstream triggering OOM killer

2017-02-10 Thread Jan Beulich
>>> On 09.02.17 at 23:24,  wrote:
> On Thu, 9 Feb 2017, Jan Beulich wrote:
>> the recent qemuu update results in the produced binary triggering the
>> OOM killer on the first system I tried the updated code on. Is there
>> anything known in this area? Are there any hints as to finding out
>> what is going wrong?
> 
> Do you mean QEMU upstream (from qemu.org) or qemu-xen/staging (that
> hasn't changed much in the last couple of months)?

The latter. The diff to my last snapshot (from early January) is 6.6Mb
though - I wouldn't call this "hasn't changed much". Looks like Anthony
did update to 2.8.0 in early January (a day or two after I had last
snapshotted it).

> Do you know if it's something Xen specific?

Not so far. It appears to happen when grub clears the screen
before displaying its graphical menu, so I'd rather suspect an issue
with a graphics-related change (the one you pointed out isn't one).

Jan




Re: [Xen-devel] kernel BUG at block/bio.c:1786 -- (xen_blkif_schedule on the stack)

2017-02-10 Thread Roger Pau Monné
On Fri, Feb 10, 2017 at 09:27:46AM +0100, Håkon Alstadheim wrote:
> I just tried to provoke the bug, after applying your patch and
> re-enabling tmem, but it seems there are more variables in the equation
> to make a crash happen. Before this week the VM in question would
> reliably crash/hang on boot during the past month and through several
> re-boots of the dom0.
> 
> I have slightly reduced memory allotment to several VMs, which might be
> keeping the bug from triggering. I will not be actively trying to
> provoke this any more, but I'll keep you posted if it re-surfaces.

OK, let us know if you are able to re-trigger this.

Roger.



Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document

2017-02-10 Thread Roger Pau Monné
On Wed, Feb 01, 2017 at 10:50:49AM -0800, Stefano Stabellini wrote:
> On Wed, 1 Feb 2017, Roger Pau Monné wrote:
> > On Wed, Jan 25, 2017 at 06:53:20PM +, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 24/01/17 20:07, Stefano Stabellini wrote:
> > > > On Tue, 24 Jan 2017, Julien Grall wrote:
> > > When using an ECAM-like host bridge, I don't think it will be an issue
> > > to have both DOM0 and Xen accessing configuration space at the same
> > > time. Although, we need to define who is doing what: in the general
> > > case, DOM0 should not touch an assigned PCI device. The only possible
> > > interaction would be resetting a device (see my answer below).
> > 
> > Iff Xen is really going to perform the reset of passthrough devices, then I
> > don't see any reason to expose those devices to Dom0 at all, IMHO you should
> > hide them from ACPI and ideally prevent Dom0 from interacting with them 
> > using
> > the PCI configuration space (although that would require trapping on 
> > accesses
> > to the PCI config space, which AFAIK you would like to avoid).
> 
> Right! A much cleaner solution! If we are going to have Xen handle ECAM
> and emulating PCI host bridges, then we should go all the way and have
> Xen do everything about PCI.

Replying here because this thread has become so long that it's hard to find a
good place to put this information.

I've recently been told (f2f) that more complex passthrough (like Nvidia vGPU
or Intel XenGT) works in a slightly different way, which seems to be a bit
incompatible with what we are proposing. I've been told that Nvidia vGPU
passthrough requires a driver in Dom0 (closed-source Nvidia code AFAIK), and
that upon loading this driver a bunch of virtual functions appear out of the
blue on the PCI bus.

Now, if we completely hide passed-through devices from Dom0, it would be
impossible to load this driver, and thus to make the virtual functions appear.
I would like someone who's more familiar with this to comment, so I'm adding
Paul and Anshul to the conversation.

To give them some context: we are currently discussing completely hiding
passthrough PCI devices from Dom0, and having Xen perform the reset of the
device. This would apply to PVH and ARM. Can you comment on whether such an
approach would work with things like vGPU passthrough?

Roger.




Re: [Xen-devel] [PATCH] X86/vmx: Dump PIR and vIRR before ASSERT()

2017-02-10 Thread Jan Beulich
>>> On 07.02.17 at 00:32,  wrote:
> Commit c7bdecae42 ("x86/apicv: fix RTC periodic timer and apicv issue")
> added an assertion that intack.vector is the highest priority vector. But
> according to osstest, the assertion fails sometimes. More discussion can
> be found in the thread
> (https://lists.xenproject.org/archives/html/xen-devel/2017-01/msg01019.html).
> 
> The assertion failure is hard to reproduce. In order to root-cause the
> issue, this patch adds logs to dump the PIR and vIRR when the failure takes
> place. It should be reverted once the root cause is found.
> 
> Signed-off-by: Chao Gao 

Jun, Kevin - can you ack this? Or was Chao expected to make any
changes? I'd like to see this go in rather sooner than later, so we
can get it back out well before 4.9 is going to settle.

Jan




Re: [Xen-devel] [PATCH] X86/vmx: Dump PIR and vIRR before ASSERT()

2017-02-10 Thread Jan Beulich
>>> On 08.02.17 at 08:49,  wrote:
> Curious, how was this issue initially caught? Would the same practice make
> sure the next failure catches our eye?

I guess Andrew just happened to look at the logs of a spurious
failure of some test.

Jan




[Xen-devel] [distros-debian-jessie test] 68543: tolerable trouble: blocked/broken/pass

2017-02-10 Thread Platform Team regression test user
flight 68543 distros-debian-jessie real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/68543/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-armhf-jessie-netboot-pygrub  1 build-check(1) blocked n/a
 build-arm64   2 hosts-allocate   broken never pass
 build-arm64-pvops 2 hosts-allocate   broken never pass
 build-arm64-pvops 3 capture-logs broken never pass
 build-arm64   3 capture-logs broken never pass
 test-armhf-armhf-armhf-jessie-netboot-pygrub 11 migrate-support-check fail never pass
 test-armhf-armhf-armhf-jessie-netboot-pygrub 12 saverestore-support-check fail never pass

baseline version:
 flight   68507

jobs:
 build-amd64  pass
 build-arm64  broken  
 build-armhf  pass
 build-i386   pass
 build-amd64-pvops  pass
 build-arm64-pvops  broken
 build-armhf-pvops  pass
 build-i386-pvops pass
 test-amd64-amd64-amd64-jessie-netboot-pvgrub pass
 test-amd64-i386-i386-jessie-netboot-pvgrub   pass
 test-amd64-i386-amd64-jessie-netboot-pygrub  pass
 test-arm64-arm64-armhf-jessie-netboot-pygrub blocked 
 test-armhf-armhf-armhf-jessie-netboot-pygrub pass
 test-amd64-amd64-i386-jessie-netboot-pygrub  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.




Re: [Xen-devel] [PATCH] build/printf: fix incorrect format specifiers

2017-02-10 Thread Jan Beulich
>>> On 09.02.17 at 17:35,  wrote:
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -596,8 +596,8 @@ int show_mca_info(int inited, struct cpuinfo_x86 *c)
>  };
>  
>  snprintf(prefix, ARRAY_SIZE(prefix),
> - g_type != mcheck_unset ? XENLOG_WARNING "CPU%i: "
> - : XENLOG_INFO,
> + g_type != mcheck_unset ? XENLOG_WARNING "CPU%i: ":
> +  XENLOG_INFO "CPU%i: ",

At the very least there is a blank missing ahead of the colon. But I
think we generally prefer to align the colon with the question
mark, despite otherwise placing operators last on a line when
needing to break it. Plus I don't see why you want the format
string duplicated - just use

snprintf(prefix, ARRAY_SIZE(prefix), "%sCPU%i: ",
 g_type != mcheck_unset ? XENLOG_WARNING : XENLOG_INFO,
 smp_processor_id());

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -3284,7 +3284,7 @@ gnttab_release_mappings(
>  
>  ref = map->ref;
>  
> -gdprintk(XENLOG_INFO, "Grant release (%hu) ref:(%hu) "
> +gdprintk(XENLOG_INFO, "Grant release (%u) ref:(%u) "
>  "flags:(%x) dom:(%hu)\n",

I have always been puzzled by these h modifiers; I don't think it's
useful to have even for the domain ID (which after all we print
with %d almost everywhere else).

Since you're touching these anyway, I'd like to also bring up the
question of decimal vs hex: The larger the amount of grants in use,
the less useful I consider decimal numbers being logged. So perhaps
these should switch to %#x at once.

Jan




[Xen-devel] [qemu-upstream-4.7-testing test] 105676: regressions - FAIL

2017-02-10 Thread osstest service owner
flight 105676 qemu-upstream-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105676/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm 15 guest-start/debian.repeat fail REGR. vs. 102701
 test-armhf-armhf-xl-credit2  16 guest-start.2fail REGR. vs. 102701
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 102709

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail  like 102709
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail  like 102709
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 102709
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail  like 102709

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-rtds  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64-xsm   5 xen-buildfail   never pass
 build-arm64   5 xen-buildfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 build-arm64-pvops 5 kernel-build fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu7eaaf4ba68fab40f1945d761438bdaa44fbf37d7
baseline version:
 qemuue27a2f17bc2d9d7f8afce2c5918f4f23937b268e

Last test of basis   102709  2016-11-29 07:53:18 Z   73 days
Testing same since   105676  2017-02-09 23:13:26 Z0 days1 attempts


People who touched revisions under test:
  Gerd Hoffmann 
  Li Qiang 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  

Re: [Xen-devel] [PATCH] x86emul/test: fix 32-bit build

2017-02-10 Thread Jan Beulich
>>> On 10.02.17 at 09:15,  wrote:
> On Fri, Feb 10, 2017 at 12:38:51AM -0700, Jan Beulich wrote:
>> Commit 7603eb256 ("x86emul: use eflags definitions in x86-defns.h")
>> removed the EFLG_* definitions without updating the use sites (which
>> - oddly enough - happen to all be in 32-bit only code paths).
>> 
>> Signed-off-by: Jan Beulich 
> 
> Reviewed-by: Wei Liu 

Thanks.

> And I also notice that this directory is not built by default, hence it
> slipped my pre-commit build test (which does a 32-bit build as well) and
> the osstest build test.

And we shouldn't try to build this by default, as it definitely
requires a much newer tool chain than what our base requirement
is.

Jan




Re: [Xen-devel] Xen on ARM IRQ latency and scheduler overhead

2017-02-10 Thread Dario Faggioli
On Thu, 2017-02-09 at 16:54 -0800, Stefano Stabellini wrote:
> Hi all,
> 
Hi,

> I have run some IRQ latency measurements on Xen on ARM on a Xilinx
> ZynqMP board (four Cortex A53 cores, GICv2).
> 
> Dom0 has 1 vcpu pinned to cpu0, DomU has 1 vcpu pinned to cpu2.
> Dom0 is Ubuntu. DomU is an ad-hoc baremetal app to measure interrupt
> latency: https://github.com/edgarigl/tbm
> 
Right, interesting use case. I'm glad to see there's some interest in
it, and am happy to help investigating, and trying to make things
better.

> I modified the app to use the phys_timer instead of the virt_timer. You
> can build it with:
> 
> make CFG=configs/xen-guest-irq-latency.cfg 
> 
Ok, do you (or anyone) mind explaining in a little more detail what the
app tries to measure and how it does that?

As a matter of fact, I'm quite familiar with the scenario (I've spent a
lot of time playing with cyclictest,
https://rt.wiki.kernel.org/index.php/Cyclictest ), but I don't immediately
understand the way the timer is programmed, what is supposed to be in the
various variables/registers, what 'freq' actually is, etc.

> These are the results, in nanosec:
> 
>     AVG MIN MAX WARM MAX
> 
> NODEBUG no WFI  1890    1800    3170    2070
> NODEBUG WFI 4850    4810    7030    4980
> NODEBUG no WFI credit2  2217    2090    3420    2650
> NODEBUG WFI credit2 8080    7890    10320   8300
> 
> DEBUG no WFI    2252    2080    3320    2650
> DEBUG WFI   6500    6140    8520    8130
> DEBUG WFI, credit2  8050    7870    10680   8450
> 
> DEBUG means Xen DEBUG build.
>
Mmm, Credit2 (with WFI) behaves almost the same (and even a bit
better in some cases) with debug enabled. While in Credit1, debug on or
off makes quite a bit of difference, AFAICT, especially in the WFI case.

That looks a bit strange, as I'd have expected the effect to be similar
(there are actually quite a few debug checks in Credit2, maybe even
more than in Credit1).

> WARM MAX is the maximum latency, taking out the first few interrupts to
> warm the caches.
> WFI is the ARM and ARM64 sleeping instruction, trapped and emulated by
> Xen by calling vcpu_block.
> 
> As you can see, depending on whether the guest issues a WFI or not while
> waiting for interrupts, the results change significantly. Interestingly,
> credit2 does worse than credit1 in this area.
> 
This is with current staging, right? If yes: in Credit1 on ARM you never
stop the scheduler tick, like we do on x86. This means the system is, in
general, "more awake" than under Credit2, which does not have a periodic
tick (and FWIW, also "more awake" than Credit1 on x86, as far as the
scheduler is concerned, at least).

Whether or not this impacts your measurements significantly, I don't
know, as it depends on a bunch of factors. What we know is that it has
enough impact to trigger the RCU bug Julien discovered (in a different
scenario, I know), so I would not rule it out.

I can try sending a quick patch for disabling the tick when a CPU is
idle, but I'd need your help in testing it.

> Trying to figure out where those 3000-4000ns of difference between the
> WFI and non-WFI cases come from, I wrote a patch to zero the latency
> introduced by xen/arch/arm/domain.c:schedule_tail. That saves about
> 1000ns. There are no other arch specific context switch functions worth
> optimizing.
> 
Yeah. It would be interesting to see a trace, but we still don't have
that for ARM. :-(

> We are down to 2000-3000ns. Then, I started investigating the scheduler.
> I measured how long it takes to run "vcpu_unblock": 1050ns, which is
> significant.
>
How did you measure that, if I can ask?

> I don't know what is causing the remaining 1000-2000ns, but
> I bet on another scheduler function. Do you have any suggestions on
> which one?
> 
Well, when a vcpu is woken up, it is put in a runqueue, and a pCPU is
poked to go get and run it. The other thing you may want to try to
measure is how much time passes between when the vCPU becomes runnable
and is added to the runqueue, and when it is actually put to run.

Again, this would be visible in tracing. :-/

> Assuming that the problem is indeed the scheduler, one workaround that
> we could introduce today would be to avoid calling vcpu_unblock on
> guest WFI and call vcpu_yield instead. This change makes things
> significantly better:
> 
>  AVG MIN MAX WARM MAX
> DEBUG WFI (yield, no block)  2900    2190    5130    5130
> DEBUG WFI (yield, no block) credit2  3514    2280    6180    5430
> 
> Is that a reasonable change to make? Would it cause significantly more
> power consumption in Xen (because xen/arch/arm/domain.c:idle_loop might
> not be called anymore)?
> 
Exactly. So, I think that, as Linux has 'idle=poll', it is conceivable
to have something similar in Xen, and if we do, I guess it can be
implemented as you suggest.

But, no, I 

Re: [Xen-devel] kernel BUG at block/bio.c:1786 -- (xen_blkif_schedule on the stack)

2017-02-10 Thread Håkon Alstadheim
I just tried to provoke the bug, after applying your patch and
re-enabling tmem, but it seems there are more variables in the equation
to make a crash happen. Before this week the VM in question would
reliably crash/hang on boot during the past month and through several
re-boots of the dom0.

I have slightly reduced memory allotment to several VMs, which might be
keeping the bug from triggering. I will not be actively trying to
provoke this any more, but I'll keep you posted if it re-surfaces.

In the mean-time I'll try to learn more about how my system uses memory
(looking into "grants").

On 9 Feb 2017 at 18:30, Roger Pau Monné wrote:
> On Mon, Feb 06, 2017 at 12:31:20AM +0100, Håkon Alstadheim wrote:
>> I get the BUG below in dom0 when trying to start a Windows 10 domU (HVM,
>> with some PV drivers installed). Below is "xl info", then comes dmesg
>> output, and finally the domU config attached at the end.
>>
>> This domain is started very rarely, so it may have been broken for some
>> time. All my other domains are Linux. This message is just a data-point
>> for whoever is interested, with possibly more data if anybody wants to
>> ask me anything. NOT expecting quick resolution of this :-/ .
>>
>> The domain boots part of the way, screen resolution gets changed and
>> then it keeps spinning for ~ 5 seconds before stopping.
> [...]
>> [339809.663061] br0: port 12(vif7.0) entered blocking state
>> [339809.663063] br0: port 12(vif7.0) entered disabled state
>> [339809.663123] device vif7.0 entered promiscuous mode
>> [339809.664885] IPv6: ADDRCONF(NETDEV_UP): vif7.0: link is not ready
>> [339809.742522] br0: port 13(vif7.0-emu) entered blocking state
>> [339809.742523] br0: port 13(vif7.0-emu) entered disabled state
>> [339809.742573] device vif7.0-emu entered promiscuous mode
>> [339809.744386] br0: port 13(vif7.0-emu) entered blocking state
>> [339809.744388] br0: port 13(vif7.0-emu) entered forwarding state
>> [339864.059095] xen-blkback: backend/vbd/7/768: prepare for reconnect
>> [339864.138002] xen-blkback: backend/vbd/7/768: using 1 queues, protocol
>> 1 (x86_64-abi)
>> [339864.241039] xen-blkback: backend/vbd/7/832: prepare for reconnect
>> [339864.337997] xen-blkback: backend/vbd/7/832: using 1 queues, protocol
>> 1 (x86_64-abi)
>> [339875.245306] vif vif-7-0 vif7.0: Guest Rx ready
>> [339875.245345] IPv6: ADDRCONF(NETDEV_CHANGE): vif7.0: link becomes ready
>> [339875.245391] br0: port 12(vif7.0) entered blocking state
>> [339875.245395] br0: port 12(vif7.0) entered forwarding state
>> [339894.122151] [ cut here ]
>> [339894.122169] kernel BUG at block/bio.c:1786!
>> [339894.122173] invalid opcode:  [#1] SMP
>> [339894.122176] Modules linked in: xt_physdev iptable_filter ip_tables
>> x_tables nfsd auth_rpcgss oid_registry nfsv4 dns_resolver nfsv3 nfs_acl
>> binfmt_misc intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp
>> crc32c_intel pcspkr serio_raw i2c_i801 i2c_smbus iTCO_wdt
>> iTCO_vendor_support amdgpu drm_kms_helper syscopyarea bcache input_leds
>> sysfillrect sysimgblt fb_sys_fops ttm drm uas shpchp ipmi_ssif rtc_cmos
>> acpi_power_meter wmi tun snd_hda_codec_realtek snd_hda_codec_generic
>> snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm snd_timer snd
>> usbip_host usbip_core pktcdvd tmem lpc_ich xen_wdt nct6775 hwmon_vid
>> dm_zero dm_thin_pool dm_persistent_data dm_bio_prison dm_service_time
>> dm_round_robin dm_queue_length dm_multipath dm_log_userspace cn
>> virtio_pci virtio_scsi virtio_blk virtio_console virtio_balloon
>> [339894.122233]  xts gf128mul aes_x86_64 cbc sha512_generic
>> sha256_generic sha1_generic libiscsi scsi_transport_iscsi virtio_net
>> virtio_ring virtio tg3 libphy e1000 fuse overlay nfs lockd grace sunrpc
>> jfs multipath linear raid10 raid1 raid0 dm_raid raid456
>> async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq
>> dm_snapshot dm_bufio dm_crypt dm_mirror dm_region_hash dm_log dm_mod
>> hid_sunplus hid_sony hid_samsung hid_pl hid_petalynx hid_monterey
>> hid_microsoft hid_logitech ff_memless hid_gyration hid_ezkey hid_cypress
>> hid_chicony hid_cherry hid_a4tech sl811_hcd xhci_plat_hcd ohci_pci
>> ohci_hcd uhci_hcd aic94xx lpfc qla2xxx aacraid sx8 DAC960 hpsa cciss
>> 3w_9xxx 3w_ mptsas mptfc scsi_transport_fc mptspi mptscsih mptbase
>> atp870u dc395x qla1280 imm parport dmx3191d sym53c8xx gdth initio BusLogic
>> [339894.122325]  arcmsr aic7xxx aic79xx sg pdc_adma sata_inic162x
>> sata_mv sata_qstor sata_vsc sata_uli sata_sis sata_sx4 sata_nv sata_via
>> sata_svw sata_sil24 sata_sil sata_promise pata_sis usbhid led_class igb
>> ptp dca i2c_algo_bit ehci_pci ehci_hcd xhci_pci megaraid_sas xhci_hcd
>> [339894.122350] CPU: 3 PID: 23514 Comm: 7.hda-0 Tainted: GW
>>  4.9.8-gentoo #1
>> [339894.122353] Hardware name: ASUSTeK COMPUTER INC. Z10PE-D8
>> WS/Z10PE-D8 WS, BIOS 3304 06/22/2016
>> [339894.122358] task: 880244b55b00 task.stack: c90042fcc000
>> [339894.122361] RIP: e030:[]  []
>> 

Re: [Xen-devel] [RFC XEN PATCH 15/16] tools/libxl: handle return code of libxl__qmp_initializations()

2017-02-10 Thread Wei Liu
On Fri, Feb 10, 2017 at 08:11:20AM +, Wei Liu wrote:
> On Fri, Feb 10, 2017 at 10:37:44AM +0800, Haozhong Zhang wrote:
> > On 02/09/17 10:13 +, Wei Liu wrote:
> > > On Thu, Feb 09, 2017 at 10:47:01AM +0800, Haozhong Zhang wrote:
> > > > On 02/08/17 10:31 +, Wei Liu wrote:
> > > > > On Wed, Feb 08, 2017 at 02:07:26PM +0800, Haozhong Zhang wrote:
> > > > > > On 01/27/17 17:11 -0500, Konrad Rzeszutek Wilk wrote:
> > > > > > > On Mon, Oct 10, 2016 at 08:32:34AM +0800, Haozhong Zhang wrote:
> > > > > > > > If any error code is returned when creating a domain, stop the 
> > > > > > > > domain
> > > > > > > > creation.
> > > > > > >
> > > > > > > This looks like it is a bug-fix that can be spun off from this
> > > > > > > patchset?
> > > > > > >
> > > > > >
> > > > > > Yes, if everyone considers it's really a bug and the fix does not
> > > > > > cause compatibility problem (e.g. xl w/o this patch does not abort 
> > > > > > the
> > > > > > domain creation if it fails to connect to QEMU VNC port).
> > > > > >
> > > > >
> > > > > I'm in two minds here. If the failure to connect is caused by some
> > > > > temporary glitch in QEMU and we're sure it will eventually succeed,
> > > > > there is no need to abort domain creation. If the failure to connect
> > > > > is due to a permanent glitch, we should abort.
> > > > >
> > > > 
> > > > Sorry, I should say "*query* QEMU VNC port" instead of *connect*.
> > > > 
> > > > libxl__qmp_initializations() currently does following tasks.
> > > > 1/ Create a QMP socket.
> > > > 
> > > >   I think all failures in 1/ should be considered permanent. A failure
> > > >   here not only fails the following tasks, but also breaks the device
> > > >   hotplug path, which needs to cooperate with QEMU.
> > > > 
> > > > 2/ If 1/ succeeds, query qmp about parameters of serial port and fill
> > > >   them in xenstore.
> > > > 3/ If 1/ and 2/ succeed, set and query qmp about parameters (password,
> > > >   address, port) of VNC and fill them in xenstore.
> > > > 
> > > >   If we assume Xen always sends the correct QMP commands and
> > > >   parameters, the QMP failures in 2/ and 3/ will be caused by QMP
> > > >   socket errors (see qmp_next()), for which it is hard to tell whether
> > > >   they are permanent or temporary. However, if a missing serial port
> > > >   or VNC is considered as not affecting the execution of the guest
> > > >   domain, we may ignore failures here.
> > > > 
> > > > > OOI how did you discover this issue? That could be the key to
> > > > > understanding the issue here.
> > > > 
> > > > The next patch adds code in libxl__qmp_initializations() to query qmp
> > > > about vNVDIMM parameters (e.g. the base gpfn which is calculated by
> > > > QEMU) and returns an error code if it fails. While I was developing
> > > > that patch, I found xl didn't stop even when bugs in my QEMU patches
> > > > made the code in my Xen patch fail.
> > > > 
> > > 
> > > Right, this should definitely be fatal.
> > > 
> > > > Maybe we could let libxl__qmp_initializations() report whether a
> > > > failure can be tolerated. For non-tolerable failures (e.g. those in
> > > > 1/), xl should stop. For tolerable failures (e.g. those in 2/ and
> > > > 3/), xl can continue, but it needs to warn about them.
> > > > 
> > > 
> > > Yes, we can do that. It's an internal function; we can change things
> > > as we see fit.
> > > 
> > > I would suggest you only make vNVDIMM failure fatal as a start.
> > > 
> > 
> > I'll send a patch separate from this series to implement the above
> > without the NVDIMM parts.
> > 
> 
> Sorry, I'm not sure I follow, correct me if I'm wrong: I think we're
> fine with this function as-is because we don't want to make VNC / serial
> error fatal, right?
> 
> (not going to work today so please allow me some time to read your
> reply)
> 
> Wei.
> 
> 
> 
> > Thanks,
> > Haozhong

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC XEN PATCH 15/16] tools/libxl: handle return code of libxl__qmp_initializations()

2017-02-10 Thread Haozhong Zhang

On 02/10/17 08:11 +, Wei Liu wrote:

On Fri, Feb 10, 2017 at 10:37:44AM +0800, Haozhong Zhang wrote:

On 02/09/17 10:13 +, Wei Liu wrote:
> On Thu, Feb 09, 2017 at 10:47:01AM +0800, Haozhong Zhang wrote:
> > On 02/08/17 10:31 +, Wei Liu wrote:
> > > On Wed, Feb 08, 2017 at 02:07:26PM +0800, Haozhong Zhang wrote:
> > > > On 01/27/17 17:11 -0500, Konrad Rzeszutek Wilk wrote:
> > > > > On Mon, Oct 10, 2016 at 08:32:34AM +0800, Haozhong Zhang wrote:
> > > > > > If any error code is returned when creating a domain, stop the 
domain
> > > > > > creation.
> > > > >
> > > > > This looks like it is a bug-fix that can be spun off from this
> > > > > patchset?
> > > > >
> > > >
> > > > Yes, if everyone considers it's really a bug and the fix does not
> > > > cause a compatibility problem (e.g. xl w/o this patch does not abort
> > > > the domain creation if it fails to connect to the QEMU VNC port).
> > > >
> > >
> > > > I'm in two minds here. If the failure to connect is caused by some
> > > > temporary glitch in QEMU and we're sure it will eventually succeed,
> > > > there is no need to abort domain creation. If the failure is due to
> > > > a permanent problem, we should abort.
> > >
> >
> > Sorry, I should say "*query* QEMU VNC port" instead of *connect*.
> >
> > libxl__qmp_initializations() currently does the following tasks.
> > 1/ Create a QMP socket.
> >
> >   I think all failures in 1/ should be considered permanent. Such a
> >   failure not only breaks the following tasks, but also breaks device
> >   hotplug, which needs to cooperate with QEMU.
> >
> > 2/ If 1/ succeeds, query qmp about parameters of serial port and fill
> >   them in xenstore.
> > 3/ If 1/ and 2/ succeed, set and query qmp about parameters (password,
> >   address, port) of VNC and fill them in xenstore.
> >
> >   If we assume Xen always sends the correct QMP commands and
> >   parameters, the QMP failures in 2/ and 3/ will be caused by QMP
> >   socket errors (see qmp_next()), for which it is hard to tell
> >   whether they are permanent or temporary. However, if missing serial
> >   port or VNC information is considered not to affect the execution
> >   of the guest domain, we may ignore failures here.
> >
> > > OOI how did you discover this issue? That could be the key to
> > > understanding the issue here.
> >
> > The next patch adds code in libxl__qmp_initializations() to query qmp
> > about vNVDIMM parameters (e.g. the base gpfn which is calculated by
> > QEMU) and returns an error code if it fails. While I was developing
> > that patch, I found xl didn't stop even when bugs in my QEMU patches
> > made the code in my Xen patch fail.
> >
>
> Right, this should definitely be fatal.
>
> > Maybe we could let libxl__qmp_initializations() report whether a
> > failure can be tolerated. For non-tolerable failures (e.g. those in
> > 1/), xl should stop. For tolerable failures (e.g. those in 2/ and
> > 3/), xl can continue, but it needs to warn about them.
> >
>
> Yes, we can do that. It's an internal function; we can change things
> as we see fit.
>
> I would suggest you only make vNVDIMM failure fatal as a start.
>

I'll send a patch separate from this series to implement the above
without the NVDIMM parts.



Sorry, I'm not sure I follow, correct me if I'm wrong: I think we're
fine with this function as-is because we don't want to make VNC / serial
error fatal, right?



I misunderstood and thought xl should fail when encountering errors in
1/, but you indicate it's fine to leave it as-is, so no patch will be
needed until NVDIMM support is added.

Haozhong


(not going to work today so please allow me some time to read your
reply)

Wei.




Re: [Xen-devel] [PATCH] x86emul/test: fix 32-bit build

2017-02-10 Thread Wei Liu
On Fri, Feb 10, 2017 at 12:38:51AM -0700, Jan Beulich wrote:
> Commit 7603eb256 ("x86emul: use eflags definitions in x86-defns.h")
> removed the EFLG_* definitions without updating the use sites (which
> - oddly enough - happen to all be in 32-bit only code paths).
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Wei Liu 

Ah sorry! I would have sworn I did several mechanical replaces, but
apparently something went wrong...

And I also noticed that this directory is not built by default, hence it
slipped both my pre-commit build test (which does a 32-bit build as
well) and the osstest build test.

Wei.



[Xen-devel] [PATCH] x86emul: always init mmval

2017-02-10 Thread Jan Beulich
... to avoid buggy read/write sizes becoming info leaks.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2671,6 +2671,8 @@ x86_emulate(
 ea.reg = decode_register(modrm_rm, &_regs,
  (d & ByteOp) && !rex_prefix);
 
+memset(mmvalp, 0xaa /* arbitrary */, sizeof(*mmvalp));
+
 /* Decode and fetch the source operand: register, memory or immediate. */
 switch ( d & SrcMask )
 {





[Xen-devel] [PATCH] x86emul/test: fix 32-bit build

2017-02-10 Thread Jan Beulich
Commit 7603eb256 ("x86emul: use eflags definitions in x86-defns.h")
removed the EFLG_* definitions without updating the use sites (which
- oddly enough - happen to all be in 32-bit only code paths).

Signed-off-by: Jan Beulich 

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -497,7 +497,7 @@ int main(int argc, char **argv)
 if ( (rc != X86EMUL_OKAY) ||
  (*res != 0x1112) ||
  (regs.ecx != 0x) ||
- !(regs.eflags & EFLG_ZF) ||
+ !(regs.eflags & X86_EFLAGS_ZF) ||
  (regs.eip != (unsigned long)&instr[2]) )
 goto fail;
 #else
@@ -571,11 +571,11 @@ int main(int argc, char **argv)
 
 #ifndef __x86_64__
 printf("%-40s", "Testing daa/das (all inputs)...");
-/* Bits 0-7: AL; Bit 8: EFLG_AF; Bit 9: EFLG_CF; Bit 10: DAA vs. DAS. */
+/* Bits 0-7: AL; Bit 8: EFLAGS.AF; Bit 9: EFLAGS.CF; Bit 10: DAA vs. DAS. */
 for ( i = 0; i < 0x800; i++ )
 {
-regs.eflags  = (i & 0x200) ? EFLG_CF : 0;
-regs.eflags |= (i & 0x100) ? EFLG_AF : 0;
+regs.eflags  = (i & 0x200) ? X86_EFLAGS_CF : 0;
+regs.eflags |= (i & 0x100) ? X86_EFLAGS_AF : 0;
 if ( i & 0x400 )
 __asm__ (
 "pushf; and $0xffee,(%%esp); or %1,(%%esp); popf; das; "
@@ -588,24 +588,24 @@ int main(int argc, char **argv)
 "pushf; popl %1"
 : "=a" (bcdres_native), "=r" (regs.eflags)
 : "0" (i & 0xff), "1" (regs.eflags) );
-bcdres_native |= (regs.eflags & EFLG_PF) ? 0x1000 : 0;
-bcdres_native |= (regs.eflags & EFLG_ZF) ? 0x800 : 0;
-bcdres_native |= (regs.eflags & EFLG_SF) ? 0x400 : 0;
-bcdres_native |= (regs.eflags & EFLG_CF) ? 0x200 : 0;
-bcdres_native |= (regs.eflags & EFLG_AF) ? 0x100 : 0;
+bcdres_native |= (regs.eflags & X86_EFLAGS_PF) ? 0x1000 : 0;
+bcdres_native |= (regs.eflags & X86_EFLAGS_ZF) ? 0x800 : 0;
+bcdres_native |= (regs.eflags & X86_EFLAGS_SF) ? 0x400 : 0;
+bcdres_native |= (regs.eflags & X86_EFLAGS_CF) ? 0x200 : 0;
+bcdres_native |= (regs.eflags & X86_EFLAGS_AF) ? 0x100 : 0;
 
 instr[0] = (i & 0x400) ? 0x2f: 0x27; /* daa/das */
-regs.eflags  = (i & 0x200) ? EFLG_CF : 0;
-regs.eflags |= (i & 0x100) ? EFLG_AF : 0;
+regs.eflags  = (i & 0x200) ? X86_EFLAGS_CF : 0;
+regs.eflags |= (i & 0x100) ? X86_EFLAGS_AF : 0;
 regs.eip = (unsigned long)&instr[0];
 regs.eax = (unsigned char)i;
 rc = x86_emulate(&ctxt, &emulops);
 bcdres_emul  = regs.eax;
-bcdres_emul |= (regs.eflags & EFLG_PF) ? 0x1000 : 0;
-bcdres_emul |= (regs.eflags & EFLG_ZF) ? 0x800 : 0;
-bcdres_emul |= (regs.eflags & EFLG_SF) ? 0x400 : 0;
-bcdres_emul |= (regs.eflags & EFLG_CF) ? 0x200 : 0;
-bcdres_emul |= (regs.eflags & EFLG_AF) ? 0x100 : 0;
+bcdres_emul |= (regs.eflags & X86_EFLAGS_PF) ? 0x1000 : 0;
+bcdres_emul |= (regs.eflags & X86_EFLAGS_ZF) ? 0x800 : 0;
+bcdres_emul |= (regs.eflags & X86_EFLAGS_SF) ? 0x400 : 0;
+bcdres_emul |= (regs.eflags & X86_EFLAGS_CF) ? 0x200 : 0;
+bcdres_emul |= (regs.eflags & X86_EFLAGS_AF) ? 0x100 : 0;
 if ( (rc != X86EMUL_OKAY) || (regs.eax > 255) ||
  (regs.eip != (unsigned long)&instr[1]) )
 goto fail;




Re: [Xen-devel] [RFC XEN PATCH 15/16] tools/libxl: handle return code of libxl__qmp_initializations()

2017-02-10 Thread Wei Liu
On Fri, Feb 10, 2017 at 10:37:44AM +0800, Haozhong Zhang wrote:
> On 02/09/17 10:13 +, Wei Liu wrote:
> > On Thu, Feb 09, 2017 at 10:47:01AM +0800, Haozhong Zhang wrote:
> > > On 02/08/17 10:31 +, Wei Liu wrote:
> > > > On Wed, Feb 08, 2017 at 02:07:26PM +0800, Haozhong Zhang wrote:
> > > > > On 01/27/17 17:11 -0500, Konrad Rzeszutek Wilk wrote:
> > > > > > On Mon, Oct 10, 2016 at 08:32:34AM +0800, Haozhong Zhang wrote:
> > > > > > > If any error code is returned when creating a domain, stop the 
> > > > > > > domain
> > > > > > > creation.
> > > > > >
> > > > > > This looks like it is a bug-fix that can be spun off from this
> > > > > > patchset?
> > > > > >
> > > > >
> > > > > Yes, if everyone considers it's really a bug and the fix does not
> > > > > cause a compatibility problem (e.g. xl w/o this patch does not abort
> > > > > the domain creation if it fails to connect to the QEMU VNC port).
> > > > >
> > > >
> > > > I'm in two minds here. If the failure to connect is caused by some
> > > > temporary glitch in QEMU and we're sure it will eventually succeed,
> > > > there is no need to abort domain creation. If the failure is due to
> > > > a permanent problem, we should abort.
> > > >
> > > 
> > > Sorry, I should say "*query* QEMU VNC port" instead of *connect*.
> > > 
> > > libxl__qmp_initializations() currently does the following tasks.
> > > 1/ Create a QMP socket.
> > > 
> > >   I think all failures in 1/ should be considered permanent. Such a
> > >   failure not only breaks the following tasks, but also breaks device
> > >   hotplug, which needs to cooperate with QEMU.
> > > 
> > > 2/ If 1/ succeeds, query qmp about parameters of serial port and fill
> > >   them in xenstore.
> > > 3/ If 1/ and 2/ succeed, set and query qmp about parameters (password,
> > >   address, port) of VNC and fill them in xenstore.
> > > 
> > >   If we assume Xen always sends the correct QMP commands and
> > >   parameters, the QMP failures in 2/ and 3/ will be caused by QMP
> > >   socket errors (see qmp_next()), for which it is hard to tell
> > >   whether they are permanent or temporary. However, if missing serial
> > >   port or VNC information is considered not to affect the execution
> > >   of the guest domain, we may ignore failures here.
> > > 
> > > > OOI how did you discover this issue? That could be the key to
> > > > understanding the issue here.
> > > 
> > > The next patch adds code in libxl__qmp_initializations() to query qmp
> > > about vNVDIMM parameters (e.g. the base gpfn which is calculated by
> > > QEMU) and returns an error code if it fails. While I was developing
> > > that patch, I found xl didn't stop even when bugs in my QEMU patches
> > > made the code in my Xen patch fail.
> > > 
> > 
> > Right, this should definitely be fatal.
> > 
> > > Maybe we could let libxl__qmp_initializations() report whether a
> > > failure can be tolerated. For non-tolerable failures (e.g. those in
> > > 1/), xl should stop. For tolerable failures (e.g. those in 2/ and
> > > 3/), xl can continue, but it needs to warn about them.
> > > 
> > 
> > Yes, we can do that. It's an internal function; we can change things
> > as we see fit.
> > 
> > I would suggest you only make vNVDIMM failure fatal as a start.
> > 
> 
> I'll send a patch separate from this series to implement the above
> without the NVDIMM parts.
> 

Sorry, I'm not sure I follow, correct me if I'm wrong: I think we're
fine with this function as-is because we don't want to make VNC / serial
error fatal, right?

(not going to work today so please allow me some time to read your
reply)

Wei.



> Thanks,
> Haozhong
